---
license: apache-2.0
base_model:
- HuggingFaceTB/SmolLM3-3B
- TroglodyteDerivations/symbolic-math-qwen2p5-1p5b-lora
- Kevinmastascusa/qwen2p5-math-1p5b-merged
- Kevinmastascusa/symbolic-math-qwen2p5-1p5b-lora
tags:
- gsm8k
- llm
- model_accuracy
- fine-tuning
- merged_model
- gsm8k-style-llm-math-problem-solving
- mathematical-reasoning-and-word-problems
datasets:
- openai/gsm8k
---

## Model Descriptions

Models: HuggingFaceTB/SmolLM3-3B | Symbolic-Math-Qwen2.5-1.5B-LoRA | Qwen2.5-Math-1.5B-Merged

Qwen2.5-Math-1.5B-Merged is a fine-tuned version of Qwen2.5-1.5B, optimized for solving mathematical word problems using chain-of-thought reasoning. The model was trained with LoRA adapters to enhance its mathematical reasoning capabilities while maintaining the base model's general language understanding. For reference, HuggingFaceTB/SmolLM3-3B scored 87.0% on a 100-sample GSM8K evaluation (single run).

## Citation

```bibtex
@software{symbolic_math_qwen2p5_1p5b_lora,
  title  = {HuggingFaceTB/SmolLM3-3B \& Symbolic-Math-Qwen2.5-1.5B-LoRA: Enhanced Mathematical Reasoning Model},
  author = {TroglodyteDerivations},
  year   = {2025},
  note   = {qwen2p5-math-1p5b-merged: fine-tuned for GSM8K mathematical problem solving}
}
```
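
## Usage

A minimal usage sketch for the merged checkpoint, assuming it loads through the standard `transformers` causal-LM API. The repo ID is taken from the card above; the chain-of-thought prompt format and generation settings are illustrative assumptions, not settings published with the model.

```python
# Minimal sketch: load the merged checkpoint and solve a GSM8K-style word
# problem with a chain-of-thought prompt. Prompt format and generation
# settings are assumptions, not published with the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kevinmastascusa/qwen2p5-math-1p5b-merged"  # repo ID from the card above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision fits on a single GPU
    device_map="auto",
)

question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)
prompt = f"Question: {question}\nLet's think step by step.\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```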
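
## Evaluation

The accuracy figure above implies an exact-match check on final numeric answers. Below is a hedged sketch of such an evaluation on 100 GSM8K test samples; it reuses `model` and `tokenizer` from the usage sketch, and the `extract_final_number` and `solve` helpers are illustrative, not the exact script behind the reported 87.0%.

```python
# Sketch of an exact-match evaluation matching the "87.0% on 100 samples"
# setup described above: score 100 GSM8K test questions by comparing final
# numeric answers. Assumes `model` and `tokenizer` from the usage sketch;
# answer extraction follows GSM8K's "#### <answer>" convention.
import re
from datasets import load_dataset

def extract_final_number(text):
    # Reference answers end with "#### <answer>"; model outputs may not,
    # so fall back to the last number that appears in the text.
    match = re.search(r"####\s*(-?[\d,\.]+)", text)
    if match:
        value = match.group(1)
    else:
        numbers = re.findall(r"-?\d[\d,]*\.?\d*", text)
        value = numbers[-1] if numbers else None
    return value.replace(",", "") if value else None

def solve(question):
    # Same chain-of-thought prompt as the usage sketch above.
    prompt = f"Question: {question}\nLet's think step by step.\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

test_set = load_dataset("openai/gsm8k", "main", split="test").select(range(100))

correct = sum(
    extract_final_number(solve(ex["question"])) == extract_final_number(ex["answer"])
    for ex in test_set
)
print(f"Accuracy: {correct / len(test_set):.1%}")
```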