---
datasets:
- When-Does-Reasoning-Matter/general-reasoning-ift-pairs
- When-Does-Reasoning-Matter/math-reasoning-ift-pairs
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- generated_from_trainer
---

# When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance (Qwen2.5-0.5B-ift)

[arXiv:2509.22193](https://arxiv.org/abs/2509.22193)

This model was trained as part of the paper [When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance](https://arxiv.org/pdf/2509.22193). It belongs to a collection of **General and Math-specific student models** distilled from Instruction-Fine-Tuned (IFT) or Reasoning answers generated by [Qwen/Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B).

**Abstract:** Large Language Models (LLMs) with reasoning capabilities have achieved state-of-the-art performance on a wide range of tasks. Despite its empirical success, the tasks and model scales at which reasoning becomes effective, as well as its training and inference costs, remain underexplored. In this work, we rely on a synthetic data distillation framework to conduct a large-scale supervised study. We compare Instruction Fine-Tuning (IFT) and reasoning models of varying sizes, on a wide range of math-centric and general-purpose tasks, evaluating both multiple-choice and open-ended formats. Our analysis reveals that reasoning consistently improves model performance, often matching or surpassing significantly larger IFT systems. Notably, while IFT remains Pareto-optimal in training and inference costs, reasoning models become increasingly valuable as model size scales, overcoming IFT performance limits on reasoning-intensive and open-ended tasks.

---

## Paper

Read the full paper on Hugging Face: [When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance](https://huggingface.co/papers/2509.22193)

## Project Page

Explore the project and other related models on the Hugging Face organization page: [When Does Reasoning Matter?](https://huggingface.co/when-does-reasoning-matter)

---

## Datasets

These models were trained on the **largest set of IFT and Reasoning answer pairs**:

- **General dataset**: [general-reasoning-ift-pairs](https://huggingface.co/datasets/When-Does-Reasoning-Matter/general-reasoning-ift-pairs)
- **Math dataset**: [math-reasoning-ift-pairs](https://huggingface.co/datasets/When-Does-Reasoning-Matter/math-reasoning-ift-pairs)

---

## Available Models
| General IFT Models | General Reasoning Models | Math IFT Models | Math Reasoning Models |
|---|---|---|---|
| Qwen2.5-0.5B-ift | Qwen2.5-0.5B-reasoning | Qwen2.5-0.5B-math-ift | Qwen2.5-0.5B-math-reasoning |
| Qwen2.5-1.5B-ift | Qwen2.5-1.5B-reasoning | Qwen2.5-1.5B-math-ift | Qwen2.5-1.5B-math-reasoning |
| Qwen2.5-3B-ift | Qwen2.5-3B-reasoning | Qwen2.5-3B-math-ift | Qwen2.5-3B-math-reasoning |
| Qwen2.5-7B-ift | Qwen2.5-7B-reasoning | Qwen2.5-7B-math-ift | Qwen2.5-7B-math-reasoning |
| Qwen2.5-14B-ift | Qwen2.5-14B-reasoning | Qwen2.5-14B-math-ift | Qwen2.5-14B-math-reasoning |
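
## Usage

A minimal inference sketch with the `transformers` library (matching the `pipeline_tag: text-generation` above). The repo id `When-Does-Reasoning-Matter/Qwen2.5-0.5B-ift` is assumed from this card's title and organization name:

```python
from transformers import pipeline

# Repo id assumed from the card title and organization; adjust for other sizes/variants.
model_id = "When-Does-Reasoning-Matter/Qwen2.5-0.5B-ift"

# Build a text-generation pipeline for the distilled IFT student model.
generator = pipeline("text-generation", model=model_id)

# Chat-style input; the pipeline applies the model's chat template.
messages = [{"role": "user", "content": "What is 17 * 23?"}]
output = generator(messages, max_new_tokens=128)
print(output[0]["generated_text"])
```

Swap the repo id for any model in the table above (e.g. a `-reasoning` variant) to compare IFT and reasoning answers on the same prompt.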
---

## Citation

If you use this model in your work, please cite: **[When Does Reasoning Matter?](https://arxiv.org/pdf/2509.22193)**

```bibtex
@misc{boizard2025doesreasoningmattercontrolled,
      title={When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance},
      author={Nicolas Boizard and Hippolyte Gisserot-Boukhlef and Kevin El-Haddad and Céline Hudelot and Pierre Colombo},
      year={2025},
      eprint={2509.22193},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.22193},
}
```