nielsr (HF Staff) committed · Commit 58eec9c · verified · 1 parent: ee43e7c

Improve model card for Qwen2.5-0.5B-ift with paper abstract, HF paper link, and project page link


This PR enhances the model card for the `Qwen2.5-0.5B-ift` model by:

- Updating the main title to the full paper title for clarity.
- Adding a summary of the paper "When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance" from its abstract.
- Including a direct link to the Hugging Face paper page: `https://huggingface.co/papers/2509.22193` in a new section, complementing the existing arXiv link.
- Providing an explicit link to the overarching project page: `https://huggingface.co/when-does-reasoning-matter`.

These additions improve the model's discoverability and provide more comprehensive information to users.
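The changes above mostly reorder and extend the YAML front matter of `README.md` (the block between the two `---` markers), which Hub tooling parses to populate `library_name`, `pipeline_tag`, `tags`, and the dataset links. A minimal stdlib-only sketch of how that block can be read — the real Hub parser uses a full YAML implementation, and `front_matter` here is a hypothetical helper handling only the simple key/list subset this card uses:

```python
README = """\
---
datasets:
- When-Does-Reasoning-Matter/general-reasoning-ift-pairs
- When-Does-Reasoning-Matter/math-reasoning-ift-pairs
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- generated_from_trainer
---

# When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance (Qwen2.5-0.5B-ift)
"""

def front_matter(text: str) -> dict:
    # Front matter is the block between the first two '---' separators.
    # Minimal parser for plain "key: value" and "- item" lines only;
    # real Hub tooling uses a full YAML parser.
    _, block, _ = text.split("---", 2)
    meta, key = {}, None
    for line in block.strip().splitlines():
        if line.startswith("- "):
            meta[key].append(line[2:].strip())
        else:
            key, _, value = line.partition(":")
            key = key.strip()
            # A key with no inline value starts a list (e.g. "tags:").
            meta[key] = value.strip() if value.strip() else []
    return meta

meta = front_matter(README)
print(meta["library_name"])   # transformers
print(meta["pipeline_tag"])   # text-generation
print(meta["tags"])           # ['generated_from_trainer']
```

After this PR, `library_name` and the `tags` list sit alongside `pipeline_tag` in alphabetical key order, which is what the reordered hunk below reflects.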

Files changed (1): README.md (+17 −8)
README.md CHANGED
@@ -1,16 +1,16 @@
 ---
-library_name: transformers
-tags:
-- generated_from_trainer
 datasets:
 - When-Does-Reasoning-Matter/general-reasoning-ift-pairs
 - When-Does-Reasoning-Matter/math-reasoning-ift-pairs
 language:
 - en
+library_name: transformers
 pipeline_tag: text-generation
+tags:
+- generated_from_trainer
 ---

-# When Does Reasoning Matter?
+# When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance (Qwen2.5-0.5B-ift)

 <p align="left">
 <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/62be186a5f59ff2320e6e32b/GjJ15tY7-F4bqR96FN4pd.png" alt="Dataset Icon" width="180"/>
@@ -22,14 +22,23 @@ pipeline_tag: text-generation
 </a>
 </p>

-
-This model was trained as part of the paper [When Does Reasoning Matter?](https://arxiv.org/pdf/2509.22193)
+This model was trained as part of the paper [When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance](https://arxiv.org/pdf/2509.22193).
 It belongs to a collection of **General and Math-specific student models** distilled from Instruction-Fine-Tuned (IFT) or Reasoning answers generated by [Qwen/Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B).

+**Abstract:** Large Language Models (LLMs) with reasoning capabilities have achieved state-of-the-art performance on a wide range of tasks. Despite its empirical success, the tasks and model scales at which reasoning becomes effective, as well as its training and inference costs, remain underexplored. In this work, we rely on a synthetic data distillation framework to conduct a large-scale supervised study. We compare Instruction Fine-Tuning (IFT) and reasoning models of varying sizes, on a wide range of math-centric and general-purpose tasks, evaluating both multiple-choice and open-ended formats. Our analysis reveals that reasoning consistently improves model performance, often matching or surpassing significantly larger IFT systems. Notably, while IFT remains Pareto-optimal in training and inference costs, reasoning models become increasingly valuable as model size scales, overcoming IFT performance limits on reasoning-intensive and open-ended tasks.
+
 <img src="https://huggingface.co/api/resolve-cache/models/When-Does-Reasoning-Matter/Qwen2.5-0.5B-ift/733797fee2fdd300e1a0453d368250327fe4cc44/results.png?%2FWhen-Does-Reasoning-Matter%2FQwen2.5-0.5B-ift%2Fresolve%2Fmain%2Fresults.png=&etag=%22d36dedfbca764a8ac9a7a5ebc043ca53f5ee4966%22" alt="results" width="600"/>

 ---

+## Paper
+Read the full paper on Hugging Face: [When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance](https://huggingface.co/papers/2509.22193)
+
+## Project Page
+Explore the project and other related models on the Hugging Face organization page: [When Does Reasoning Matter?](https://huggingface.co/when-does-reasoning-matter)
+
+---
+
 ## Datasets

 These models were trained on the **largest set of IFT and Reasoning answer pairs**:
@@ -50,7 +59,7 @@ These models were trained on the **largest set of IFT and Reasoning answer pairs
 <th>IFT Models</th>
 <th>Reasoning Models</th>
 <th>IFT Models</th>
-<th>Reasoning Models</th>
+<th>Reasoning Models</th>
 </tr>
 </thead>
 <tbody>
@@ -101,4 +110,4 @@ If you use this dataset in your work, please cite: **[When Does Reasoning Matter
 primaryClass={cs.CL},
 url={https://arxiv.org/abs/2509.22193},
 }
-```
+```