nielsr (HF Staff) committed · verified
Commit c10d55c · Parent: 0839be7

Improve model card: add license, sample usage, performance table, and acknowledgements


This PR significantly improves the model card by:
1. Adding the `license` to the YAML metadata (`apache-2.0`), as indicated in the associated GitHub repository.
2. Including a "Sample Usage" section with a `transformers` pipeline example, making the model easier to use for quick inference.
3. Expanding the "Performance" section to include Table 2 from the paper's official GitHub repository, providing a more comprehensive view of the model's capabilities.
4. Adding an "Acknowledgement" section to recognize the foundational work used in this project.
5. Refining the overall structure for improved readability.

Files changed (1): README.md (+73 −6)

README.md:
@@ -1,19 +1,58 @@
---
+ base_model:
+ - Qwen/Qwen2.5-1.5B-Instruct
datasets:
- yolay/RAIF-ComplexInstruction-Qwen
library_name: transformers
pipeline_tag: text-generation
- base_model:
- - Qwen/Qwen2.5-1.5B-Instruct
+ license: apache-2.0
---

This model is the official implementation of the paper "[Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models](https://arxiv.org/abs/2506.01413)". Code and data are available at [https://github.com/yuleiqin/RAIF](https://github.com/yuleiqin/RAIF).

Existing large language models (LLMs) face challenges in following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve the capabilities of LLMs. However, we find that vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints to identify their relationships across hierarchies of types and dimensions.

- To this end, we propose a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we start from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate a steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to an 8B LLM.
+ To this end, we propose RAIF, a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we start from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate a steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to an 8B LLM. Evaluation on OOD constraints also confirms the generalizability of RAIF.

- The model Qwen2.5-1.5B is our model optimized for advanced instruction following under complex instructions. It corresponds to **Qwen2.5-1.5B-Instruct (Ours)** in Table 1.
+ ## Sample Usage
+
+ You can use the model with the `transformers` library:
+
+ ```python
+ from transformers import pipeline
+ import torch
+
+ pipe = pipeline(
+     "text-generation",
+     model="yolay/RAIF-Qwen2.5-1.5B-Instruct",
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "Write a short story about a robot who discovers music, but make sure the story is exactly 3 sentences long and includes the word 'serendipity'."},
+ ]
+ prompt = pipe.tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ outputs = pipe(
+     prompt,
+     max_new_tokens=256,
+     do_sample=True,
+     temperature=0.7,
+     top_k=50,
+     top_p=0.95
+ )
+ print(outputs[0]["generated_text"])
+ ```
+
+ ## Performance
+
+ The model Qwen2.5-1.5B is our model optimized for advanced instruction following under complex instructions. It corresponds to **Qwen2.5-1.5B-Instruct (Ours)** in Table 1.

**Table 1** Performance on seven instruction benchmarks. Best/2nd best are marked **bold**/<u>underlined</u>.

@@ -49,9 +88,37 @@
| DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
| DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

- The model Qwen2.5-1.5B is our model optimized for advanced instruction following under complex instructions. It corresponds to **Qwen2.5-1.5B-Instruct (Ours)** in Table 1.
+ **Table 2** Performance on ComplexBench (Qwen2.5-7B-Instruct). Best/2nd best are marked **bold**/<u>underlined</u>. OD, SC, CNFR, FC, and SR stand for Oracle Decomposition, Self-Consistency, Conifer, FollowComplex, and Self-Refine.
+
+ | Category | ND | I/O | OD | SC | CNFR | FC | SR | Ours |
+ |-----------------------|----|-----------|-----------|-------|-------|-------|-----------|-----------|
+ | And | 1 | __85.85__ | 84.27 | 84.03 | 75.10 | 84.77 | 85.66 | **86.57** |
+ | **Chain** | | | | | | | | |
+ | | 1 | 72.18 | __74.68__ | 73.54 | 60.95 | 66.27 | **75.25** | 73.96 |
+ | | 2 | 70.56 | 72.70 | 69.63 | 64.43 | 70.66 | __73.07__ | **76.88** |
+ | *Avg.* | - | 70.96 | 73.18 | 70.57 | 63.59 | 69.60 | __73.59__ | **76.18** |
+ | **Selection** | | | | | | | | |
+ | | 1 | **77.25** | __76.61__ | 72.08 | 60.52 | 71.67 | 69.61 | 73.39 |
+ | | 2 | 65.61 | __71.83__ | 68.23 | 53.25 | 61.96 | 64.34 | **72.92** |
+ | | 3 | __63.39__ | **68.45** | 56.13 | 46.04 | 51.70 | 58.67 | 60.75 |
+ | *Avg.* | - | 65.67 | **70.49** | 65.83 | 51.92 | 60.92 | 62.69 | __69.16__ |
+ | **Selection & Chain** | | | | | | | | |
+ | | 2 | __65.64__ | **65.94** | 60.81 | 47.33 | 61.07 | 52.01 | 61.06 |
+ | | 3 | 59.70 | **65.77** | 64.08 | 48.53 | 57.65 | 60.41 | __65.00__ |
+ | *Avg.* | - | 62.68 | **65.85** | 62.44 | 47.93 | 59.36 | 56.20 | __63.03__ |
+ | **Overall** | - | 74.47 | __76.26__ | 73.76 | 63.51 | 71.97 | 74.00 | **77.40** |
+
+ ## Acknowledgement🫡
+
+ In this project, we build on the SimpleRL and OpenRLHF frameworks to prepare the codebase. We acknowledge their great work in open-sourcing implementations of reinforcement learning algorithms.
+ * [[SimpleRL](https://github.com/hkust-nlp/simpleRL-reason/)]
+ * [[OpenRLHF](https://github.com/OpenRLHF/OpenRLHF)]
+
+ We would also like to express our gratitude to the research community for organizing the existing benchmarks used to validate LLMs on solving complex instructions.
+
+ ## Citation🎓

- 🎓 If you find this work useful, please consider the following citation:
+ If you find this work useful, please consider the following citation:

```
@article{qin2025incentivizingreasoningadvancedinstructionfollowing,
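
The verifiable rule-centric reward signals described in the card can be pictured as simple programmatic constraint checks on model outputs. The sketch below is only a hypothetical illustration of that idea, not the RAIF reward implementation; the constraint set and scoring are assumptions, taken from the two checkable constraints in the sample prompt of the usage example (exactly three sentences, must contain "serendipity"):

```python
import re


def rule_reward(response: str) -> float:
    """Toy verifiable reward: fraction of satisfied rules.

    Hypothetical example only; the constraint set and scoring are
    assumptions for illustration, not the RAIF reward function.
    """
    # Rule 1: the story must be exactly 3 sentences long.
    sentences = [s for s in re.split(r"[.!?]+", response) if s.strip()]
    exactly_three = len(sentences) == 3

    # Rule 2: the story must include the word 'serendipity'.
    has_keyword = "serendipity" in response.lower()

    # Equal weight per rule; both satisfied gives a reward of 1.0.
    return (int(exactly_three) + int(has_keyword)) / 2.0


print(rule_reward(
    "Unit R-7 heard a violin through an open window and stopped mid-task. "
    "It labeled the unfamiliar warmth serendipity and saved the recording. "
    "From that day on, it hummed while it worked."
))  # 1.0
```

In training, checks of this kind would act as the RL reward signal rather than as an evaluation-time filter.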