nielsr (HF Staff) committed (verified)
Commit 076501c · 1 parent: 95f3b9d

Add library_name, pipeline_tag and link to the GitHub repository


This PR adds the `library_name` (transformers) and `pipeline_tag` (text-generation) metadata to the model card. It also adds a link to the GitHub repository for the project.
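For context, these two fields determine how the Hub handles the model: `library_name: transformers` tells it to load the checkpoint with the transformers library, and `pipeline_tag: text-generation` assigns the text-generation task. A minimal sketch of what the metadata enables is below; the repo id is a placeholder for illustration, not one confirmed by this PR:

```python
from transformers import pipeline

# Placeholder repo id for illustration only; substitute the actual
# model repository this card belongs to.
model_id = "yolay/RAIF-Qwen2.5-1.5B"

# pipeline_tag: text-generation means the model is served as a
# text-generation task; library_name: transformers means the Hub
# resolves it to the transformers auto classes under the hood.
generator = pipeline("text-generation", model=model_id)

output = generator("List three constraints and satisfy all of them:", max_new_tokens=64)
print(output[0]["generated_text"])
```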

Files changed (1): README.md (+7 −1)
README.md CHANGED

````diff
@@ -1,8 +1,11 @@
 ---
-license: apache-2.0
 datasets:
 - yolay/RAIF-ComplexInstruction-Qwen
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 This model belongs to the official implementation of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models".
 
 Existing large language models (LLMs) face challenges in following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve the capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints to identify their relationships across hierarchies of types and dimensions.
@@ -45,6 +48,9 @@ The model Qwen2.5-1.5B is our optimized model for its advanced instruction-follo
 | DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
 | DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |
 
+The model Qwen2.5-1.5B is our optimized model for advanced instruction-following under complex instructions. It corresponds to **Qwen2.5-1.5B-Instruct (Ours)** in Table 1.
+
+Code and data are available at [https://github.com/yuleiqin/RAIF](https://github.com/yuleiqin/RAIF).
 
 🎓 If you find this work useful, please consider the following citation:
 ```
````