|
|
---
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
|
|
# Model Overview |
|
|
<div align="center"> |
|
|
<span style="font-family: default; font-size: 1.5em;">DLER-R1-7B</span> |
|
|
<div> |
|
|
🚀 The leading efficient reasoning model for cutting-edge research and development 🌟 |
|
|
</div> |
|
|
</div> |
|
|
|
|
|
[](https://www.arxiv.org/abs/2510.15110) |
|
|
[](https://github.com/NVlabs/DLER) |
|
|
[](https://huggingface.co/collections/nvidia/reasoning-efficiency-research) |
|
|
[](https://nvlabs.github.io/DLER/) |
|
|
 |
|
|
|
|
|
### Description: |
|
|
DLER-R1-7B is an ultra-efficient 7B open-weight reasoning model designed for challenging tasks such as mathematics, programming, and scientific problem-solving. It is trained with the DLER algorithm on the agentica-org/DeepScaleR-Preview-Dataset. Compared to DeepSeek-R1-Distill-Qwen-7B, DLER-R1-7B achieves substantial efficiency gains, reducing average response length by up to 76% (69% on average) across diverse mathematical benchmarks while improving accuracy.
|
|
|
|
|
This model is for research and development only. |
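For intuition about the training signal, DLER optimizes the model with reinforcement learning under a length penalty; the exact objective is defined in the paper (arXiv:2510.15110). A deliberately simplified sketch of a generic truncation-style length-penalized reward, with the `budget` parameter and binary `correct` signal purely illustrative, might look like:

```python
def length_penalized_reward(correct: bool, num_tokens: int, budget: int) -> float:
    """Toy length-penalized RL reward (illustration only, not the DLER objective).

    Correct answers earn reward 1.0, but any response longer than `budget`
    tokens is zeroed out, pushing the policy toward short, correct answers.
    """
    if num_tokens > budget:
        return 0.0
    return 1.0 if correct else 0.0
```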
|
|
|
|
|
### Evaluation Results: |
|
|
Accuracy (%) and average response length per benchmark; deltas in parentheses are relative to DeepSeek-R1-7B.

| Model | MATH | Length | AIME | Length | AMC | Length | Minerva | Length | Olympiad | Length | Total Avg Length |
|-------|------|--------|------|--------|-----|--------|---------|--------|----------|--------|------------------|
| DeepSeek-R1-7B | 93.60 | 3999 | 55.40 | 13241 | 82.90 | 7461 | 49.79 | 5199 | 58.21 | 8837 | 7747 |
| **DLER-R1-7B** | **94.21 (+0.61%)** | **1634 (-60%)** | **55.62 (+0.22%)** | **3230 (-76%)** | **84.41 (+1.51%)** | **2512 (-66%)** | **53.88 (+4.09%)** | **2058 (-61%)** | **60.48 (+2.27%)** | **2592 (-71%)** | **2405 (-69%)** |
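The headline reduction follows directly from the raw averages in the table; a two-line check using the Total Avg Length column:

```python
# Total average response lengths from the table above.
base, dler = 7747, 2405  # DeepSeek-R1-7B vs. DLER-R1-7B
print(f"-{(base - dler) / base:.0%}")  # prints -69%, matching the table
```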
|
|
|
|
|
### Environment Setup |
|
|
|
|
|
```bash
pip install transformers==4.51.3
```
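Optionally, verify the pinned version and GPU visibility before running inference (newer 4.x releases of transformers may also work, but 4.51.3 is what this card pins):

```python
import torch
import transformers

print(transformers.__version__)   # expect 4.51.3
print(torch.cuda.is_available())  # True if a CUDA GPU is visible
```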
|
|
### Inference:
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the checkpoint; torch_dtype="auto" keeps the weights in their stored
# precision instead of upcasting to float32 (which would roughly double memory).
model = AutoModelForCausalLM.from_pretrained(
    'nvidia/DLER-R1-7B-Research', torch_dtype="auto"
).to(device)
tokenizer = AutoTokenizer.from_pretrained('nvidia/DLER-R1-7B-Research')

# The appended instruction asks for step-by-step reasoning with the final
# answer wrapped in \boxed{} so it can be parsed programmatically.
messages = [
    {"role": "user", "content": "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates. Enter your answer in the form $(r,\\theta),$ where $r > 0$ and $0 \\le \\theta < 2 \\pi.$" + " Let's think step by step and output the final answer within \\boxed{}."},
]

# Apply the chat template and tokenize in one step, appending the
# generation prompt so the model responds as the assistant.
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# A large max_new_tokens leaves room for the model's reasoning trace.
outputs = model.generate(
    tokenized_chat,
    max_new_tokens=10000,
    eos_token_id=tokenizer.eos_token_id
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
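Since the prompt asks for the final answer inside `\boxed{}`, it can be recovered from the decoded text with a small helper. The `extract_boxed` function below is an illustrative sketch, not part of the model's or library's API:

```python
import re

def extract_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in `text`, if any.

    Handles one level of nested braces, which covers typical answers
    like \\boxed{(3, \\frac{\\pi}{2})}.
    """
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed(r"... the answer is \boxed{(3, \frac{\pi}{2})}."))
# -> (3, \frac{\pi}{2})
```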
|
|
|
|
|
### License/Terms of Use |
|
|
NSCLv1 |
|
|
|
|
|
|
|
|
## Citation |
|
|
If you find our model helpful, please cite the following [paper](https://arxiv.org/abs/2510.15110):
|
|
|
|
|
```bibtex
@article{liu2025dler,
  title={DLER: Doing Length pEnalty Right-Incentivizing More Intelligence per Token via Reinforcement Learning},
  author={Liu, Shih-Yang and Dong, Xin and Lu, Ximing and Diao, Shizhe and Liu, Mingjie and Chen, Min-Hung and Yin, Hongxu and Wang, Yu-Chiang Frank and Cheng, Kwang-Ting and Choi, Yejin and others},
  journal={arXiv preprint arXiv:2510.15110},
  year={2025}
}
```
|
|
|
|
|
|
|
|
|