---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking/blob/main/LICENSE
pipeline_tag: text-generation
---

# Qwen3-Next-80B-A3B-Thinking
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

Over the past few months, we have observed increasingly clear trends toward scaling both total parameters and context lengths in the pursuit of more powerful and agentic artificial intelligence (AI).
We are excited to share our latest advancements in addressing these demands, centered on improving scaling efficiency through innovative model architecture.
We call this next generation of foundation models **Qwen3-Next**.

## Highlights

**Qwen3-Next-80B-A3B** is the first installment in the Qwen3-Next series and features the following key enhancements:
- **Hybrid Attention**: Replaces standard attention with a combination of **Gated DeltaNet** and **Gated Attention**, enabling efficient context modeling for ultra-long context lengths.
- **High-Sparsity Mixture-of-Experts (MoE)**: Achieves an extremely low activation ratio in MoE layers, drastically reducing FLOPs per token while preserving model capacity.
- **Stability Optimizations**: Includes techniques such as **zero-centered and weight-decayed layernorm** and other stabilizing enhancements for robust pre-training and post-training.
- **Multi-Token Prediction (MTP)**: Boosts pretraining performance and accelerates inference.

We are seeing strong performance in terms of both parameter efficiency and inference speed for Qwen3-Next-80B-A3B:
- Qwen3-Next-80B-A3B-Base outperforms Qwen3-32B-Base on downstream tasks with 10% of the total training cost and 10 times the inference throughput for contexts longer than 32K tokens.
- Leveraging [GSPO](https://qwenlm.github.io/blog/gspo/), we have addressed the stability and efficiency challenges posed by combining the hybrid attention mechanism with a high-sparsity MoE architecture in RL training.
  Qwen3-Next-80B-A3B-Thinking demonstrates outstanding performance on complex reasoning tasks, not only **surpassing Qwen3-30B-A3B-Thinking-2507 and Qwen3-32B-Thinking**, but also **outperforming the proprietary model Gemini-2.5-Flash-Thinking** across multiple benchmarks.

![Qwen3-Next-80B-A3B-Thinking Benchmark Comparison](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-Next/Qwen3-Next-80B-A3B-Thinking.001.jpeg)

For more details, please refer to our blog post [Qwen3-Next](https://qwenlm.github.io/blog/qwen3_next/).

## Model Overview

> [!Note]
> **Qwen3-Next-80B-A3B-Thinking** supports only thinking mode.
> To enforce model thinking, the default chat template automatically includes `<think>`.
> Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag.

> [!Note]
> **Qwen3-Next-80B-A3B-Thinking** may generate thinking content longer than its predecessor.
> We strongly recommend its use in highly complex reasoning tasks.

**Qwen3-Next-80B-A3B-Thinking** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining (15T tokens) & Post-training
- Number of Parameters: 80B in total and 3B activated
- Number of Parameters (Non-Embedding): 79B
- Number of Layers: 48
- Hidden Dimension: 2048
- Hybrid Layout: 12 \* (3 \* (Gated DeltaNet -> MoE) -> (Gated Attention -> MoE))
- Gated Attention:
  - Number of Attention Heads: 16 for Q and 2 for KV
  - Head Dimension: 256
  - Rotary Position Embedding Dimension: 64
- Gated DeltaNet:
  - Number of Linear Attention Heads: 32 for V and 16 for QK
  - Head Dimension: 128
- Mixture of Experts:
  - Number of Experts: 512
  - Number of Activated Experts: 10
  - Number of Shared Experts: 1
  - Expert Intermediate Dimension: 512
- Context Length: 262,144 natively and extensible up to 1,010,000 tokens

<img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-Next/model_architecture.png" height="384px" title="Qwen3-Next Model Architecture" />

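As a quick sanity check on the hybrid layout above, the following sketch (illustrative only, not taken from the model code) expands the 12 \* (3 \* (Gated DeltaNet -> MoE) -> (Gated Attention -> MoE)) pattern into a per-layer list and confirms it yields the stated 48 layers:

```python
# Illustrative expansion of the hybrid layout: each of the 12 blocks contains
# 3 Gated DeltaNet layers followed by 1 Gated Attention layer, and every layer
# is paired with an MoE FFN.
layout = []
for _ in range(12):
    layout += ["Gated DeltaNet -> MoE"] * 3
    layout += ["Gated Attention -> MoE"]

assert len(layout) == 48                       # matches "Number of Layers: 48"
print(layout.count("Gated DeltaNet -> MoE"))   # 36 linear-attention layers
print(layout.count("Gated Attention -> MoE"))  # 12 standard-attention layers
```
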
## Performance

| | Qwen3-30B-A3B-Thinking-2507 | Qwen3-32B Thinking | Qwen3-235B-A22B-Thinking-2507 | Gemini-2.5-Flash Thinking | Qwen3-Next-80B-A3B-Thinking |
| --- | --- | --- | --- | --- | --- |
| **Knowledge** | | | | | |
| MMLU-Pro | 80.9 | 79.1 | **84.4** | 81.9 | 82.7 |
| MMLU-Redux | 91.4 | 90.9 | **93.8** | 92.1 | 92.5 |
| GPQA | 73.4 | 68.4 | 81.1 | **82.8** | 77.2 |
| SuperGPQA | 56.8 | 54.1 | **64.9** | 57.8 | 60.8 |
| **Reasoning** | | | | | |
| AIME25 | 85.0 | 72.9 | **92.3** | 72.0 | 87.8 |
| HMMT25 | 71.4 | 51.5 | **83.9** | 64.2 | 73.9 |
| LiveBench 241125 | 76.8 | 74.9 | **78.4** | 74.3 | 76.6 |
| **Coding** | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 66.0 | 60.6 | **74.1** | 61.2 | 68.7 |
| CFEval | 2044 | 1986 | **2134** | 1995 | 2071 |
| OJBench | 25.1 | 24.1 | **32.5** | 23.5 | 29.7 |
| **Alignment** | | | | | |
| IFEval | 88.9 | 85.0 | 87.8 | **89.8** | 88.9 |
| Arena-Hard v2* | 56.0 | 48.4 | **79.7** | 56.7 | 62.3 |
| WritingBench | 85.0 | 79.0 | **88.3** | 83.9 | 84.6 |
| **Agent** | | | | | |
| BFCL-v3 | **72.4** | 70.3 | 71.9 | 68.6 | 72.0 |
| TAU1-Retail | 67.8 | 52.8 | 67.8 | 65.2 | **69.6** |
| TAU1-Airline | 48.0 | 29.0 | 46.0 | **54.0** | 49.0 |
| TAU2-Retail | 58.8 | 49.7 | **71.9** | 66.7 | 67.8 |
| TAU2-Airline | 58.0 | 45.5 | 58.0 | 52.0 | **60.5** |
| TAU2-Telecom | 26.3 | 27.2 | **45.6** | 31.6 | 43.9 |
| **Multilingualism** | | | | | |
| MultiIF | 76.4 | 73.0 | **80.6** | 74.4 | 77.8 |
| MMLU-ProX | 76.4 | 74.6 | **81.0** | 80.2 | 78.7 |
| INCLUDE | 74.4 | 73.7 | 81.0 | **83.9** | 78.9 |
| PolyMATH | 52.6 | 47.4 | **60.1** | 49.8 | 56.3 |

\*: For reproducibility, we report the win rates evaluated by GPT-4.1.

## Quickstart

The code for Qwen3-Next has been merged into the main branch of Hugging Face `transformers`.

```shell
pip install git+https://github.com/huggingface/transformers.git@main
```

With earlier versions, you will encounter the following error:
```
KeyError: 'qwen3_next'
```

The following code snippet shows how to use the model to generate content from a given input.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-Next-80B-A3B-Thinking"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)  # no opening <think> tag
print("content:", content)
```

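To watch the output (including the thinking content) as it is generated, you can attach a streamer. The following is a minimal sketch that reuses `model`, `tokenizer`, and `model_inputs` from the snippet above; `TextStreamer` is a standard `transformers` utility, and the exact generation settings shown are just an example.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated; the thinking content
# (up to </think>) and the final answer are printed in order.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    streamer=streamer,
)
```
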
> [!Note]
> Multi-Token Prediction (MTP) is not generally available in Hugging Face Transformers.

> [!Note]
> The efficiency or throughput improvement depends heavily on the implementation.
> It is recommended to adopt a dedicated inference framework, e.g., SGLang or vLLM, for inference tasks.

> [!Tip]
> Depending on the inference settings, you may observe better efficiency with [`flash-linear-attention`](https://github.com/fla-org/flash-linear-attention#installation) and [`causal-conv1d`](https://github.com/Dao-AILab/causal-conv1d).
> See the above links for detailed instructions and requirements.

## Deployment

For deployment, you can use the latest `sglang` or `vllm` to create an OpenAI-compatible API endpoint.

### SGLang

[SGLang](https://github.com/sgl-project/sglang) is a fast serving framework for large language models and vision language models.
SGLang can be used to launch a server with an OpenAI-compatible API.

Qwen3-Next is supported in the `main` branch of SGLang, which can be installed from source:
```shell
pip install 'sglang[all] @ git+https://github.com/sgl-project/sglang.git@main#subdirectory=python'
```

The following command can be used to create an API endpoint at `http://localhost:30000/v1` with a maximum context length of 256K tokens, using tensor parallelism across 4 GPUs:
```shell
SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server --model-path Qwen/Qwen3-Next-80B-A3B-Thinking --port 30000 --tp-size 4 --context-length 262144 --reasoning-parser deepseek-r1 --mem-fraction-static 0.8
```

The following command is recommended when using MTP, with the rest of the settings the same as above:
```shell
SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server --model-path Qwen/Qwen3-Next-80B-A3B-Thinking --port 30000 --tp-size 4 --context-length 262144 --reasoning-parser deepseek-r1 --mem-fraction-static 0.8 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
```

> [!Note]
> The environment variable `SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1` is required at the moment.

> [!Note]
> The default context length is 256K tokens.
> If you encounter out-of-memory (OOM) issues, consider reducing the context length.
> However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.

### vLLM

[vLLM](https://github.com/vllm-project/vllm) is a high-throughput and memory-efficient inference and serving engine for LLMs.
vLLM can be used to launch a server with an OpenAI-compatible API.

Qwen3-Next is supported in the `main` branch of vLLM, which can be installed from source:
```shell
pip install git+https://github.com/vllm-project/vllm.git
```

The following command can be used to create an API endpoint at `http://localhost:8000/v1` with a maximum context length of 256K tokens, using tensor parallelism across 4 GPUs:
```shell
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve Qwen/Qwen3-Next-80B-A3B-Thinking --port 8000 --tensor-parallel-size 4 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
```

The following command is recommended when using MTP, with the rest of the settings the same as above:
```shell
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve Qwen/Qwen3-Next-80B-A3B-Thinking --port 8000 --tensor-parallel-size 4 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
```

> [!Note]
> The environment variable `VLLM_ALLOW_LONG_MAX_MODEL_LEN=1` is required at the moment.

> [!Note]
> The default context length is 256K tokens.
> If you encounter out-of-memory (OOM) issues, consider reducing the context length.
> However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.

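Once a server is running, any OpenAI-compatible client can query it. Below is a minimal sketch using the `openai` Python package against the vLLM command above (for the SGLang command, change the port to 30000); whether the thinking content is returned in a separate `reasoning_content` field depends on the serving framework and reasoning parser, so treat that part as an assumption.

```python
from openai import OpenAI

# Point the client at the locally deployed OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Thinking",
    messages=[
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    temperature=0.6,
    top_p=0.95,
    max_tokens=32768,
)

message = response.choices[0].message
# With a reasoning parser enabled, the thinking content is typically returned
# separately from the final answer (assumption: field name may vary by framework).
print("thinking content:", getattr(message, "reasoning_content", None))
print("content:", message.content)
```
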
## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
    'model': 'Qwen3-Next-80B-A3B-Thinking',
    'model_type': 'qwen_dashscope',
}

# Using an OpenAI-compatible API endpoint. It is recommended to disable the reasoning and the tool call parsing
# functionality of the deployment frameworks and let Qwen-Agent automate the related operations. For example,
# `VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve Qwen/Qwen3-Next-80B-A3B-Thinking --served-model-name Qwen3-Next-80B-A3B-Thinking --port 8000 --tensor-parallel-size 4 --max-model-len 262144`.
#
# llm_cfg = {
#     'model': 'Qwen3-Next-80B-A3B-Thinking',
#
#     # Use a custom endpoint compatible with OpenAI API:
#     'model_server': 'http://localhost:8000/v1',  # api_base without reasoning and tool call parsing
#     'api_key': 'EMPTY',
#     'generate_cfg': {
#         'thought_in_content': True,
#     },
# }

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Processing Ultra-Long Texts

Qwen3-Next natively supports context lengths of up to 262,144 tokens.
For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively.
We have validated the model's performance on context lengths of up to 1 million tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.

YaRN is currently supported by several inference frameworks, e.g., `transformers`, `vllm`, and `sglang`.
In general, there are two approaches to enabling YaRN for supported frameworks (a `transformers`-side example is sketched after this list):

- Modifying the model files:
  In the `config.json` file, add the `rope_scaling` fields:
  ```json
  {
      ...,
      "rope_scaling": {
          "rope_type": "yarn",
          "factor": 4.0,
          "original_max_position_embeddings": 262144
      }
  }
  ```

- Passing command line arguments:

  For `vllm`, you can use
  ```shell
  VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":262144}' --max-model-len 1010000
  ```

  For `sglang`, you can use
  ```shell
  SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":262144}}' --context-length 1010000
  ```

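For `transformers`, one option is to override the RoPE scaling at load time instead of editing `config.json`. The following is a minimal sketch that assumes configuration attributes passed as keyword arguments to `from_pretrained` override the values loaded from the config, as with other models:

```python
from transformers import AutoModelForCausalLM

# Assumption: rope_scaling passed here overrides the value in config.json,
# mirroring the "Modifying the model files" approach above.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-Next-80B-A3B-Thinking",
    dtype="auto",
    device_map="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 262144,
    },
)
```
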
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set `factor` as 2.0.

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (see the sketch after this list).
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.

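As an illustration of the sampling settings in item 1, the sketch below maps them onto `generate` arguments in `transformers`, reusing `model` and `model_inputs` from the Quickstart example (`presence_penalty` is a serving-API parameter with no direct `generate` equivalent, so it is omitted here):

```python
# Minimal sketch: apply the recommended sampling parameters with transformers.
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_new_tokens=32768,  # use 81920 for highly complex math/programming problems
)
```
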
### Citation

If you find our work helpful, feel free to cite it.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}

@article{qwen2.5-1m,
      title={Qwen2.5-1M Technical Report},
      author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
      journal={arXiv preprint arXiv:2501.15383},
      year={2025}
}
```