NobodyExistsOnTheInternet committed
Commit 7934b72 · verified · 1 Parent(s): 52f981a

Update README.md

Files changed (1)
  1. README.md +1 -798
README.md CHANGED
@@ -3,802 +3,5 @@ license: other
  license_name: modified-mit
  library_name: transformers
  ---
- <div align="center">
- <picture>
- <img src="figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intelligence">
- </picture>
- </div>

- <hr>
-
- <div align="center" style="line-height:1">
- <a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
- <a href="https://github.com/moonshotai/Kimi-K2"><img alt="github" src="https://img.shields.io/badge/🤖%20Github-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
- <a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
- </div>
-
- <div align="center" style="line-height: 1;">
- <a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
- <a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
- <a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
- </div>
-
- <div align="center" style="line-height: 1;">
- <a href="https://github.com/moonshotai/Kimi-K2/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
- </div>
-
- <p align="center">
- <b>📰&nbsp;&nbsp;<a href="https://moonshotai.github.io/Kimi-K2/">Tech Blog</a></b> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <b>📄&nbsp;&nbsp;<a href="https://github.com/MoonshotAI/Kimi-K2/blob/main/tech_report.pdf">Paper</a></b>
- </p>
-
- ## 0. Changelog
- ### 2025.8.11
- - Messages with a `name` field are now supported. We’ve also moved the chat template to a standalone file for easier viewing.
- ### 2025.7.18
- - We further modified our chat template to improve its robustness. The default system prompt has also been updated.
- ### 2025.7.15
- - We have updated our tokenizer implementation. Special tokens like `[EOS]` can now be encoded to their token IDs.
- - We fixed a bug in the chat template that was breaking multi-turn tool calls.
-
- ## 1. Model Introduction
-
- Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.
-
- ### Key Features
- - Large-Scale Training: Pre-trained a 1T parameter MoE model on 15.5T tokens with zero training instability.
- - MuonClip Optimizer: We apply the Muon optimizer at an unprecedented scale and develop novel optimization techniques to resolve instabilities while scaling up.
- - Agentic Intelligence: Specifically designed for tool use, reasoning, and autonomous problem-solving.
-
- ### Model Variants
- - **Kimi-K2-Base**: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.
- - **Kimi-K2-Instruct**: The post-trained model best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.
-
- <div align="center">
- <picture>
- <img src="figures/banner.png" width="80%" alt="Evaluation Results">
- </picture>
- </div>
-
- ## 2. Model Summary
-
- <div align="center">
-
- | | |
- |:---:|:---:|
- | **Architecture** | Mixture-of-Experts (MoE) |
- | **Total Parameters** | 1T |
- | **Activated Parameters** | 32B |
- | **Number of Layers** (Dense layer included) | 61 |
- | **Number of Dense Layers** | 1 |
- | **Attention Hidden Dimension** | 7168 |
- | **MoE Hidden Dimension** (per Expert) | 2048 |
- | **Number of Attention Heads** | 64 |
- | **Number of Experts** | 384 |
- | **Selected Experts per Token** | 8 |
- | **Number of Shared Experts** | 1 |
- | **Vocabulary Size** | 160K |
- | **Context Length** | 128K |
- | **Attention Mechanism** | MLA |
- | **Activation Function** | SwiGLU |
- </div>
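-
- These figures can be sanity-checked against the released checkpoint without downloading the weights. A minimal sketch, assuming only that the repository ships a `transformers`-loadable config (the exact field names in the custom config class are not guaranteed here):
-
- ```python
- from transformers import AutoConfig
-
- # Kimi K2 uses a custom architecture, so remote code must be trusted.
- config = AutoConfig.from_pretrained(
-     "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
- )
- print(config)  # inspect expert count, hidden sizes, vocabulary size, etc.
- ```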
-
- ## 3. Evaluation Results
-
- #### Instruction model evaluation results
-
- <div align="center">
- <table>
- <thead>
- <tr>
- <th align="center">Benchmark</th>
- <th align="center">Metric</th>
- <th align="center"><sup>Kimi K2 Instruct</sup></th>
- <th align="center"><sup>DeepSeek-V3-0324</sup></th>
- <th align="center"><sup>Qwen3-235B-A22B <br><sup>(non-thinking)</sup></sup></th>
- <th align="center"><sup>Claude Sonnet 4 <br><sup>(w/o extended thinking)</sup></sup></th>
- <th align="center"><sup>Claude Opus 4 <br><sup>(w/o extended thinking)</sup></sup></th>
- <th align="center"><sup>GPT-4.1</sup></th>
- <th align="center"><sup>Gemini 2.5 Flash <br> Preview (05-20)</sup></th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td align="center" colspan=9><strong>Coding Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">LiveCodeBench v6<br><sup>(Aug 24 - May 25)</sup></td>
- <td align="center">Pass@1</td>
- <td align="center"><strong>53.7</strong></td>
- <td align="center">46.9</td>
- <td align="center">37.0</td>
- <td align="center">48.5</td>
- <td align="center">47.4</td>
- <td align="center">44.7</td>
- <td align="center">44.7</td>
- </tr>
- <tr>
- <td align="center">OJBench</td>
- <td align="center">Pass@1</td>
- <td align="center"><strong>27.1</strong></td>
- <td align="center">24.0</td>
- <td align="center">11.3</td>
- <td align="center">15.3</td>
- <td align="center">19.6</td>
- <td align="center">19.5</td>
- <td align="center">19.5</td>
- </tr>
-
- <tr>
- <td align="center">MultiPL-E</td>
- <td align="center">Pass@1</td>
- <td align="center"><ins><strong>85.7</strong></ins></td>
- <td align="center">83.1</td>
- <td align="center">78.2</td>
- <td align="center">88.6</td>
- <td align="center"><strong>89.6</strong></td>
- <td align="center">86.7</td>
- <td align="center">85.6</td>
- </tr>
-
- <tr>
- <td align="center">SWE-bench Verified <br/><sup>(Agentless Coding)</sup></td>
- <td align="center">Single Patch w/o Test (Acc)</td>
- <td align="center"><ins><strong>51.8</strong></ins></td>
- <td align="center">36.6</td>
- <td align="center">39.4</td>
- <td align="center">50.2</td>
- <td align="center"><strong>53.0</strong></td>
- <td align="center">40.8</td>
- <td align="center">32.6</td>
- </tr>
-
- <tr>
- <td align="center" rowspan="2">SWE-bench Verified <br/> <sup>(Agentic Coding)</sup></td>
- <td align="center">Single Attempt (Acc)</td>
- <td align="center"><ins><strong>65.8</strong></ins></td>
- <td align="center">38.8</td>
- <td align="center">34.4</td>
- <td align="center"><strong>72.7</strong><sup>*</sup></td>
- <td align="center">72.5<sup>*</sup></td>
- <td align="center">54.6</td>
- <td align="center">—</td>
- </tr>
-
- <tr>
- <!--<td align="center">(Agentic Coding)</td>-->
- <td align="center">Multiple Attempts (Acc)</td>
- <td align="center"><ins><strong>71.6</strong></ins></td>
- <td align="center">—</td>
- <td align="center">—</td>
- <td align="center"><strong>80.2</strong></td>
- <td align="center">79.4<sup>*</sup></td>
- <td align="center">—</td>
- <td align="center">—</td>
- </tr>
-
- <tr>
- <td align="center">SWE-bench Multilingual<br /> <sup>(Agentic Coding)</sup></td>
- <td align="center">Single Attempt (Acc)</td>
- <td align="center"><ins><strong>47.3</strong></ins></td>
- <td align="center">25.8</td>
- <td align="center">20.9</td>
- <td align="center"><strong>51.0</strong></td>
- <td align="center">—</td>
- <td align="center">31.5</td>
- <td align="center">—</td>
- </tr>
-
- <tr>
- <td align="center" rowspan="2">TerminalBench</td>
- <td align="center">Inhouse Framework (Acc)</td>
- <td align="center"><ins><strong>30.0</strong></ins></td>
- <td align="center">—</td>
- <td align="center">—</td>
- <td align="center">35.5</td>
- <td align="center"><strong>43.2</strong></td>
- <td align="center">8.3</td>
- <td align="center">—</td>
- </tr>
-
- <tr>
- <!--<td align="center">TerminalBench</td>-->
- <td align="center">Terminus (Acc)</td>
- <td align="center"><ins><strong>25.0</strong></ins></td>
- <td align="center">16.3</td>
- <td align="center">6.6</td>
- <td align="center">—</td>
- <td align="center">—</td>
- <td align="center"><strong>30.3</strong></td>
- <td align="center">16.8</td>
- </tr>
- <tr>
- <td align="center">Aider-Polyglot</td>
- <td align="center">Acc</td>
- <td align="center">60.0</td>
- <td align="center">55.1</td>
- <td align="center"><ins><strong>61.8</strong></ins></td>
- <td align="center">56.4</td>
- <td align="center"><strong>70.7</strong></td>
- <td align="center">52.4</td>
- <td align="center">44.0</td>
- </tr>
- <tr>
- <td align="center" colspan=9><strong>Tool Use Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">Tau2 retail</td>
- <td align="center">Avg@4</td>
- <td align="center"><ins><strong>70.6</strong></ins></td>
- <td align="center">69.1</td>
- <td align="center">57.0</td>
- <td align="center">75.0</td>
- <td align="center"><strong>81.8</strong></td>
- <td align="center">74.8</td>
- <td align="center">64.3</td>
- </tr>
- <tr>
- <td align="center">Tau2 airline</td>
- <td align="center">Avg@4</td>
- <td align="center"><ins><strong>56.5</strong></ins></td>
- <td align="center">39.0</td>
- <td align="center">26.5</td>
- <td align="center">55.5</td>
- <td align="center"><strong>60.0</strong></td>
- <td align="center">54.5</td>
- <td align="center">42.5</td>
- </tr>
- <tr>
- <td align="center">Tau2 telecom</td>
- <td align="center">Avg@4</td>
- <td align="center"><strong>65.8</strong></td>
- <td align="center">32.5</td>
- <td align="center">22.1</td>
- <td align="center">45.2</td>
- <td align="center">57.0</td>
- <td align="center">38.6</td>
- <td align="center">16.9</td>
- </tr>
- <tr>
- <td align="center">AceBench</td>
- <td align="center">Acc</td>
- <td align="center"><ins><strong>76.5</strong></ins></td>
- <td align="center">72.7</td>
- <td align="center">70.5</td>
- <td align="center">76.2</td>
- <td align="center">75.6</td>
- <td align="center"><strong>80.1</strong></td>
- <td align="center">74.5</td>
- </tr>
- <tr>
- <td align="center" colspan=9><strong>Math &amp; STEM Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">AIME 2024</td>
- <td align="center">Avg@64</td>
- <td align="center"><strong>69.6</strong></td>
- <td align="center">59.4<sup>*</sup></td>
- <td align="center">40.1<sup>*</sup></td>
- <td align="center">43.4</td>
- <td align="center">48.2</td>
- <td align="center">46.5</td>
- <td align="center">61.3</td>
- </tr>
- <tr>
- <td align="center">AIME 2025</td>
- <td align="center">Avg@64</td>
- <td align="center"><strong>49.5</strong></td>
- <td align="center">46.7</td>
- <td align="center">24.7<sup>*</sup></td>
- <td align="center">33.1<sup>*</sup></td>
- <td align="center">33.9<sup>*</sup></td>
- <td align="center">37.0</td>
- <td align="center">46.6</td>
- </tr>
- <tr>
- <td align="center">MATH-500</td>
- <td align="center">Acc</td>
- <td align="center"><strong>97.4</strong></td>
- <td align="center">94.0<sup>*</sup></td>
- <td align="center">91.2<sup>*</sup></td>
- <td align="center">94.0</td>
- <td align="center">94.4</td>
- <td align="center">92.4</td>
- <td align="center">95.4</td>
- </tr>
- <tr>
- <td align="center">HMMT 2025</td>
- <td align="center">Avg@32</td>
- <td align="center"><strong>38.8</strong></td>
- <td align="center">27.5</td>
- <td align="center">11.9</td>
- <td align="center">15.9</td>
- <td align="center">15.9</td>
- <td align="center">19.4</td>
- <td align="center">34.7</td>
- </tr>
- <tr>
- <td align="center">CNMO 2024</td>
- <td align="center">Avg@16</td>
- <td align="center">74.3</td>
- <td align="center"><ins><strong>74.7</strong></ins></td>
- <td align="center">48.6</td>
- <td align="center">60.4</td>
- <td align="center">57.6</td>
- <td align="center">56.6</td>
- <td align="center"><strong>75.0</strong></td>
- </tr>
- <tr>
- <td align="center">PolyMath-en</td>
- <td align="center">Avg@4</td>
- <td align="center"><strong>65.1</strong></td>
- <td align="center">59.5</td>
- <td align="center">51.9</td>
- <td align="center">52.8</td>
- <td align="center">49.8</td>
- <td align="center">54.0</td>
- <td align="center">49.9</td>
- </tr>
-
- <tr>
- <td align="center">ZebraLogic</td>
- <td align="center">Acc</td>
- <td align="center"><strong>89.0</strong></td>
- <td align="center">84.0</td>
- <td align="center">37.7<sup>*</sup></td>
- <td align="center">73.7</td>
- <td align="center">59.3</td>
- <td align="center">58.5</td>
- <td align="center">57.9</td>
- </tr>
-
- <tr>
- <td align="center">AutoLogi</td>
- <td align="center">Acc</td>
- <td align="center"><ins><strong>89.5</strong></ins></td>
- <td align="center">88.9</td>
- <td align="center">83.3</td>
- <td align="center"><strong>89.8</strong></td>
- <td align="center">86.1</td>
- <td align="center">88.2</td>
- <td align="center">84.1</td>
- </tr>
-
- <tr>
- <td align="center">GPQA-Diamond</td>
- <td align="center">Avg@8</td>
- <td align="center"><strong>75.1</strong></td>
- <td align="center">68.4<sup>*</sup></td>
- <td align="center">62.9<sup>*</sup></td>
- <td align="center">70.0<sup>*</sup></td>
- <td align="center">74.9<sup>*</sup></td>
- <td align="center">66.3</td>
- <td align="center">68.2</td>
- </tr>
-
- <tr>
- <td align="center">SuperGPQA</td>
- <td align="center">Acc</td>
- <td align="center"><strong>57.2</strong></td>
- <td align="center">53.7</td>
- <td align="center">50.2</td>
- <td align="center">55.7</td>
- <td align="center">56.5</td>
- <td align="center">50.8</td>
- <td align="center">49.6</td>
- </tr>
-
- <tr>
- <td align="center">Humanity's Last Exam<br><sup>(Text Only)</sup></td>
- <td align="center">-</td>
- <td align="center">4.7</td>
- <td align="center">5.2</td>
- <td align="center"><ins><strong>5.7</strong></ins></td>
- <td align="center">5.8</td>
- <td align="center"><strong>7.1</strong></td>
- <td align="center">3.7</td>
- <td align="center">5.6</td>
- </tr>
-
- <tr>
- <td align="center" colspan=9><strong>General Tasks</strong></td>
- </tr>
-
- <tr>
- <td align="center">MMLU</td>
- <td align="center">EM</td>
- <td align="center"><ins><strong>89.5</strong></ins></td>
- <td align="center">89.4</td>
- <td align="center">87.0</td>
- <td align="center">91.5</td>
- <td align="center"><strong>92.9</strong></td>
- <td align="center">90.4</td>
- <td align="center">90.1</td>
- </tr>
-
- <tr>
- <td align="center">MMLU-Redux</td>
- <td align="center">EM</td>
- <td align="center"><ins><strong>92.7</strong></ins></td>
- <td align="center">90.5</td>
- <td align="center">89.2</td>
- <td align="center">93.6</td>
- <td align="center"><strong>94.2</strong></td>
- <td align="center">92.4</td>
- <td align="center">90.6</td>
- </tr>
-
- <tr>
- <td align="center">MMLU-Pro</td>
- <td align="center">EM</td>
- <td align="center">81.1</td>
- <td align="center"><ins><strong>81.2</strong></ins><sup>*</sup></td>
- <td align="center">77.3</td>
- <td align="center">83.7</td>
- <td align="center"><strong>86.6</strong></td>
- <td align="center">81.8</td>
- <td align="center">79.4</td>
- </tr>
-
- <tr>
- <td align="center">IFEval</td>
- <td align="center">Prompt Strict</td>
- <td align="center"><strong>89.8</strong></td>
- <td align="center">81.1</td>
- <td align="center">83.2<sup>*</sup></td>
- <td align="center">87.6</td>
- <td align="center">87.4</td>
- <td align="center">88.0</td>
- <td align="center">84.3</td>
- </tr>
-
- <tr>
- <td align="center">Multi-Challenge</td>
- <td align="center">Acc</td>
- <td align="center"><strong>54.1</strong></td>
- <td align="center">31.4</td>
- <td align="center">34.0</td>
- <td align="center">46.8</td>
- <td align="center">49.0</td>
- <td align="center">36.4</td>
- <td align="center">39.5</td>
- </tr>
-
- <tr>
- <td align="center">SimpleQA</td>
- <td align="center">Correct</td>
- <td align="center"><ins><strong>31.0</strong></ins></td>
- <td align="center">27.7</td>
- <td align="center">13.2</td>
- <td align="center">15.9</td>
- <td align="center">22.8</td>
- <td align="center"><strong>42.3</strong></td>
- <td align="center">23.3</td>
- </tr>
-
- <tr>
- <td align="center">Livebench</td>
- <td align="center">Pass@1</td>
- <td align="center"><strong>76.4</strong></td>
- <td align="center">72.4</td>
- <td align="center">67.6</td>
- <td align="center">74.8</td>
- <td align="center">74.6</td>
- <td align="center">69.8</td>
- <td align="center">67.8</td>
- </tr>
- </tbody>
- </table>
- </div>
- <sup>
- • Bold denotes global SOTA, and underlined denotes open-source SOTA.
- </sup><br/><sup>
- • Data points marked with * are taken directly from the model's tech report or blog.
- </sup><br/><sup>
- • All metrics, except for SWE-bench Verified (Agentless), are evaluated with an 8k output token length. SWE-bench Verified (Agentless) is limited to a 16k output token length.
- </sup><br/><sup>
- • Kimi K2 achieves 65.8% pass@1 on the SWE-bench Verified tests with bash/editor tools (single-attempt patches, no test-time compute). It also achieves a 47.3% pass@1 on the SWE-bench Multilingual tests under the same conditions. Additionally, we report results on SWE-bench Verified tests (71.6%) that leverage parallel test-time compute by sampling multiple sequences and selecting the single best via an internal scoring model.
- </sup><br/><sup>
- • To ensure the stability of the evaluation, we report avg@k for AIME, HMMT, CNMO, PolyMath-en, GPQA-Diamond, EvalPlus, and Tau2.
- </sup><br/><sup>
- • Some data points have been omitted due to prohibitively expensive evaluation costs.
- </sup>
-
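- For reference, avg@k in the notes above is simply the mean score over k independent samples per problem (variance reduction, not best-of-k selection). A minimal sketch of the metric itself, not the evaluation harness:
-
- ```python
- def avg_at_k(sample_scores: list[float]) -> float:
-     """avg@k: mean score over k independent attempts at one problem."""
-     return sum(sample_scores) / len(sample_scores)
-
- # e.g. four attempts at one AIME problem, scored 1.0 if correct else 0.0
- print(avg_at_k([1.0, 0.0, 1.0, 1.0]))  # 0.75
- ```
-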
- ---
-
- #### Base model evaluation results
-
- <div align="center">
-
- <table>
- <thead>
- <tr>
- <th align="center">Benchmark</th>
- <th align="center">Metric</th>
- <th align="center">Shot</th>
- <th align="center">Kimi K2 Base</th>
- <th align="center">Deepseek-V3-Base</th>
- <th align="center">Qwen2.5-72B</th>
- <th align="center">Llama 4 Maverick</th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td align="center" colspan="7"><strong>General Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">MMLU</td>
- <td align="center">EM</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>87.8</strong></td>
- <td align="center">87.1</td>
- <td align="center">86.1</td>
- <td align="center">84.9</td>
- </tr>
- <tr>
- <td align="center">MMLU-pro</td>
- <td align="center">EM</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>69.2</strong></td>
- <td align="center">60.6</td>
- <td align="center">62.8</td>
- <td align="center">63.5</td>
- </tr>
- <tr>
- <td align="center">MMLU-redux-2.0</td>
- <td align="center">EM</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>90.2</strong></td>
- <td align="center">89.5</td>
- <td align="center">87.8</td>
- <td align="center">88.2</td>
- </tr>
- <tr>
- <td align="center">SimpleQA</td>
- <td align="center">Correct</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>35.3</strong></td>
- <td align="center">26.5</td>
- <td align="center">10.3</td>
- <td align="center">23.7</td>
- </tr>
- <tr>
- <td align="center">TriviaQA</td>
- <td align="center">EM</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>85.1</strong></td>
- <td align="center">84.1</td>
- <td align="center">76.0</td>
- <td align="center">79.3</td>
- </tr>
- <tr>
- <td align="center">GPQA-Diamond</td>
- <td align="center">Avg@8</td>
- <td align="center">5-shot</td>
- <td align="center">48.1</td>
- <td align="center"><strong>50.5</strong></td>
- <td align="center">40.8</td>
- <td align="center">49.4</td>
- </tr>
- <tr>
- <td align="center">SuperGPQA</td>
- <td align="center">EM</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>44.7</strong></td>
- <td align="center">39.2</td>
- <td align="center">34.2</td>
- <td align="center">38.8</td>
- </tr>
- <tr>
- <td align="center" colspan="7"><strong>Coding Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">LiveCodeBench v6</td>
- <td align="center">Pass@1</td>
- <td align="center">1-shot</td>
- <td align="center"><strong>26.3</strong></td>
- <td align="center">22.9</td>
- <td align="center">21.1</td>
- <td align="center">25.1</td>
- </tr>
- <tr>
- <td align="center">EvalPlus</td>
- <td align="center">Pass@1</td>
- <td align="center">-</td>
- <td align="center"><strong>80.3</strong></td>
- <td align="center">65.6</td>
- <td align="center">66.0</td>
- <td align="center">65.5</td>
- </tr>
- <tr>
- <td align="center" colspan="7"><strong>Mathematics Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">MATH</td>
- <td align="center">EM</td>
- <td align="center">4-shot</td>
- <td align="center"><strong>70.2</strong></td>
- <td align="center">60.1</td>
- <td align="center">61.0</td>
- <td align="center">63.0</td>
- </tr>
- <tr>
- <td align="center">GSM8k</td>
- <td align="center">EM</td>
- <td align="center">8-shot</td>
- <td align="center"><strong>92.1</strong></td>
- <td align="center">91.7</td>
- <td align="center">90.4</td>
- <td align="center">86.3</td>
- </tr>
- <tr>
- <td align="center" colspan="7"><strong>Chinese Tasks</strong></td>
- </tr>
- <tr>
- <td align="center">C-Eval</td>
- <td align="center">EM</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>92.5</strong></td>
- <td align="center">90.0</td>
- <td align="center">90.9</td>
- <td align="center">80.9</td>
- </tr>
- <tr>
- <td align="center">CSimpleQA</td>
- <td align="center">Correct</td>
- <td align="center">5-shot</td>
- <td align="center"><strong>77.6</strong></td>
- <td align="center">72.1</td>
- <td align="center">50.5</td>
- <td align="center">53.5</td>
- </tr>
- </tbody>
- </table>
- </div>
- <sup>
- • We only evaluate open-source pretrained models in this work. We report results for Qwen2.5-72B because the base checkpoint for Qwen3-235B-A22B was not open-sourced at the time of our study.
- </sup><br/><sup>
- • All models are evaluated using the same evaluation protocol.
- </sup>
-
- ## 4. Deployment
- > [!NOTE]
- > You can access Kimi K2's API at https://platform.moonshot.ai; we provide an OpenAI/Anthropic-compatible API.
- >
- > The Anthropic-compatible API maps temperature by `real_temperature = request_temperature * 0.6` for better compatibility with existing applications.
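-
- For illustration, that mapping is just a linear rescaling. A minimal sketch of the stated formula (the helper name is ours, not part of the API):
-
- ```python
- def real_temperature(request_temperature: float) -> float:
-     """Temperature the model actually samples with when called via the
-     Anthropic-compatible endpoint (per the note above)."""
-     return request_temperature * 0.6
-
- assert real_temperature(1.0) == 0.6
- ```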
-
- Our model checkpoints are stored in block-fp8 format; you can find them on [Hugging Face](https://huggingface.co/moonshotai/Kimi-K2-Instruct).
-
- We currently recommend running Kimi-K2 on the following inference engines:
-
- * vLLM
- * SGLang
- * KTransformers
- * TensorRT-LLM
-
- Deployment examples for vLLM and SGLang can be found in the [Model Deployment Guide](docs/deploy_guidance.md).
-
- ---
-
- ## 5. Model Usage
-
- ### Chat Completion
-
- Once the local inference service is up, you can interact with it through the chat endpoint:
-
- ```python
- from openai import OpenAI
-
- def simple_chat(client: OpenAI, model_name: str):
-     messages = [
-         {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
-         {"role": "user", "content": [{"type": "text", "text": "Please give a brief self-introduction."}]},
-     ]
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         stream=False,
-         temperature=0.6,
-         max_tokens=256,
-     )
-     print(response.choices[0].message.content)
- ```
-
- > [!NOTE]
- > The recommended temperature for Kimi-K2-Instruct is `temperature = 0.6`.
- > If no special instructions are required, the system prompt above is a good default.
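-
- To run the helper against a locally served instance, point an OpenAI client at your endpoint. A minimal sketch; the `base_url`, API key, and served model name below are assumptions that depend on how you launched the engine:
-
- ```python
- from openai import OpenAI
-
- # Hypothetical local OpenAI-compatible endpoint (adjust to your deployment).
- client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
- simple_chat(client, "moonshotai/Kimi-K2-Instruct")
- ```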
-
- ---
-
- ### Tool Calling
-
- Kimi-K2-Instruct has strong tool-calling capabilities.
- To enable them, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them.
-
- The following example demonstrates calling a weather tool end-to-end:
-
- ```python
- import json
-
- from openai import OpenAI
-
- # Your tool implementation
- def get_weather(city: str) -> dict:
-     return {"weather": "Sunny"}
-
- # Tool schema definition
- tools = [{
-     "type": "function",
-     "function": {
-         "name": "get_weather",
-         "description": "Retrieve current weather information. Call this when the user asks about the weather.",
-         "parameters": {
-             "type": "object",
-             "required": ["city"],
-             "properties": {
-                 "city": {
-                     "type": "string",
-                     "description": "Name of the city"
-                 }
-             }
-         }
-     }
- }]
-
- # Map tool names to their implementations
- tool_map = {
-     "get_weather": get_weather
- }
-
- def tool_call_with_client(client: OpenAI, model_name: str):
-     messages = [
-         {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
-         {"role": "user", "content": "What's the weather like in Beijing today? Use the tool to check."}
-     ]
-     finish_reason = None
-     while finish_reason is None or finish_reason == "tool_calls":
-         completion = client.chat.completions.create(
-             model=model_name,
-             messages=messages,
-             temperature=0.6,
-             tools=tools,          # tool list defined above
-             tool_choice="auto"
-         )
-         choice = completion.choices[0]
-         finish_reason = choice.finish_reason
-         if finish_reason == "tool_calls":
-             # Echo the assistant's tool-call turn back into the history.
-             messages.append(choice.message)
-             for tool_call in choice.message.tool_calls:
-                 tool_call_name = tool_call.function.name
-                 tool_call_arguments = json.loads(tool_call.function.arguments)
-                 tool_function = tool_map[tool_call_name]
-                 tool_result = tool_function(**tool_call_arguments)
-                 print("tool_result:", tool_result)
-
-                 messages.append({
-                     "role": "tool",
-                     "tool_call_id": tool_call.id,
-                     "name": tool_call_name,
-                     "content": json.dumps(tool_result)
-                 })
-     print("-" * 100)
-     print(choice.message.content)
- ```
-
- The `tool_call_with_client` function implements the pipeline from user query to tool execution.
- This pipeline requires the inference engine to support Kimi-K2’s native tool-parsing logic.
- For streaming output and manual tool-parsing, see the [Tool Calling Guide](docs/tool_call_guidance.md).
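-
- As with the chat example, a minimal invocation sketch (endpoint and model name are again assumptions):
-
- ```python
- client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
- tool_call_with_client(client, "moonshotai/Kimi-K2-Instruct")
- ```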
787
-
788
- ---
789
-
790
- ## 6. License
791
-
792
- Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
793
-
794
- ---
795
-
796
- ## 7. Third Party Notices
797
-
798
- See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md)
799
-
800
- ---
801
-
802
- ## 7. Contact Us
803
-
804
- If you have any questions, please reach out at [[email protected]](mailto:[email protected]).
 
+ Experimental frankenmerge of Kimi K2-07, 09 and Base