NobodyExistsOnTheInternet committed on
Commit
0b09fcd
·
verified ·
1 Parent(s): 7672544

Add files using upload-large-folder tool

Files changed (50)
  1. .gitattributes +3 -0
  2. LICENSE +27 -0
  3. README.md +804 -3
  4. THIRD_PARTY_NOTICES.md +47 -0
  5. chat_template.jinja +46 -0
  6. config.json +129 -0
  7. configuration_deepseek.py +212 -0
  8. generation_config.json +4 -0
  9. kimi-logo.png +0 -0
  10. model-00174-of-00348.safetensors +3 -0
  11. model-00175-of-00348.safetensors +3 -0
  12. model-00176-of-00348.safetensors +3 -0
  13. model-00177-of-00348.safetensors +3 -0
  14. model-00178-of-00348.safetensors +3 -0
  15. model-00179-of-00348.safetensors +3 -0
  16. model-00180-of-00348.safetensors +3 -0
  17. model-00181-of-00348.safetensors +3 -0
  18. model-00182-of-00348.safetensors +3 -0
  19. model-00183-of-00348.safetensors +3 -0
  20. model-00184-of-00348.safetensors +3 -0
  21. model-00185-of-00348.safetensors +3 -0
  22. model-00186-of-00348.safetensors +3 -0
  23. model-00187-of-00348.safetensors +3 -0
  24. model-00188-of-00348.safetensors +3 -0
  25. model-00189-of-00348.safetensors +3 -0
  26. model-00190-of-00348.safetensors +3 -0
  27. model-00191-of-00348.safetensors +3 -0
  28. model-00192-of-00348.safetensors +3 -0
  29. model-00193-of-00348.safetensors +3 -0
  30. model-00194-of-00348.safetensors +3 -0
  31. model-00195-of-00348.safetensors +3 -0
  32. model-00196-of-00348.safetensors +3 -0
  33. model-00197-of-00348.safetensors +3 -0
  34. model-00198-of-00348.safetensors +3 -0
  35. model-00199-of-00348.safetensors +3 -0
  36. model-00200-of-00348.safetensors +3 -0
  37. model-00201-of-00348.safetensors +3 -0
  38. model-00202-of-00348.safetensors +3 -0
  39. model-00203-of-00348.safetensors +3 -0
  40. model-00204-of-00348.safetensors +3 -0
  41. model-00205-of-00348.safetensors +3 -0
  42. model-00206-of-00348.safetensors +3 -0
  43. model-00207-of-00348.safetensors +3 -0
  44. model-00208-of-00348.safetensors +3 -0
  45. model-00209-of-00348.safetensors +3 -0
  46. model-00210-of-00348.safetensors +3 -0
  47. model-00211-of-00348.safetensors +3 -0
  48. model-00212-of-00348.safetensors +3 -0
  49. model-00213-of-00348.safetensors +3 -0
  50. model-00214-of-00348.safetensors +3 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
+figures/Base-Evaluation.png filter=lfs diff=lfs merge=lfs -text
+banner.png filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,27 @@
+ Modified MIT License
+
+ Copyright (c) 2025 Moonshot AI
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the “Software”), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Our only modification part is that, if the Software (or any derivative works
+ thereof) is used for any of your commercial products or services that have
+ more than 100 million monthly active users, or more than 20 million US dollars
+ (or equivalent in other currencies) in monthly revenue, you shall prominently
+ display "Kimi K2" on the user interface of such product or service.
README.md CHANGED
@@ -1,3 +1,804 @@
- ---
- license: mit
- ---
+ ---
+ license: other
+ license_name: modified-mit
+ library_name: transformers
+ ---
<div align="center">
  <picture>
      <img src="figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intelligence">
  </picture>
</div>

<hr>

<div align="center" style="line-height:1">
  <a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
  <a href="https://github.com/moonshotai/Kimi-K2"><img alt="github" src="https://img.shields.io/badge/🤖%20Github-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
  <a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
  <a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
  <a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://github.com/moonshotai/Kimi-K2/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
</div>

<p align="center">
  <b>📰&nbsp;&nbsp;<a href="https://moonshotai.github.io/Kimi-K2/">Tech Blog</a></b> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <b>📄&nbsp;&nbsp;<a href="https://github.com/MoonshotAI/Kimi-K2/blob/main/tech_report.pdf">Paper</a></b>
</p>
## 0. Changelog

### 2025.8.11
- Messages with a `name` field are now supported. We've also moved the chat template to a standalone file for easier viewing.

### 2025.7.18
- We further modified our chat template to improve its robustness. The default system prompt has also been updated.

### 2025.7.15
- We have updated our tokenizer implementation. Special tokens like `[EOS]` can now be encoded to their token IDs.
- We fixed a bug in the chat template that was breaking multi-turn tool calls.
## 1. Model Introduction

Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.

### Key Features
- Large-Scale Training: Pre-trained a 1T-parameter MoE model on 15.5T tokens with zero training instability.
- MuonClip Optimizer: We apply the Muon optimizer at an unprecedented scale, and develop novel optimization techniques to resolve instabilities while scaling up.
- Agentic Intelligence: Specifically designed for tool use, reasoning, and autonomous problem-solving.

### Model Variants
- **Kimi-K2-Base**: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.
- **Kimi-K2-Instruct**: The post-trained model, best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.

<div align="center">
  <picture>
      <img src="figures/banner.png" width="80%" alt="Evaluation Results">
  </picture>
</div>
## 2. Model Summary

<div align="center">

| | |
|:---:|:---:|
| **Architecture** | Mixture-of-Experts (MoE) |
| **Total Parameters** | 1T |
| **Activated Parameters** | 32B |
| **Number of Layers** (Dense layer included) | 61 |
| **Number of Dense Layers** | 1 |
| **Attention Hidden Dimension** | 7168 |
| **MoE Hidden Dimension** (per Expert) | 2048 |
| **Number of Attention Heads** | 64 |
| **Number of Experts** | 384 |
| **Selected Experts per Token** | 8 |
| **Number of Shared Experts** | 1 |
| **Vocabulary Size** | 160K |
| **Context Length** | 128K |
| **Attention Mechanism** | MLA |
| **Activation Function** | SwiGLU |

</div>
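To make the routing numbers in the summary concrete (8 of 384 routed experts selected per token, plus 1 shared expert), here is a minimal top-k gating sketch. This is our illustration only, assuming standard softmax renormalization over the selected experts; the model's actual router is a learned layer inside the network, and the names below are ours.

```python
import math
import random

NUM_EXPERTS = 384   # routed experts (from the table above)
TOP_K = 8           # experts selected per token

def route(scores, k=TOP_K):
    """Top-k gating sketch: keep the k highest router scores and
    softmax-normalize them into gate weights. The shared expert is
    not routed; it processes every token in addition to these k."""
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    m = max(scores[i] for i in top)  # subtract max for numerical stability
    w = [math.exp(scores[i] - m) for i in top]
    z = sum(w)
    return [(i, wi / z) for i, wi in zip(top, w)]

# toy router scores for one token
random.seed(0)
scores = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
gates = route(scores)
assert len(gates) == TOP_K
assert abs(sum(w for _, w in gates) - 1.0) < 1e-9
```

Because only 8 routed experts (plus the shared one) run per token, roughly 32B of the 1T parameters are active for any given token.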
## 3. Evaluation Results

#### Instruction model evaluation results

<div align="center">

| Benchmark | Metric | Kimi K2 Instruct | DeepSeek-V3-0324 | Qwen3-235B-A22B<br><sup>(non-thinking)</sup> | Claude Sonnet 4<br><sup>(w/o extended thinking)</sup> | Claude Opus 4<br><sup>(w/o extended thinking)</sup> | GPT-4.1 | Gemini 2.5 Flash<br><sup>Preview (05-20)</sup> |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Coding Tasks** | | | | | | | | |
| LiveCodeBench v6 <sup>(Aug 24 - May 25)</sup> | Pass@1 | **53.7** | 46.9 | 37.0 | 48.5 | 47.4 | 44.7 | 44.7 |
| OJBench | Pass@1 | **27.1** | 24.0 | 11.3 | 15.3 | 19.6 | 19.5 | 19.5 |
| MultiPL-E | Pass@1 | <ins>**85.7**</ins> | 83.1 | 78.2 | 88.6 | **89.6** | 86.7 | 85.6 |
| SWE-bench Verified <sup>(Agentless Coding)</sup> | Single Patch w/o Test (Acc) | <ins>**51.8**</ins> | 36.6 | 39.4 | 50.2 | **53.0** | 40.8 | 32.6 |
| SWE-bench Verified <sup>(Agentic Coding)</sup> | Single Attempt (Acc) | <ins>**65.8**</ins> | 38.8 | 34.4 | **72.7**<sup>*</sup> | 72.5<sup>*</sup> | 54.6 | — |
| SWE-bench Verified <sup>(Agentic Coding)</sup> | Multiple Attempts (Acc) | <ins>**71.6**</ins> | — | — | **80.2** | 79.4<sup>*</sup> | — | — |
| SWE-bench Multilingual <sup>(Agentic Coding)</sup> | Single Attempt (Acc) | <ins>**47.3**</ins> | 25.8 | 20.9 | **51.0** | — | 31.5 | — |
| TerminalBench | Inhouse Framework (Acc) | <ins>**30.0**</ins> | — | — | 35.5 | **43.2** | 8.3 | — |
| TerminalBench | Terminus (Acc) | <ins>**25.0**</ins> | 16.3 | 6.6 | — | — | **30.3** | 16.8 |
| Aider-Polyglot | Acc | 60.0 | 55.1 | <ins>**61.8**</ins> | 56.4 | **70.7** | 52.4 | 44.0 |
| **Tool Use Tasks** | | | | | | | | |
| Tau2 retail | Avg@4 | <ins>**70.6**</ins> | 69.1 | 57.0 | 75.0 | **81.8** | 74.8 | 64.3 |
| Tau2 airline | Avg@4 | <ins>**56.5**</ins> | 39.0 | 26.5 | 55.5 | **60.0** | 54.5 | 42.5 |
| Tau2 telecom | Avg@4 | **65.8** | 32.5 | 22.1 | 45.2 | 57.0 | 38.6 | 16.9 |
| AceBench | Acc | <ins>**76.5**</ins> | 72.7 | 70.5 | 76.2 | 75.6 | **80.1** | 74.5 |
| **Math &amp; STEM Tasks** | | | | | | | | |
| AIME 2024 | Avg@64 | **69.6** | 59.4<sup>*</sup> | 40.1<sup>*</sup> | 43.4 | 48.2 | 46.5 | 61.3 |
| AIME 2025 | Avg@64 | **49.5** | 46.7 | 24.7<sup>*</sup> | 33.1<sup>*</sup> | 33.9<sup>*</sup> | 37.0 | 46.6 |
| MATH-500 | Acc | **97.4** | 94.0<sup>*</sup> | 91.2<sup>*</sup> | 94.0 | 94.4 | 92.4 | 95.4 |
| HMMT 2025 | Avg@32 | **38.8** | 27.5 | 11.9 | 15.9 | 15.9 | 19.4 | 34.7 |
| CNMO 2024 | Avg@16 | 74.3 | <ins>**74.7**</ins> | 48.6 | 60.4 | 57.6 | 56.6 | **75.0** |
| PolyMath-en | Avg@4 | **65.1** | 59.5 | 51.9 | 52.8 | 49.8 | 54.0 | 49.9 |
| ZebraLogic | Acc | **89.0** | 84.0 | 37.7<sup>*</sup> | 73.7 | 59.3 | 58.5 | 57.9 |
| AutoLogi | Acc | <ins>**89.5**</ins> | 88.9 | 83.3 | **89.8** | 86.1 | 88.2 | 84.1 |
| GPQA-Diamond | Avg@8 | **75.1** | 68.4<sup>*</sup> | 62.9<sup>*</sup> | 70.0<sup>*</sup> | 74.9<sup>*</sup> | 66.3 | 68.2 |
| SuperGPQA | Acc | **57.2** | 53.7 | 50.2 | 55.7 | 56.5 | 50.8 | 49.6 |
| Humanity's Last Exam <sup>(Text Only)</sup> | - | 4.7 | 5.2 | <ins>**5.7**</ins> | 5.8 | **7.1** | 3.7 | 5.6 |
| **General Tasks** | | | | | | | | |
| MMLU | EM | <ins>**89.5**</ins> | 89.4 | 87.0 | 91.5 | **92.9** | 90.4 | 90.1 |
| MMLU-Redux | EM | <ins>**92.7**</ins> | 90.5 | 89.2 | 93.6 | **94.2** | 92.4 | 90.6 |
| MMLU-Pro | EM | 81.1 | <ins>**81.2**</ins><sup>*</sup> | 77.3 | 83.7 | **86.6** | 81.8 | 79.4 |
| IFEval | Prompt Strict | **89.8** | 81.1 | 83.2<sup>*</sup> | 87.6 | 87.4 | 88.0 | 84.3 |
| Multi-Challenge | Acc | **54.1** | 31.4 | 34.0 | 46.8 | 49.0 | 36.4 | 39.5 |
| SimpleQA | Correct | <ins>**31.0**</ins> | 27.7 | 13.2 | 15.9 | 22.8 | **42.3** | 23.3 |
| Livebench | Pass@1 | **76.4** | 72.4 | 67.6 | 74.8 | 74.6 | 69.8 | 67.8 |

</div>
<sup>
• Bold denotes global SOTA, and underlined denotes open-source SOTA.
</sup><br/><sup>
• Data points marked with * are taken directly from the model's tech report or blog.
</sup><br/><sup>
• All metrics, except for SWE-bench Verified (Agentless), are evaluated with an 8k output token length. SWE-bench Verified (Agentless) is limited to a 16k output token length.
</sup><br/><sup>
• Kimi K2 achieves 65.8% pass@1 on SWE-bench Verified with bash/editor tools (single-attempt patches, no test-time compute). It also achieves 47.3% pass@1 on SWE-bench Multilingual under the same conditions. Additionally, we report a 71.6% result on SWE-bench Verified that leverages parallel test-time compute by sampling multiple sequences and selecting the single best via an internal scoring model.
</sup><br/><sup>
• To ensure the stability of the evaluation, we employed avg@k on AIME, HMMT, CNMO, PolyMath-en, GPQA-Diamond, EvalPlus, and Tau2.
</sup><br/><sup>
• Some data points have been omitted due to prohibitively expensive evaluation costs.
</sup>
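The avg@k metrics above can be read as follows; a minimal sketch of the estimator (our own illustration, not the actual evaluation harness):

```python
def avg_at_k(outcomes):
    """avg@k: sample k independent completions of the same problem
    and report the mean accuracy. Averaging many rollouts
    (e.g. Avg@64 on AIME) reduces the variance a single pass@1
    draw would have on a small benchmark."""
    return sum(outcomes) / len(outcomes)

# e.g. 48 correct rollouts out of 64 samples
assert avg_at_k([True] * 48 + [False] * 16) == 0.75
```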
---

#### Base model evaluation results

<div align="center">

| Benchmark | Metric | Shot | Kimi K2 Base | Deepseek-V3-Base | Qwen2.5-72B | Llama 4 Maverick |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| **General Tasks** | | | | | | |
| MMLU | EM | 5-shot | **87.8** | 87.1 | 86.1 | 84.9 |
| MMLU-pro | EM | 5-shot | **69.2** | 60.6 | 62.8 | 63.5 |
| MMLU-redux-2.0 | EM | 5-shot | **90.2** | 89.5 | 87.8 | 88.2 |
| SimpleQA | Correct | 5-shot | **35.3** | 26.5 | 10.3 | 23.7 |
| TriviaQA | EM | 5-shot | **85.1** | 84.1 | 76.0 | 79.3 |
| GPQA-Diamond | Avg@8 | 5-shot | 48.1 | **50.5** | 40.8 | 49.4 |
| SuperGPQA | EM | 5-shot | **44.7** | 39.2 | 34.2 | 38.8 |
| **Coding Tasks** | | | | | | |
| LiveCodeBench v6 | Pass@1 | 1-shot | **26.3** | 22.9 | 21.1 | 25.1 |
| EvalPlus | Pass@1 | - | **80.3** | 65.6 | 66.0 | 65.5 |
| **Mathematics Tasks** | | | | | | |
| MATH | EM | 4-shot | **70.2** | 60.1 | 61.0 | 63.0 |
| GSM8k | EM | 8-shot | **92.1** | 91.7 | 90.4 | 86.3 |
| **Chinese Tasks** | | | | | | |
| C-Eval | EM | 5-shot | **92.5** | 90.0 | 90.9 | 80.9 |
| CSimpleQA | Correct | 5-shot | **77.6** | 72.1 | 50.5 | 53.5 |

</div>
<sup>
• We only evaluate open-source pretrained models in this work. We report results for Qwen2.5-72B because the base checkpoint for Qwen3-235B-A22B was not open-sourced at the time of our study.
</sup><br/><sup>
• All models are evaluated using the same evaluation protocol.
</sup>
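The "Shot" column refers to the number of solved in-context examples prepended to each query. A generic sketch of k-shot prompt assembly (our illustration; the card does not specify the exact evaluation templates):

```python
def build_k_shot_prompt(examples, question, k):
    """Assemble a k-shot prompt: k solved Q/A pairs followed by the
    target question, as in the 5-shot MMLU setting above."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples[:k])
    return f"{shots}\n\nQ: {question}\nA:"

demo = build_k_shot_prompt([("2+2?", "4")], "3+5?", k=1)
assert demo == "Q: 2+2?\nA: 4\n\nQ: 3+5?\nA:"
```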
## 4. Deployment
> [!NOTE]
> You can access Kimi K2's API at https://platform.moonshot.ai ; we provide OpenAI- and Anthropic-compatible APIs.
>
> The Anthropic-compatible API maps temperature as `real_temperature = request_temperature * 0.6` for better compatibility with existing applications.

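The scaling in the note above can be sketched as follows (our illustration of the stated formula, not the server implementation):

```python
def effective_temperature(request_temperature: float) -> float:
    """The Anthropic-compatible endpoint scales the requested
    temperature by 0.6 before sampling (per the note above)."""
    return request_temperature * 0.6

# an Anthropic-style request with temperature=1.0 samples at the
# recommended Kimi-K2-Instruct setting of 0.6
assert effective_temperature(1.0) == 0.6
```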

Our model checkpoints are stored in block-fp8 format; you can find them on [Hugging Face](https://huggingface.co/moonshotai/Kimi-K2-Instruct).

We currently recommend running Kimi-K2 on the following inference engines:

* vLLM
* SGLang
* KTransformers
* TensorRT-LLM

Deployment examples for vLLM and SGLang can be found in the [Model Deployment Guide](docs/deploy_guidance.md).

---

## 5. Model Usage

### Chat Completion

Once the local inference service is up, you can interact with it through the chat endpoint:

```python
from openai import OpenAI

def simple_chat(client: OpenAI, model_name: str):
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": [{"type": "text", "text": "Please give a brief self-introduction."}]},
    ]
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        temperature=0.6,
        max_tokens=256
    )
    print(response.choices[0].message.content)
```

> [!NOTE]
> The recommended temperature for Kimi-K2-Instruct is `temperature = 0.6`.
> If no special instructions are required, the system prompt above is a good default.

---

713
+ ### Tool Calling
714
+
715
+ Kimi-K2-Instruct has strong tool-calling capabilities.
716
+ To enable them, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them.
717
+
718
+ The following example demonstrates calling a weather tool end-to-end:
719
+
720
+ ```python
721
+ import json
+
+ from openai import OpenAI
+
+ # Your tool implementation
722
+ def get_weather(city: str) -> dict:
723
+ return {"weather": "Sunny"}
724
+
725
+ # Tool schema definition
726
+ tools = [{
727
+ "type": "function",
728
+ "function": {
729
+ "name": "get_weather",
730
+ "description": "Retrieve current weather information. Call this when the user asks about the weather.",
731
+ "parameters": {
732
+ "type": "object",
733
+ "required": ["city"],
734
+ "properties": {
735
+ "city": {
736
+ "type": "string",
737
+ "description": "Name of the city"
738
+ }
739
+ }
740
+ }
741
+ }
742
+ }]
743
+
744
+ # Map tool names to their implementations
745
+ tool_map = {
746
+ "get_weather": get_weather
747
+ }
748
+
749
+ def tool_call_with_client(client: OpenAI, model_name: str):
750
+ messages = [
751
+ {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
752
+ {"role": "user", "content": "What's the weather like in Beijing today? Use the tool to check."}
753
+ ]
754
+ finish_reason = None
755
+ while finish_reason is None or finish_reason == "tool_calls":
756
+ completion = client.chat.completions.create(
757
+ model=model_name,
758
+ messages=messages,
759
+ temperature=0.6,
760
+ tools=tools, # tool list defined above
761
+ tool_choice="auto"
762
+ )
763
+ choice = completion.choices[0]
764
+ finish_reason = choice.finish_reason
765
+ if finish_reason == "tool_calls":
766
+ messages.append(choice.message)
767
+ for tool_call in choice.message.tool_calls:
768
+ tool_call_name = tool_call.function.name
769
+ tool_call_arguments = json.loads(tool_call.function.arguments)
770
+ tool_function = tool_map[tool_call_name]
771
+ tool_result = tool_function(**tool_call_arguments)
772
+ print("tool_result:", tool_result)
773
+
774
+ messages.append({
775
+ "role": "tool",
776
+ "tool_call_id": tool_call.id,
777
+ "name": tool_call_name,
778
+ "content": json.dumps(tool_result)
779
+ })
780
+ print("-" * 100)
781
+ print(choice.message.content)
782
+ ```
783
+
784
+ The `tool_call_with_client` function implements the pipeline from user query to tool execution.
785
+ This pipeline requires the inference engine to support Kimi-K2’s native tool-parsing logic.
786
+ For streaming output and manual tool-parsing, see the [Tool Calling Guide](docs/tool_call_guidance.md).
787
+
788
+ ---
789
+
790
+ ## 6. License
791
+
792
+ Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
793
+
794
+ ---
795
+
796
+ ## 7. Third Party Notices
797
+
798
+ See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md)
799
+
800
+ ---
801
+
802
+ ## 8. Contact Us
803
+
804
+ If you have any questions, please reach out at [[email protected]](mailto:[email protected]).
THIRD_PARTY_NOTICES.md ADDED
@@ -0,0 +1,47 @@
1
+ # THIRD_PARTY_NOTICES
2
+
3
+ This file lists third-party software contained in Kimi-K2 along with their licenses, in compliance with the redistribution clauses of those licenses.
4
+
5
+ ---
6
+
7
+ ## 1. DeepSeek-V3
8
+
9
+ Our model architecture is DeepSeek-V3-like, and some of the modeling code is copied from the source repository.
10
+
11
+ - **Source Repository**
12
+ https://huggingface.co/deepseek-ai/DeepSeek-V3
13
+
14
+ - **Files / Directories Used**
15
+ - configuration_deepseek.py
16
+ - modeling_deepseek.py
17
+
18
+ - **License Type**
19
+ MIT License
20
+
21
+ - **Copyright Notice**
22
+ Copyright (c) 2023 DeepSeek
23
+
24
+ - **Full License Text**
25
+ ```
26
+ MIT License
27
+
28
+ Copyright (c) 2023 DeepSeek
29
+
30
+ Permission is hereby granted, free of charge, to any person obtaining a copy
31
+ of this software and associated documentation files (the "Software"), to deal
32
+ in the Software without restriction, including without limitation the rights
33
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
34
+ copies of the Software, and to permit persons to whom the Software is
35
+ furnished to do so, subject to the following conditions:
36
+
37
+ The above copyright notice and this permission notice shall be included in all
38
+ copies or substantial portions of the Software.
39
+
40
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
41
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
42
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
43
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
44
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
45
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
46
+ SOFTWARE.
47
+ ```
chat_template.jinja ADDED
@@ -0,0 +1,46 @@
1
+ {%- if tools -%}
2
+ <|im_system|>tool_declare<|im_middle|>
3
+ # Tools
4
+ {{ tools | tojson }}<|im_end|>
5
+ {%- endif -%}
6
+ {%- for message in messages -%}
7
+ {%- if loop.first and messages[0]['role'] != 'system' -%}
8
+ <|im_system|>system<|im_middle|>You are Kimi, an AI assistant created by Moonshot AI.<|im_end|>
9
+ {%- endif -%}
10
+
11
+ {%- set role_name = message.get('name') or message['role'] -%}
12
+ {%- if message['role'] == 'user' -%}
13
+ <|im_user|>{{role_name}}<|im_middle|>
14
+ {%- elif message['role'] == 'assistant' -%}
15
+ <|im_assistant|>{{role_name}}<|im_middle|>
16
+ {%- else -%}
17
+ <|im_system|>{{role_name}}<|im_middle|>
18
+ {%- endif -%}
19
+
20
+ {%- if message['role'] == 'assistant' and message.get('tool_calls') -%}
21
+ {%- if message['content'] -%}{{ message['content'] }}{%- endif -%}
22
+ <|tool_calls_section_begin|>
23
+ {%- for tool_call in message['tool_calls'] -%}
24
+ {%- set formatted_id = tool_call['id'] -%}
25
+ <|tool_call_begin|>{{ formatted_id }}<|tool_call_argument_begin|>{% if tool_call['function']['arguments'] is string %}{{ tool_call['function']['arguments'] }}{% else %}{{ tool_call['function']['arguments'] | tojson }}{% endif %}<|tool_call_end|>
26
+ {%- endfor -%}
27
+ <|tool_calls_section_end|>
28
+ {%- elif message['role'] == 'tool' -%}
29
+ ## Return of {{ message.tool_call_id }}
30
+ {{ message['content'] }}
31
+ {%- elif message['content'] is string -%}
32
+ {{ message['content'] }}
33
+ {%- elif message['content'] is not none -%}
34
+ {% for content in message['content'] -%}
35
+ {% if content['type'] == 'image' or 'image' in content or 'image_url' in content -%}
36
+ <|media_start|>image<|media_content|><|media_pad|><|media_end|>
37
+ {% else -%}
38
+ {{ content['text'] }}
39
+ {%- endif -%}
40
+ {%- endfor -%}
41
+ {%- endif -%}
42
+ <|im_end|>
43
+ {%- endfor -%}
44
+ {%- if add_generation_prompt -%}
45
+ <|im_assistant|>assistant<|im_middle|>
46
+ {%- endif -%}
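For a plain system+user exchange (no tools, text-only content), the template above yields a prompt with the layout sketched below. This hand-assembled mirror is for illustration only; real prompts should be rendered through the Jinja template itself (e.g. via the tokenizer's chat-template support), and `render_simple` is a hypothetical helper.

```python
def render_simple(messages, add_generation_prompt=True):
    parts = []
    # The template injects a default system prompt when none is supplied.
    if messages[0]["role"] != "system":
        parts.append("<|im_system|>system<|im_middle|>"
                     "You are Kimi, an AI assistant created by Moonshot AI.<|im_end|>")
    for m in messages:
        # Role markers as defined in the template above.
        marker = {"user": "<|im_user|>", "assistant": "<|im_assistant|>"}.get(
            m["role"], "<|im_system|>")
        parts.append(f"{marker}{m['role']}<|im_middle|>{m['content']}<|im_end|>")
    if add_generation_prompt:
        parts.append("<|im_assistant|>assistant<|im_middle|>")
    return "".join(parts)

prompt = render_simple([{"role": "user", "content": "Hi"}])
print(prompt)
```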
config.json ADDED
@@ -0,0 +1,129 @@
1
+ {
2
+ "vocab_size": 163840,
3
+ "max_position_embeddings": 131072,
4
+ "hidden_size": 7168,
5
+ "intermediate_size": 18432,
6
+ "moe_intermediate_size": 2048,
7
+ "num_hidden_layers": 103,
8
+ "num_attention_heads": 64,
9
+ "n_shared_experts": 1,
10
+ "n_routed_experts": 768,
11
+ "routed_scaling_factor": 2.827,
12
+ "kv_lora_rank": 512,
13
+ "q_lora_rank": 1536,
14
+ "qk_rope_head_dim": 64,
15
+ "v_head_dim": 128,
16
+ "qk_nope_head_dim": 128,
17
+ "qk_head_dim": 192,
18
+ "head_dim": 64,
19
+ "n_group": 1,
20
+ "topk_group": 1,
21
+ "num_experts_per_tok": 16,
22
+ "first_k_dense_replace": 1,
23
+ "norm_topk_prob": true,
24
+ "rope_interleave": true,
25
+ "num_key_value_heads": 64,
26
+ "hidden_act": "silu",
27
+ "initializer_range": 0.02,
28
+ "rms_norm_eps": 1e-06,
29
+ "pretraining_tp": 1,
30
+ "use_cache": true,
31
+ "rope_theta": 50000.0,
32
+ "rope_scaling": {
33
+ "beta_fast": 1.0,
34
+ "beta_slow": 1.0,
35
+ "factor": 32.0,
36
+ "mscale": 1.0,
37
+ "mscale_all_dim": 1.0,
38
+ "original_max_position_embeddings": 4096,
39
+ "type": "yarn",
40
+ "rope_type": "yarn"
41
+ },
42
+ "attention_bias": false,
43
+ "attention_dropout": 0.0,
44
+ "return_dict": true,
45
+ "output_hidden_states": false,
46
+ "torchscript": false,
47
+ "torch_dtype": "bfloat16",
48
+ "pruned_heads": {},
49
+ "tie_word_embeddings": false,
50
+ "chunk_size_feed_forward": 0,
51
+ "is_encoder_decoder": false,
52
+ "is_decoder": false,
53
+ "cross_attention_hidden_size": null,
54
+ "add_cross_attention": false,
55
+ "tie_encoder_decoder": false,
56
+ "architectures": [
57
+ "DeepseekV3ForCausalLM"
58
+ ],
59
+ "finetuning_task": null,
60
+ "id2label": {
61
+ "0": "LABEL_0",
62
+ "1": "LABEL_1"
63
+ },
64
+ "label2id": {
65
+ "LABEL_0": 0,
66
+ "LABEL_1": 1
67
+ },
68
+ "task_specific_params": null,
69
+ "problem_type": null,
70
+ "tokenizer_class": null,
71
+ "prefix": null,
72
+ "bos_token_id": 163584,
73
+ "pad_token_id": null,
74
+ "eos_token_id": 163585,
75
+ "sep_token_id": null,
76
+ "decoder_start_token_id": null,
77
+ "max_length": 20,
78
+ "min_length": 0,
79
+ "do_sample": false,
80
+ "early_stopping": false,
81
+ "num_beams": 1,
82
+ "num_beam_groups": 1,
83
+ "diversity_penalty": 0.0,
84
+ "temperature": 1.0,
85
+ "top_k": 50,
86
+ "top_p": 1.0,
87
+ "typical_p": 1.0,
88
+ "repetition_penalty": 1.0,
89
+ "length_penalty": 1.0,
90
+ "no_repeat_ngram_size": 0,
91
+ "encoder_no_repeat_ngram_size": 0,
92
+ "bad_words_ids": null,
93
+ "num_return_sequences": 1,
94
+ "output_scores": false,
95
+ "return_dict_in_generate": false,
96
+ "forced_bos_token_id": null,
97
+ "forced_eos_token_id": null,
98
+ "remove_invalid_values": false,
99
+ "exponential_decay_length_penalty": null,
100
+ "suppress_tokens": null,
101
+ "begin_suppress_tokens": null,
102
+ "_name_or_path": "/mnt/weka/home/ggb/mergekit/kimi-k2-merged",
103
+ "transformers_version": "4.55.4",
104
+ "num_nextn_predict_layers": 0,
105
+ "ep_size": 1,
106
+ "topk_method": "noaux_tc",
107
+ "moe_layer_freq": 1,
108
+ "scoring_func": "sigmoid",
109
+ "aux_loss_alpha": 0.001,
110
+ "seq_aux": true,
111
+ "auto_map": {
112
+ "AutoConfig": "configuration_deepseek.DeepseekV3Config",
113
+ "AutoModel": "modeling_deepseek.DeepseekV3Model",
114
+ "AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
115
+ },
116
+ "model_type": "deepseek_v3",
117
+ "tf_legacy_loss": false,
118
+ "use_bfloat16": false,
119
+ "output_attentions": false,
120
+ "quantization_config": {
121
+ "quant_method": "fp8",
122
+ "activation_scheme": "dynamic",
123
+ "fmt": "e4m3",
124
+ "weight_block_size": [
125
+ 128,
126
+ 128
127
+ ]
128
+ }
129
+ }
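Two bits of arithmetic implied by the config above, sketched for illustration: the MLA per-head query/key width is the concatenation of its non-RoPE and RoPE parts (`qk_head_dim = qk_nope_head_dim + qk_rope_head_dim`), and the block-fp8 `weight_block_size` of 128×128 means one scale factor per 128×128 weight tile. `num_scale_blocks` is a hypothetical helper, not part of any loader.

```python
import math

# Values copied from the config above.
QK_NOPE_HEAD_DIM = 128
QK_ROPE_HEAD_DIM = 64
QK_HEAD_DIM = 192
WEIGHT_BLOCK = (128, 128)  # quantization_config["weight_block_size"]

# Per-head query/key width is the concatenation of the non-RoPE and RoPE parts.
assert QK_HEAD_DIM == QK_NOPE_HEAD_DIM + QK_ROPE_HEAD_DIM

def num_scale_blocks(rows: int, cols: int, block=WEIGHT_BLOCK) -> int:
    """How many block-fp8 scale factors a rows x cols weight matrix carries."""
    return math.ceil(rows / block[0]) * math.ceil(cols / block[1])

# e.g. a dense-MLP up-projection weight of shape [intermediate_size, hidden_size]
print(num_scale_blocks(18432, 7168))  # 144 * 56 = 8064
```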
configuration_deepseek.py ADDED
@@ -0,0 +1,212 @@
1
+ # Copy from https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/configuration_deepseek.py
2
+
3
+ from transformers.configuration_utils import PretrainedConfig
4
+ from transformers.utils import logging
5
+
6
+ logger = logging.get_logger(__name__)
7
+
8
+ DEEPSEEK_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
9
+ class DeepseekV3Config(PretrainedConfig):
10
+ r"""
11
+ This is the configuration class to store the configuration of a [`DeepseekV3Model`]. It is used to instantiate a DeepSeek
12
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
13
+ defaults will yield a similar configuration to that of the DeepSeek-V3.
14
+
15
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
16
+ documentation from [`PretrainedConfig`] for more information.
17
+
18
+
19
+ Args:
20
+ vocab_size (`int`, *optional*, defaults to 129280):
21
+ Vocabulary size of the DeepSeek model. Defines the number of different tokens that can be represented by the
22
+ `inputs_ids` passed when calling [`DeepseekV3Model`]
23
+ hidden_size (`int`, *optional*, defaults to 4096):
24
+ Dimension of the hidden representations.
25
+ intermediate_size (`int`, *optional*, defaults to 11008):
26
+ Dimension of the MLP representations.
27
+ moe_intermediate_size (`int`, *optional*, defaults to 1407):
28
+ Dimension of the MoE representations.
29
+ num_hidden_layers (`int`, *optional*, defaults to 32):
30
+ Number of hidden layers in the Transformer decoder.
31
+ num_nextn_predict_layers (`int`, *optional*, defaults to 1):
32
+ Number of nextn predict layers in the DeepSeekV3 Model.
33
+ num_attention_heads (`int`, *optional*, defaults to 32):
34
+ Number of attention heads for each attention layer in the Transformer decoder.
35
+ n_shared_experts (`int`, *optional*, defaults to None):
36
+ Number of shared experts, None means dense model.
37
+ n_routed_experts (`int`, *optional*, defaults to None):
38
+ Number of routed experts, None means dense model.
39
+ routed_scaling_factor (`float`, *optional*, defaults to 1.0):
40
+ Scaling factor for routed experts.
41
+ topk_method (`str`, *optional*, defaults to `noaux_tc`):
42
+ Topk method used in routed gate.
43
+ n_group (`int`, *optional*, defaults to None):
44
+ Number of groups for routed experts.
45
+ topk_group (`int`, *optional*, defaults to None):
46
+ Number of selected groups for each token (ensuring the selected experts for each token are only within `topk_group` groups).
47
+ num_experts_per_tok (`int`, *optional*, defaults to None):
48
+ Number of selected experts, None means dense model.
49
+ moe_layer_freq (`int`, *optional*, defaults to 1):
50
+ The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
51
+ first_k_dense_replace (`int`, *optional*, defaults to 0):
52
+ Number of dense layers in shallow layers (embed->dense->dense->...->dense->moe->moe...->lm_head).
53
+ \--k dense layers--/
54
+ norm_topk_prob (`bool`, *optional*, defaults to False):
55
+ Whether to normalize the weights of the routed experts.
56
+ scoring_func (`str`, *optional*, defaults to 'softmax'):
57
+ Method of computing expert weights.
58
+ aux_loss_alpha (`float`, *optional*, defaults to 0.001):
59
+ Auxiliary loss weight coefficient.
60
+ seq_aux (`bool`, *optional*, defaults to True):
61
+ Whether to compute the auxiliary loss for each individual sample.
62
+ num_key_value_heads (`int`, *optional*):
63
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
64
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
65
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
66
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
67
+ by meanpooling all the original heads within that group. For more details checkout [this
68
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
69
+ `num_attention_heads`.
70
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
71
+ The non-linear activation function (function or string) in the decoder.
72
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
73
+ The maximum sequence length that this model might ever be used with.
74
+ initializer_range (`float`, *optional*, defaults to 0.02):
75
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
76
+ rms_norm_eps (`float`, *optional*, defaults to 1e-06):
77
+ The epsilon used by the rms normalization layers.
78
+ use_cache (`bool`, *optional*, defaults to `True`):
79
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
80
+ relevant if `config.is_decoder=True`.
81
+ pad_token_id (`int`, *optional*):
82
+ Padding token id.
83
+ bos_token_id (`int`, *optional*, defaults to 1):
84
+ Beginning of stream token id.
85
+ eos_token_id (`int`, *optional*, defaults to 2):
86
+ End of stream token id.
87
+ pretraining_tp (`int`, *optional*, defaults to 1):
88
+ Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
89
+ document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
90
+ necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
91
+ issue](https://github.com/pytorch/pytorch/issues/76232).
92
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
93
+ Whether to tie weight embeddings
94
+ rope_theta (`float`, *optional*, defaults to 10000.0):
95
+ The base period of the RoPE embeddings.
96
+ rope_scaling (`Dict`, *optional*):
97
+ Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
98
+ strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
99
+ `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
100
+ `max_position_embeddings` to the expected new maximum.
101
+ attention_bias (`bool`, *optional*, defaults to `False`):
102
+ Whether to use a bias in the query, key, value and output projection layers during self-attention.
103
+ attention_dropout (`float`, *optional*, defaults to 0.0):
104
+ The dropout ratio for the attention probabilities.
105
+
106
+ ```python
107
+ >>> from transformers import DeepseekV3Model, DeepseekV3Config
108
+
109
+ >>> # Initializing a Deepseek-V3 style configuration
110
+ >>> configuration = DeepseekV3Config()
111
+
112
+ >>> # Accessing the model configuration
113
+ >>> configuration = model.config
114
+ ```"""
115
+
116
+ model_type = "deepseek_v3"
117
+ keys_to_ignore_at_inference = ["past_key_values"]
118
+
119
+ def __init__(
120
+ self,
121
+ vocab_size=129280,
122
+ hidden_size=7168,
123
+ intermediate_size=18432,
124
+ moe_intermediate_size = 2048,
125
+ num_hidden_layers=61,
126
+ num_nextn_predict_layers=1,
127
+ num_attention_heads=128,
128
+ num_key_value_heads=128,
129
+ n_shared_experts = 1,
130
+ n_routed_experts = 256,
131
+ ep_size = 1,
132
+ routed_scaling_factor = 2.5,
133
+ kv_lora_rank = 512,
134
+ q_lora_rank = 1536,
135
+ qk_rope_head_dim = 64,
136
+ v_head_dim = 128,
137
+ qk_nope_head_dim = 128,
138
+ topk_method = 'noaux_tc',
139
+ n_group = 8,
140
+ topk_group = 4,
141
+ num_experts_per_tok = 8,
142
+ moe_layer_freq = 1,
143
+ first_k_dense_replace = 3,
144
+ norm_topk_prob = True,
145
+ scoring_func = 'sigmoid',
146
+ aux_loss_alpha = 0.001,
147
+ seq_aux = True,
148
+ hidden_act="silu",
149
+ max_position_embeddings=4096,
150
+ initializer_range=0.02,
151
+ rms_norm_eps=1e-6,
152
+ use_cache=True,
153
+ pad_token_id=None,
154
+ bos_token_id=0,
155
+ eos_token_id=1,
156
+ pretraining_tp=1,
157
+ tie_word_embeddings=False,
158
+ rope_theta=10000.0,
159
+ rope_scaling=None,
160
+ attention_bias=False,
161
+ attention_dropout=0.0,
162
+ **kwargs,
163
+ ):
164
+ self.vocab_size = vocab_size
165
+ self.max_position_embeddings = max_position_embeddings
166
+ self.hidden_size = hidden_size
167
+ self.intermediate_size = intermediate_size
168
+ self.moe_intermediate_size = moe_intermediate_size
169
+ self.num_hidden_layers = num_hidden_layers
170
+ self.num_nextn_predict_layers = num_nextn_predict_layers
171
+ self.num_attention_heads = num_attention_heads
172
+ self.n_shared_experts = n_shared_experts
173
+ self.n_routed_experts = n_routed_experts
174
+ self.ep_size = ep_size
175
+ self.routed_scaling_factor = routed_scaling_factor
176
+ self.kv_lora_rank = kv_lora_rank
177
+ self.q_lora_rank = q_lora_rank
178
+ self.qk_rope_head_dim = qk_rope_head_dim
179
+ self.v_head_dim = v_head_dim
180
+ self.qk_nope_head_dim = qk_nope_head_dim
181
+ self.topk_method = topk_method
182
+ self.n_group = n_group
183
+ self.topk_group = topk_group
184
+ self.num_experts_per_tok = num_experts_per_tok
185
+ self.moe_layer_freq = moe_layer_freq
186
+ self.first_k_dense_replace = first_k_dense_replace
187
+ self.norm_topk_prob = norm_topk_prob
188
+ self.scoring_func = scoring_func
189
+ self.aux_loss_alpha = aux_loss_alpha
190
+ self.seq_aux = seq_aux
191
+ # for backward compatibility
192
+ if num_key_value_heads is None:
193
+ num_key_value_heads = num_attention_heads
194
+
195
+ self.num_key_value_heads = num_key_value_heads
196
+ self.hidden_act = hidden_act
197
+ self.initializer_range = initializer_range
198
+ self.rms_norm_eps = rms_norm_eps
199
+ self.pretraining_tp = pretraining_tp
200
+ self.use_cache = use_cache
201
+ self.rope_theta = rope_theta
202
+ self.rope_scaling = rope_scaling
203
+ self.attention_bias = attention_bias
204
+ self.attention_dropout = attention_dropout
205
+
206
+ super().__init__(
207
+ pad_token_id=pad_token_id,
208
+ bos_token_id=bos_token_id,
209
+ eos_token_id=eos_token_id,
210
+ tie_word_embeddings=tie_word_embeddings,
211
+ **kwargs,
212
+ )
generation_config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "max_length": 131072,
3
+ "eos_token_id": 163586
4
+ }
kimi-logo.png ADDED
model-00174-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49f900f0ba2ee48828b81121f9340403e900b58154c14ec472b29b867facfd84
3
+ size 9994464779
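Each `model-*.safetensors` entry in this commit is not the weight file itself but a Git LFS pointer in the three-line format shown above (spec version, `sha256:`-prefixed object id, byte size). A minimal stdlib parser, for illustration; `parse_lfs_pointer` is a hypothetical helper, not part of the git-lfs tooling.

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its key/value fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    # oid is prefixed with its hash algorithm, e.g. "sha256:<hex digest>"
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

ptr = """version https://git-lfs.github.com/spec/v1
oid sha256:49f900f0ba2ee48828b81121f9340403e900b58154c14ec472b29b867facfd84
size 9994464779
"""
info = parse_lfs_pointer(ptr)
print(info["size"])  # 9994464779
```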
model-00175-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:928985d2c8b3cd839a446f079eb5a982bbafbe94aa21635beb138dc94d87cf7c
3
+ size 9999741935
model-00176-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a50350beee5b2ec91400558d9e0008ee39f00587b4c3f35ddf55b9afa2b847a
3
+ size 9998120697
model-00177-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b17de8449fb8706eeca95ce50e95d1d86c88e7fdfaa1dc7125dda8c14159dd3a
3
+ size 9999741079
model-00178-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9afbd8782278730d6746a6b4f52f6cfb3b6aaf3ef78d547adf48bff74ee92028
3
+ size 9996085485
model-00179-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0cc348070852316ea3d4b4534260fb9887da35ac0653db231bf2d98342a0bd16
3
+ size 9999741393
model-00180-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50068ec84938312602c32b2764808410b783c1671b53cafe066dc60a93a6f918
3
+ size 9999740939
model-00181-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d80a4c28c23579fbe20dc4a0f16fe9646fddd036b8b039f66fd7db3865569ad5
3
+ size 9994465345
model-00182-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9bf00e8fc7fff62aa7ed1490ede1e1100a1cf72b65b176dd14dd6958c5c629fb
3
+ size 9999741387
model-00183-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4bf6f6432300bc94b986b341c417895f058dab62b0fa9bbb255183e1713a813b
3
+ size 9999741393
model-00184-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a11196c0794346371960c3949e7d69f36d378f6a0912b7dfd38067ed46a0c88f
3
+ size 9994464987
model-00185-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7637e683f43fb3d5ed1f9fde647ff5618af65831953ad9582a5a0e6eed1ed939
3
+ size 9999742007
model-00186-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:058c054f97ed0b2342379080cd19665435cdfc93cd5c273bb743f5aa4e5ccbb4
3
+ size 9998120645
model-00187-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e5e73b4d8ad2877e6fd5bb426366c44de282014b47944e2cda60d82152f8f94
3
+ size 9999741029
model-00188-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:731e2d0ce3d2ae77af5bfc74cd0ef41b7357772396ac64bae88a3af8a0c25ff2
3
+ size 9996085313
model-00189-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e28034a4eb75c9254f4607815cd1ec9e8ceaa79ed2254206f793cb99d06c7e39
3
+ size 9999741233
model-00190-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:861f25a80e3a380cd3466d7bb0c88b4dc6a504afaad28011e06defe2a5fc4793
3
+ size 9999740903
model-00191-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f77facb480e3d505bbc51567f42b12af60ea6c1e45953ac46613f6a2454c5683
3
+ size 9998137352
model-00192-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1ed01589fcd8ce965e1dc1ca9529f26c25a11cdb76338329c06b9095954266de
3
+ size 9994465836
model-00193-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1f90137baaec2f4dfb5345128a8a20e5eb52b24ef3a0d2db46b394de673d29f4
3
+ size 9999741393
model-00194-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7df659a2aab56b6207064c1a19de7f02ba85f1a991ed11dca195f56ae5d99ce6
3
+ size 9999741387
model-00195-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89a39f7cda7fffcb0f22cbae940b0d5e398de19f1d6f1f35af37b16ec3491de3
3
+ size 9999741341
model-00196-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2eb2ea827add386924d476c6b913f8842501b1d0f0ec66e218c7aba357ac5d4f
3
+ size 9994451341
model-00197-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:595fd5606d32f2edbf33a44ecedc32b4221a261016d90c75355f7b7386011679
3
+ size 9999742295
model-00198-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:57f10f89f00e55f8553b873b89c580b3789117e4fc13a0522f1460ff95d8a64f
3
+ size 9996086729
model-00199-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fdb778404294951eb42b5bb7326cc39c50eb55ab970b0016d849085f5a11d1c5
3
+ size 9999742395
model-00200-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb9c0ab1bbc2dc69ac5ed1d211851981ebcdf725a9d033b680e48a0c28059237
3
+ size 9999742267
model-00201-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:01d6923f784703860840d096fc4bb935556e82c86226e8a8b49e4ea578873d7b
3
+ size 9994467108
model-00202-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:510bbd56dfc345a0394d3a92e8264ac9de8a4e1cefc3c0bdef007edf100c281f
3
+ size 9999742749
model-00203-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8644b3896497a91c9e29890c3aad0254710e23d6dddd867f864d61ba1be0ab48
3
+ size 9999742755
model-00204-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a0026b3add7b214ea30d1e3505d5783e9bf13126ce74045e74806ef4ca857da
3
+ size 9999742749
model-00205-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9af3cf486b5a2f583d7ed6cc25eccaac32bb2d6049f51d1831bbddeb53ded0fc
3
+ size 9994465943
model-00206-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec026e129f3fd64483ba43b51da02fe728de988216ef2d7acfdf0f46dffa7957
3
+ size 9998120491
model-00207-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a4f7897ff76064dc3e566021c4f56ef8770518c39302347fe0d8f232a4ec052b
3
+ size 9999741117
model-00208-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7ee003b06705479799509e962d45296f1a4d7bfc25464a9fb8f783d6e4f5809
3
+ size 9996085231
model-00209-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a5bb6cf8d6165ef70893e3debf97fba3679711fbb3bd0975646ecc0596bffc9
3
+ size 9999740935
model-00210-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:16457f0dd9deeab7da062aad52e1ad8460c350752ca62ff03e01721b2cffb33e
3
+ size 9999740911
model-00211-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49bf93fab43910d196758a5da0bd34b4f527bd490b731876ace346a4ca499056
3
+ size 9998121779
model-00212-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3901fea9c523c06e1b929f5e1d7c9627b14de4a53b9b150a336e614738c6e47
3
+ size 9996085507
model-00213-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:07df263340b663fc050231973eba43e955e7ce69a1e8301fa419ff000dfc0950
3
+ size 9999741393
model-00214-of-00348.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:750488911cb5c3e7d330bf9bb343d5f6d67055386975855fd6be1069b94ff09f
3
+ size 9999741375