csabakecskemeti committed on
Commit b83bb97 · verified · 1 Parent(s): 008bb31

Update README.md

Files changed (1)
  1. README.md +10 -273
README.md CHANGED
@@ -1,281 +1,18 @@
  ---
- license: other
- license_name: modified-mit
- library_name: transformers
  ---
- <div align="center">
- <picture>
- <img src="figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intelligence">
- </picture>
- </div>
- <hr>

- <div align="center" style="line-height:1">
- <a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
- <a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
- </div>

- <div align="center" style="line-height: 1;">
- <a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
- <a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
- <a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
- </div>
- <div align="center" style="line-height: 1;">
- <a href="https://huggingface.co/moonshotai/Kimi-K2-Thinking/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
- </div>

- <p align="center">
- <b>📰&nbsp;&nbsp;<a href="https://moonshotai.github.io/Kimi-K2/thinking.html">Tech Blog</a></b>
- </p>

- ## 1. Model Introduction

- Kimi K2 Thinking is the latest and most capable open-source thinking model. Building on Kimi K2, we trained it as a thinking agent that reasons step by step while dynamically invoking tools. It sets a new state of the art on Humanity's Last Exam (HLE), BrowseComp, and other benchmarks by dramatically scaling multi-step reasoning depth and maintaining stable tool use across 200–300 sequential calls. At the same time, K2 Thinking is a natively INT4-quantized model with a 256k context window, achieving lossless reductions in inference latency and GPU memory usage.
-
- ### Key Features
- - **Deep Thinking & Tool Orchestration**: End-to-end trained to interleave chain-of-thought reasoning with function calls, enabling autonomous research, coding, and writing workflows that last hundreds of steps without drift.
- - **Native INT4 Quantization**: Quantization-Aware Training (QAT) is employed in the post-training stage to achieve a lossless 2x speed-up in low-latency mode.
- - **Stable Long-Horizon Agency**: Maintains coherent, goal-directed behavior across up to 200–300 consecutive tool invocations, surpassing prior models that degrade after 30–50 steps.
-
- ## 2. Model Summary
-
- <div align="center">
-
- | | |
- |:---:|:---:|
- | **Architecture** | Mixture-of-Experts (MoE) |
- | **Total Parameters** | 1T |
- | **Activated Parameters** | 32B |
- | **Number of Layers** (Dense layer included) | 61 |
- | **Number of Dense Layers** | 1 |
- | **Attention Hidden Dimension** | 7168 |
- | **MoE Hidden Dimension** (per Expert) | 2048 |
- | **Number of Attention Heads** | 64 |
- | **Number of Experts** | 384 |
- | **Selected Experts per Token** | 8 |
- | **Number of Shared Experts** | 1 |
- | **Vocabulary Size** | 160K |
- | **Context Length** | 256K |
- | **Attention Mechanism** | MLA |
- | **Activation Function** | SwiGLU |
- </div>
-
- ## 3. Evaluation Results
-
- **Reasoning Tasks**
- | Benchmark | Setting | K2 Thinking | GPT-5 | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 | Grok-4 |
- |:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|:-------:|
- | **HLE (Text-only)** | no tools | 23.9 | 26.3 | 19.8* | 7.9 | 19.8 | 25.4 |
- | | w/ tools | 44.9 | 41.7* | 32.0* | 21.7 | 20.3* | 41.0 |
- | | heavy | 51.0 | 42.0 | - | - | - | 50.7 |
- | **AIME25** | no tools | 94.5 | 94.6 | 87.0 | 51.0 | 89.3 | 91.7 |
- | | w/ python | 99.1 | 99.6 | 100.0 | 75.2 | 58.1* | 98.8 |
- | | heavy | 100.0 | 100.0 | - | - | - | 100.0 |
- | **HMMT25** | no tools | 89.4 | 93.3 | 74.6* | 38.8 | 83.6 | 90.0 |
- | | w/ python | 95.1 | 96.7 | 88.8* | 70.4 | 49.5* | 93.9 |
- | | heavy | 97.5 | 100.0 | - | - | - | 96.7 |
- | **IMO-AnswerBench** | no tools | 78.6 | 76.0* | 65.9* | 45.8 | 76.0* | 73.1 |
- | **GPQA** | no tools | 84.5 | 85.7 | 83.4 | 74.2 | 79.9 | 87.5 |
-
- **General Tasks**
- | Benchmark | Setting | K2 Thinking | GPT-5 | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 |
- |:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|
- | **MMLU-Pro** | no tools | 84.6 | 87.1 | 87.5 | 81.9 | 85.0 |
- | **MMLU-Redux** | no tools | 94.4 | 95.3 | 95.6 | 92.7 | 93.7 |
- | **Longform Writing** | no tools | 73.8 | 71.4 | 79.8 | 62.8 | 72.5 |
- | **HealthBench** | no tools | 58.0 | 67.2 | 44.2 | 43.8 | 46.9 |
-
- **Agentic Search Tasks**
- | Benchmark | Setting | K2 Thinking | GPT-5 | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 |
- |:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|
- | **BrowseComp** | w/ tools | 60.2 | 54.9 | 24.1 | 7.4 | 40.1 |
- | **BrowseComp-ZH** | w/ tools | 62.3 | 63.0* | 42.4* | 22.2 | 47.9 |
- | **Seal-0** | w/ tools | 56.3 | 51.4* | 53.4* | 25.2 | 38.5* |
- | **FinSearchComp-T3** | w/ tools | 47.4 | 48.5* | 44.0* | 10.4 | 27.0* |
- | **Frames** | w/ tools | 87.0 | 86.0* | 85.0* | 58.1 | 80.2* |
-
- **Coding Tasks**
- | Benchmark | Setting | K2 Thinking | GPT-5 | Claude Sonnet 4.5<br> (Thinking) | K2 0905 | DeepSeek-V3.2 |
- |:----------:|:--------:|:------------:|:------:|:----------------------------:|:--------:|:--------------:|
- | **SWE-bench Verified** | w/ tools | 71.3 | 74.9 | 77.2 | 69.2 | 67.8 |
- | **SWE-bench Multilingual** | w/ tools | 61.1 | 55.3* | 68.0 | 55.9 | 57.9 |
- | **Multi-SWE-bench** | w/ tools | 41.9 | 39.3* | 44.3 | 33.5 | 30.6 |
- | **SciCode** | no tools | 44.8 | 42.9 | 44.7 | 30.7 | 37.7 |
- | **LiveCodeBenchV6** | no tools | 83.1 | 87.0* | 64.0* | 56.1* | 74.1 |
- | **OJ-Bench (cpp)** | no tools | 48.7 | 56.2* | 30.4* | 25.5* | 38.2* |
- | **Terminal-Bench** | w/ simulated tools (JSON) | 47.1 | 43.8 | 51.0 | 44.5 | 37.7 |
- <details>
- <summary><b>Footnotes</b></summary>
-
- 1. To ensure a fast, lightweight experience, we selectively employ a subset of tools and reduce the number of tool-call steps in chat mode on kimi.com. As a result, chatting on kimi.com may not reproduce our benchmark scores. Our agentic mode will be updated soon to reflect the full capabilities of K2 Thinking.
-
- 2. **Testing Details**:
-  2.1. All benchmarks were evaluated at temperature = 1.0 and 256k context length for K2 Thinking, except for SciCode, for which we followed the official temperature setting of 0.0.
-  2.2. HLE (no tools), AIME25, HMMT25, and GPQA were capped at a 96k thinking-token budget, while IMO-AnswerBench, LiveCodeBench, and OJ-Bench were capped at a 128k thinking-token budget. Longform Writing was capped at a 32k completion-token budget.
-  2.3. For AIME and HMMT (no tools), we report the average of 32 runs (avg@32). For AIME and HMMT (with Python), we report the average of 16 runs (avg@16). For IMO-AnswerBench, we report the average of 8 runs (avg@8).
-
- 3. **Baselines**:
-  3.1. GPT-5, Claude Sonnet 4.5, Grok-4, and DeepSeek-V3.2 results are quoted from the [GPT-5 post](https://openai.com/index/introducing-gpt-5/), [GPT-5 for Developers post](https://openai.com/index/introducing-gpt-5-for-developers/), [GPT-5 system card](https://openai.com/index/gpt-5-system-card/), [claude-sonnet-4-5 post](https://www.anthropic.com/news/claude-sonnet-4-5), [grok-4 post](https://x.ai/news/grok-4), [deepseek-v3.2 post](https://api-docs.deepseek.com/news/news250929), the [public Terminal-Bench leaderboard](https://www.tbench.ai/leaderboard) (Terminus-2), the [public Vals AI leaderboard](https://vals.ai/) and [artificialanalysis](https://artificialanalysis.ai/). Benchmarks for which no public scores were available were re-tested under the same conditions used for K2 Thinking and are marked with an asterisk (*).
-  3.2. The official GPT-5 and Grok-4 scores on the HLE full set with tools are 35.2 and 38.6. In our internal evaluation on the HLE text-only subset, GPT-5 scores 41.7 and Grok-4 scores 38.6 (Grok-4’s launch cited 41.0 on the text-only subset). For GPT-5’s HLE text-only score w/o tools, we use the score from <a href="https://scale.com/leaderboard/humanitys_last_exam_text_only" target="_blank">Scale.ai</a>. The official GPT-5 HLE full-set score w/o tools is 24.8.
-  3.3. For <a href="https://aclanthology.org/2025.emnlp-main.1794.pdf" target="_blank">IMO-AnswerBench</a>: GPT-5 scored 65.6 in the benchmark paper. We re-evaluated GPT-5 with the official API and obtained a score of 76.
-
- 4. **For HLE (w/ tools) and the agentic-search benchmarks**:
-  4.1. K2 Thinking was equipped with search, code-interpreter, and web-browsing tools.
-  4.2. BrowseComp-ZH, Seal-0, and FinSearchComp-T3 were run 4 times independently and the average is reported (avg@4).
-  4.3. The evaluation used o3-mini as judge, configured identically to the official HLE setting; judge prompts were taken verbatim from the official repository.
-  4.4. On HLE, the maximum step limit was 120, with a 48k-token reasoning budget per step; on agentic-search tasks, the limit was 300 steps with a 24k-token reasoning budget per step.
-  4.5. When tool execution results cause the accumulated input to exceed the model's context limit (256k), we employ a simple context-management strategy that hides all previous tool outputs (see the sketch after these footnotes).
-  4.6. Web access to Hugging Face may lead to data leakage in certain benchmarks, such as HLE. K2 Thinking can achieve a score of 51.3 on HLE without blocking Hugging Face. To ensure a fair and rigorous comparison, we blocked access to Hugging Face during testing.
-
- 5. **For Coding Tasks**:
-  5.1. Terminal-Bench scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser.
-  5.2. For the other coding tasks, results were produced with our in-house evaluation harness. The harness is derived from SWE-agent, but we clamp the context windows of the Bash and Edit tools and rewrite the system prompt to match the task semantics.
-  5.3. All reported coding-task scores are averaged over 5 independent runs.
-
- 6. **Heavy Mode**: K2 Thinking Heavy Mode employs an efficient parallel strategy: it first rolls out eight trajectories simultaneously, then reflectively aggregates all outputs to generate the final result. Heavy Mode for GPT-5 denotes the official GPT-5 Pro score.
- </details>
-
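Footnote 4.5 describes the context-management strategy only in prose. Here is a minimal sketch of the idea, assuming OpenAI-style message dicts and a crude 4-characters-per-token estimate; the helper name and heuristics are ours, not Moonshot AI's implementation:

```python
def hide_previous_tool_outputs(messages: list, context_limit: int = 256_000) -> list:
    """If the accumulated input exceeds the context limit, blank out every
    tool message except the most recent one (our reading of "previous")."""
    est_tokens = sum(len(str(m.get("content", ""))) for m in messages) // 4  # ~4 chars/token
    if est_tokens <= context_limit:
        return messages
    last_tool = max((i for i, m in enumerate(messages) if m.get("role") == "tool"), default=-1)
    return [
        {**m, "content": "[tool output hidden to fit the context window]"}
        if m.get("role") == "tool" and i < last_tool else m
        for i, m in enumerate(messages)
    ]
```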
- ## 4. Native INT4 Quantization
-
- Low-bit quantization is an effective way to reduce inference latency and GPU memory usage on large-scale inference servers. However, thinking models produce very long decodes, so quantization often results in substantial performance drops.
-
- To overcome this challenge, we adopt Quantization-Aware Training (QAT) during the post-training phase, applying INT4 weight-only quantization to the MoE components. This allows K2 Thinking to support native INT4 inference with a roughly 2x generation speed-up while achieving state-of-the-art performance. All benchmark results are reported under INT4 precision.
-
- The checkpoints are saved in the compressed-tensors format, which is supported by most mainstream inference engines. If you need the checkpoints in a higher precision such as FP8 or BF16, you can refer to the [official compressed-tensors repo](https://github.com/vllm-project/compressed-tensors) to unpack the INT4 weights and convert them to any higher precision.
-
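That unpacking step is exactly what the dequantized BF16 release in this repo performs ahead of time. As a rough illustration, symmetric group-wise INT4 weight-only dequantization looks like the sketch below; the nibble layout, group size, and tensor shapes are assumptions, so consult the compressed-tensors repo for the authoritative format:

```python
import torch

def unpack_int4_to_bf16(packed: torch.Tensor, scales: torch.Tensor, group_size: int = 32) -> torch.Tensor:
    """Dequantize symmetric INT4 weights to BF16.

    Assumes `packed` is a flat uint8 tensor holding two 4-bit values per byte
    (low nibble first) and `scales` holds one scale per `group_size` weights.
    """
    low = (packed & 0x0F).to(torch.int16)
    high = (packed >> 4).to(torch.int16)
    q = torch.stack((low, high), dim=-1).reshape(-1)  # restore original weight order
    q = torch.where(q >= 8, q - 16, q)                # map unsigned [0,15] to signed [-8,7]
    # One scale per group of `group_size` consecutive weights (length must divide evenly).
    w = q.reshape(-1, group_size).float() * scales.reshape(-1, 1).float()
    return w.reshape(-1).to(torch.bfloat16)
```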
- ## 5. Deployment
- > [!Note]
- > You can access K2 Thinking's API at https://platform.moonshot.ai; we provide an OpenAI/Anthropic-compatible API.
-
- We currently recommend running Kimi-K2-Thinking on the following inference engines:
-
- * vLLM
- * SGLang
- * KTransformers
-
- Deployment examples can be found in the [Model Deployment Guide](docs/deploy_guidance.md).
-
- ---
-
- ## 6. Model Usage
-
- ### Chat Completion
-
- Once the local inference service is up, you can interact with it through the chat endpoint:
-
- ```python
- import openai
-
- def simple_chat(client: openai.OpenAI, model_name: str):
-     messages = [
-         {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
-         {"role": "user", "content": [{"type": "text", "text": "which one is bigger, 9.11 or 9.9? think carefully."}]},
-     ]
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         stream=False,
-         temperature=1.0,
-         max_tokens=4096
-     )
-     print(f"k2 answer: {response.choices[0].message.content}")
-     print("=====below is reasoning content======")
-     print(f"reasoning content: {response.choices[0].message.reasoning_content}")
- ```
-
- > [!NOTE]
- > The recommended temperature for Kimi-K2-Thinking is `temperature = 1.0`.
- > If no special instructions are required, the system prompt above is a good default.
-
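For example, a minimal driver for `simple_chat`, assuming a local server exposing an OpenAI-compatible endpoint (the URL, API key, and model name below are placeholders for your own deployment):

```python
import openai

# Hypothetical local endpoint; adjust base_url, api_key, and model name as needed.
client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
simple_chat(client, "moonshotai/Kimi-K2-Thinking")
```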
- ---
-
- ### Tool Calling
-
- Kimi-K2-Thinking has the same tool-calling settings as Kimi-K2-Instruct.
-
- To enable tool calling, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them.
-
- The following example demonstrates calling a weather tool end-to-end:
-
- ```python
- import json
- from openai import OpenAI
-
- # Your tool implementation
- def get_weather(city: str) -> dict:
-     return {"weather": "Sunny"}
-
- # Tool schema definition
- tools = [{
-     "type": "function",
-     "function": {
-         "name": "get_weather",
-         "description": "Retrieve current weather information. Call this when the user asks about the weather.",
-         "parameters": {
-             "type": "object",
-             "required": ["city"],
-             "properties": {
-                 "city": {
-                     "type": "string",
-                     "description": "Name of the city"
-                 }
-             }
-         }
-     }
- }]
-
- # Map tool names to their implementations
- tool_map = {
-     "get_weather": get_weather
- }
-
- def tool_call_with_client(client: OpenAI, model_name: str):
-     messages = [
-         {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
-         {"role": "user", "content": "What's the weather like in Beijing today? Use the tool to check."}
-     ]
-     finish_reason = None
-     while finish_reason is None or finish_reason == "tool_calls":
-         completion = client.chat.completions.create(
-             model=model_name,
-             messages=messages,
-             temperature=1.0,
-             tools=tools,  # tool list defined above
-             tool_choice="auto"
-         )
-         choice = completion.choices[0]
-         finish_reason = choice.finish_reason
-         if finish_reason == "tool_calls":
-             messages.append(choice.message)
-             for tool_call in choice.message.tool_calls:
-                 tool_call_name = tool_call.function.name
-                 tool_call_arguments = json.loads(tool_call.function.arguments)
-                 tool_function = tool_map[tool_call_name]
-                 tool_result = tool_function(**tool_call_arguments)
-                 print("tool_result:", tool_result)
-                 messages.append({
-                     "role": "tool",
-                     "tool_call_id": tool_call.id,
-                     "name": tool_call_name,
-                     "content": json.dumps(tool_result)
-                 })
-     print("-" * 100)
-     print(choice.message.content)
- ```
-
- The `tool_call_with_client` function implements the full pipeline from user query to tool execution.
- This pipeline requires the inference engine to support Kimi-K2’s native tool-parsing logic.
- For more information, see the [Tool Calling Guide](docs/tool_call_guidance.md).
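A hypothetical driver for the example above, reusing the same placeholder endpoint as in the chat-completion section:

```python
from openai import OpenAI

# Placeholder endpoint and model name; substitute your own deployment values.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
tool_call_with_client(client, "moonshotai/Kimi-K2-Thinking")
```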
-
- ---
-
- ## 7. License
-
- Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
-
- ---
-
- ## 8. Third Party Notices
-
- See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md)
-
- ---
-
- ## 9. Contact Us
-
- If you have any questions, please reach out at [[email protected]](mailto:[email protected]).
 
  ---
+ base_model:
+ - moonshotai/Kimi-K2-Thinking
+ pipeline_tag: text-generation
  ---

+ [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)

+ 'Make knowledge free for everyone'

+ The original INT4 model has been dequantized to BF16 with my own custom script:

+ [DQ_int4-to-bf16_dequant](https://github.com/csabakecskemeti/DQ_int4-to-bf16_dequant)
+ (inspired by the DeepSeek-V3 dequant script)

+ BF16 version of: [moonshotai/Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking)
+ <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
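A minimal sketch of loading the dequantized BF16 checkpoint with transformers; the repo id is a placeholder, and `device_map="auto"` plus `trust_remote_code=True` are assumptions for a typical multi-GPU setup (at 1T parameters, BF16 weights alone need roughly 2 TB of memory):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: the dequantized BF16 repo
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",       # shard across available GPUs / offload as needed
    trust_remote_code=True,  # assumption: Kimi-K2 ships a custom architecture
)
```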