mackenzietechdocs committed
Commit ecea77f (verified)
1 parent: a7e62ac

Add Artificial Analysis evaluations for deepseek-v3-2


This commit adds structured evaluation results to the model card. The results are formatted using the model-index specification and will be displayed in the model card's evaluation widget.
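The added metadata follows the `model-index` layout used by model cards (name → results → task/dataset/metrics/source). As a minimal sketch of how such parsed front matter can be consumed, the snippet below walks that structure with plain Python dicts standing in for the parsed YAML; `card_data` and the helper `metric_value` are illustrative names, and only a subset of the committed metrics is shown:

```python
# Illustrative stand-in for the parsed YAML front matter of the model card;
# the values shown are a subset of the metrics added in this commit.
card_data = {
    "model-index": [
        {
            "name": "DeepSeek-V3.2",
            "results": [
                {
                    "task": {"type": "evaluation"},
                    "dataset": {
                        "name": "Artificial Analysis Benchmarks",
                        "type": "artificial_analysis",
                    },
                    "metrics": [
                        {"name": "Mmlu Pro", "type": "mmlu_pro", "value": 0.837},
                        {"name": "Gpqa", "type": "gpqa", "value": 0.751},
                    ],
                    "source": {
                        "name": "Artificial Analysis API",
                        "url": "https://artificialanalysis.ai",
                    },
                }
            ],
        }
    ]
}

def metric_value(card: dict, metric_type: str):
    """Return the first metric value whose `type` matches, or None if absent."""
    for entry in card.get("model-index", []):
        for result in entry.get("results", []):
            for metric in result.get("metrics", []):
                if metric.get("type") == metric_type:
                    return metric.get("value")
    return None

print(metric_value(card_data, "gpqa"))  # 0.751
```

The same traversal applies to the full metadata once the front matter is parsed with any YAML loader.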

Files changed (1)
  1. README.md +178 -127
README.md CHANGED
@@ -1,127 +1,178 @@
- ---
- license: mit
- library_name: transformers
- base_model:
- - deepseek-ai/DeepSeek-V3.2-Exp-Base
- base_model_relation: finetune
- ---
- # DeepSeek-V3.2: Efficient Reasoning & Agentic AI
-
- <!-- markdownlint-disable first-line-h1 -->
- <!-- markdownlint-disable html -->
- <!-- markdownlint-disable no-duplicate-header -->
-
- <div align="center">
- <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
- </div>
- <hr>
- <div align="center" style="line-height: 1;">
- <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
- <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
- <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
- <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
- <div align="center" style="line-height: 1;">
- <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
- <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
- <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
- <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
- <div align="center" style="line-height: 1;">
- <a href="LICENSE" style="margin: 2px;">
- <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <p align="center">
- <a href="https://huggingface.co/deepseek-ai/DeepSeek-V3.2/blob/main/assets/paper.pdf"><b>Technical Report</b>👁️</a>
- </p>
-
- ## Introduction
-
- We introduce **DeepSeek-V3.2**, a model that harmonizes high computational efficiency with superior reasoning and agent performance. Our approach is built upon three key technical breakthroughs:
-
- 1. **DeepSeek Sparse Attention (DSA):** We introduce DSA, an efficient attention mechanism that substantially reduces computational complexity while preserving model performance, specifically optimized for long-context scenarios.
- 2. **Scalable Reinforcement Learning Framework:** By implementing a robust RL protocol and scaling post-training compute, *DeepSeek-V3.2* performs comparably to GPT-5. Notably, our high-compute variant, **DeepSeek-V3.2-Speciale**, **surpasses GPT-5** and exhibits reasoning proficiency on par with Gemini-3.0-Pro.
- - *Achievement:* 🥇 **Gold-medal performance** in the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI).
- 3. **Large-Scale Agentic Task Synthesis Pipeline:** To integrate **reasoning into tool-use** scenarios, we developed a novel synthesis pipeline that systematically generates training data at scale. This facilitates scalable agentic post-training, improving compliance and generalization in complex interactive environments.
-
- <div align="center">
- <img src="assets/benchmark.png" >
- </div>
-
- We have also released the final submissions for IOI 2025, ICPC World Finals, IMO 2025 and CMO 2025, which were selected based on our designed pipeline. These materials are provided for the community to conduct secondary verification. The files can be accessed at `assets/olympiad_cases`.
-
- ## Chat Template
-
- DeepSeek-V3.2 introduces significant updates to its chat template compared to prior versions. The primary changes involve a revised format for tool calling and the introduction of a "thinking with tools" capability.
-
- To assist the community in understanding and adapting to this new template, we have provided a dedicated `encoding` folder, which contains Python scripts and test cases demonstrating how to encode messages in OpenAI-compatible format into input strings for the model and how to parse the model's text output.
-
- A brief example is illustrated below:
-
- ```python
- import transformers
- # encoding/encoding_dsv32.py
- from encoding_dsv32 import encode_messages, parse_message_from_completion_text
-
- tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.2")
-
- messages = [
- {"role": "user", "content": "hello"},
- {"role": "assistant", "content": "Hello! I am DeepSeek.", "reasoning_content": "thinking..."},
- {"role": "user", "content": "1+1=?"}
- ]
- encode_config = dict(thinking_mode="thinking", drop_thinking=True, add_default_bos_token=True)
-
- # messages -> string
- prompt = encode_messages(messages, **encode_config)
- # Output: "<|begin▁of▁sentence|><|User|>hello<|Assistant|></think>Hello! I am DeepSeek.<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>"
-
- # string -> tokens
- tokens = tokenizer.encode(prompt)
- # Output: [0, 128803, 33310, 128804, 128799, 19923, 3, 342, 1030, 22651, 4374, 1465, 16, 1, 128803, 19, 13, 19, 127252, 128804, 128798]
- ```
-
- Important Notes:
-
- 1. This release does not include a Jinja-format chat template. Please refer to the Python code mentioned above.
- 2. The output parsing function included in the code is designed to handle well-formatted strings only. It does not attempt to correct or recover from malformed output that the model might occasionally generate. It is not suitable for production use without robust error handling.
- 3. A new role named `developer` has been introduced in the chat template. This role is dedicated exclusively to search agent scenarios and is designated for no other tasks. The official API does not accept messages assigned to `developer`.
-
- ## How to Run Locally
-
- The model structure of DeepSeek-V3.2 and DeepSeek-V3.2-Speciale are the same as DeepSeek-V3.2-Exp. Please visit [DeepSeek-V3.2-Exp](https://github.com/deepseek-ai/DeepSeek-V3.2-Exp) repo for more information about running this model locally.
-
- Usage Recommendations:
-
- 1. For local deployment, we recommend setting the sampling parameters to `temperature = 1.0, top_p = 0.95`.
- 2. Please note that the DeepSeek-V3.2-Speciale variant is designed exclusively for deep reasoning tasks and does not support the tool-calling functionality.
-
- ## License
-
- This repository and the model weights are licensed under the [MIT License](LICENSE).
-
- ## Citation
-
- ```
- @misc{deepseekai2025deepseekv32,
- title={DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models},
- author={DeepSeek-AI},
- year={2025},
- }
- ```
-
- ## Contact
-
- If you have any questions, please raise an issue or contact us at [service@deepseek.com](service@deepseek.com).
+ ---
+ license: mit
+ library_name: transformers
+ base_model:
+ - deepseek-ai/DeepSeek-V3.2-Exp-Base
+ base_model_relation: finetune
+ model-index:
+ - name: DeepSeek-V3.2
+   results:
+   - task:
+       type: evaluation
+     dataset:
+       name: Artificial Analysis Benchmarks
+       type: artificial_analysis
+     metrics:
+     - name: Artificial Analysis Intelligence Index
+       type: artificial_analysis_intelligence_index
+       value: 52.4
+     - name: Artificial Analysis Coding Index
+       type: artificial_analysis_coding_index
+       value: 42.8
+     - name: Artificial Analysis Math Index
+       type: artificial_analysis_math_index
+       value: 59
+     - name: Mmlu Pro
+       type: mmlu_pro
+       value: 0.837
+     - name: Gpqa
+       type: gpqa
+       value: 0.751
+     - name: Hle
+       type: hle
+       value: 0.105
+     - name: Livecodebench
+       type: livecodebench
+       value: 0.593
+     - name: Scicode
+       type: scicode
+       value: 0.387
+     - name: Aime 25
+       type: aime_25
+       value: 0.59
+     - name: Ifbench
+       type: ifbench
+       value: 0.49
+     - name: Lcr
+       type: lcr
+       value: 0.39
+     - name: Terminalbench Hard
+       type: terminalbench_hard
+       value: 0.305
+     - name: Tau2
+       type: tau2
+       value: 0.789
+     source:
+       name: Artificial Analysis API
+       url: https://artificialanalysis.ai
+ ---
+ # DeepSeek-V3.2: Efficient Reasoning & Agentic AI
+
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ <!-- markdownlint-disable no-duplicate-header -->
+
+ <div align="center">
+ <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
+ </div>
+ <hr>
+ <div align="center" style="line-height: 1;">
+ <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
+ <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
+ <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
+ <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+ <div align="center" style="line-height: 1;">
+ <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
+ <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
+ <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
+ <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+ <div align="center" style="line-height: 1;">
+ <a href="LICENSE" style="margin: 2px;">
+ <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ <p align="center">
+ <a href="https://huggingface.co/deepseek-ai/DeepSeek-V3.2/blob/main/assets/paper.pdf"><b>Technical Report</b>👁️</a>
+ </p>
+
+ ## Introduction
+
+ We introduce **DeepSeek-V3.2**, a model that harmonizes high computational efficiency with superior reasoning and agent performance. Our approach is built upon three key technical breakthroughs:
+
+ 1. **DeepSeek Sparse Attention (DSA):** We introduce DSA, an efficient attention mechanism that substantially reduces computational complexity while preserving model performance, specifically optimized for long-context scenarios.
+ 2. **Scalable Reinforcement Learning Framework:** By implementing a robust RL protocol and scaling post-training compute, *DeepSeek-V3.2* performs comparably to GPT-5. Notably, our high-compute variant, **DeepSeek-V3.2-Speciale**, **surpasses GPT-5** and exhibits reasoning proficiency on par with Gemini-3.0-Pro.
+ - *Achievement:* 🥇 **Gold-medal performance** in the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI).
+ 3. **Large-Scale Agentic Task Synthesis Pipeline:** To integrate **reasoning into tool-use** scenarios, we developed a novel synthesis pipeline that systematically generates training data at scale. This facilitates scalable agentic post-training, improving compliance and generalization in complex interactive environments.
+
+ <div align="center">
+ <img src="assets/benchmark.png" >
+ </div>
+
+ We have also released the final submissions for IOI 2025, ICPC World Finals, IMO 2025 and CMO 2025, which were selected based on our designed pipeline. These materials are provided for the community to conduct secondary verification. The files can be accessed at `assets/olympiad_cases`.
+
+ ## Chat Template
+
+ DeepSeek-V3.2 introduces significant updates to its chat template compared to prior versions. The primary changes involve a revised format for tool calling and the introduction of a "thinking with tools" capability.
+
+ To assist the community in understanding and adapting to this new template, we have provided a dedicated `encoding` folder, which contains Python scripts and test cases demonstrating how to encode messages in OpenAI-compatible format into input strings for the model and how to parse the model's text output.
+
+ A brief example is illustrated below:
+
+ ```python
+ import transformers
+ # encoding/encoding_dsv32.py
+ from encoding_dsv32 import encode_messages, parse_message_from_completion_text
+
+ tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.2")
+
+ messages = [
+ {"role": "user", "content": "hello"},
+ {"role": "assistant", "content": "Hello! I am DeepSeek.", "reasoning_content": "thinking..."},
+ {"role": "user", "content": "1+1=?"}
+ ]
+ encode_config = dict(thinking_mode="thinking", drop_thinking=True, add_default_bos_token=True)
+
+ # messages -> string
+ prompt = encode_messages(messages, **encode_config)
+ # Output: "<|begin▁of▁sentence|><|User|>hello<|Assistant|></think>Hello! I am DeepSeek.<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>"
+
+ # string -> tokens
+ tokens = tokenizer.encode(prompt)
+ # Output: [0, 128803, 33310, 128804, 128799, 19923, 3, 342, 1030, 22651, 4374, 1465, 16, 1, 128803, 19, 13, 19, 127252, 128804, 128798]
+ ```
+
+ Important Notes:
+
+ 1. This release does not include a Jinja-format chat template. Please refer to the Python code mentioned above.
+ 2. The output parsing function included in the code is designed to handle well-formatted strings only. It does not attempt to correct or recover from malformed output that the model might occasionally generate. It is not suitable for production use without robust error handling.
+ 3. A new role named `developer` has been introduced in the chat template. This role is dedicated exclusively to search agent scenarios and is designated for no other tasks. The official API does not accept messages assigned to `developer`.
+
+ ## How to Run Locally
+
+ The model structure of DeepSeek-V3.2 and DeepSeek-V3.2-Speciale are the same as DeepSeek-V3.2-Exp. Please visit [DeepSeek-V3.2-Exp](https://github.com/deepseek-ai/DeepSeek-V3.2-Exp) repo for more information about running this model locally.
+
+ Usage Recommendations:
+
+ 1. For local deployment, we recommend setting the sampling parameters to `temperature = 1.0, top_p = 0.95`.
+ 2. Please note that the DeepSeek-V3.2-Speciale variant is designed exclusively for deep reasoning tasks and does not support the tool-calling functionality.
+
+ ## License
+
+ This repository and the model weights are licensed under the [MIT License](LICENSE).
+
+ ## Citation
+
+ ```
+ @misc{deepseekai2025deepseekv32,
+ title={DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models},
+ author={DeepSeek-AI},
+ year={2025},
+ }
+ ```
+
+ ## Contact
+
+ If you have any questions, please raise an issue or contact us at [service@deepseek.com](service@deepseek.com).