Commit 7942be2 by bullerwins (parent: 66c50fc): Create README.md

README.md (added)
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL
---
GGUF quantized version of DeepSeek-V2-Chat-0628, produced with llama.cpp.

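The GGUF files in this repo are intended for llama.cpp and its bindings rather than the BF16 recipes further down. The snippet below is only a rough sketch using the llama-cpp-python bindings; the filename is a placeholder for whichever quantization file you actually download from this repo.

```python
# Rough sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The model_path below is a placeholder: substitute the GGUF file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-V2-Chat-0628-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload as many layers as your GPU(s) can hold
    n_ctx=8192,       # context length; lower it if memory is tight
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a piece of quicksort code in C++."}],
    temperature=0.3,
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
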
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;">
    <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;">
    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<p align="center">
  <a href="#1-introduction">Introduction</a> |
  <a href="#2-improvement">Improvement</a> |
  <a href="#3-how-to-run-locally">How to Run Locally</a> |
  <a href="#4-license">License</a> |
  <a href="#5-citation">Citation</a>
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2405.04434"><b>Paper Link</b>👁️</a>
</p>

# DeepSeek-V2-Chat-0628

## 1. Introduction

DeepSeek-V2-Chat-0628 is an improved version of DeepSeek-V2-Chat. For model details, please visit the [DeepSeek-V2 page](https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat).

DeepSeek-V2-Chat-0628 has achieved remarkable performance on the LMSYS Chatbot Arena Leaderboard:

Overall Ranking: #11, outperforming all other open-source models.

<p align="center">
  <img width="90%" src="figures/arena1.jpeg" />
</p>

Coding Arena Ranking: #3, showcasing exceptional capabilities in coding tasks.

<p align="center">
  <img width="90%" src="figures/arena2.png" />
</p>

Hard Prompts Arena Ranking: #3, demonstrating strong performance on challenging prompts.

<p align="center">
  <img width="90%" src="figures/arena3.png" />
</p>

## 2. Improvement

Compared with the previous DeepSeek-V2-Chat, the new version delivers the following improvements:

| **Benchmark** | **DeepSeek-V2-Chat** | **DeepSeek-V2-Chat-0628** | **Improvement** |
|:-----------:|:------------:|:---------------:|:-------------------------:|
| **HumanEval** | 81.1 | 84.8 | +3.7 |
| **MATH** | 53.9 | 71.0 | +17.1 |
| **BBH** | 79.7 | 83.4 | +3.7 |
| **IFEval** | 63.8 | 77.6 | +13.8 |
| **Arena-Hard** | 41.6 | 68.3 | +26.7 |
| **JSON Output (Internal)** | 78 | 85 | +7 |

Furthermore, instruction following for the system prompt has been optimized, significantly improving the user experience for immersive translation, RAG, and other tasks.

## 3. How to run locally

**To run DeepSeek-V2-Chat-0628 in BF16 format for inference, 8x80GB GPUs are required.**

### Inference with Hugging Face's Transformers

You can directly employ [Hugging Face's Transformers](https://github.com/huggingface/transformers) for model inference.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2-Chat-0628"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# `max_memory` should be set based on your devices
max_memory = {i: "75GB" for i in range(8)}
# `device_map` cannot be set to `auto`
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Write a piece of quicksort code in C++"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```

The complete chat template can be found within `tokenizer_config.json` in the Hugging Face model repository.

**Note: The chat template has been updated compared to the previous DeepSeek-V2-Chat version.**

An example of the chat template is as follows:

```bash
<|begin▁of▁sentence|><|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end▁of▁sentence|><|User|>{user_message_2}<|Assistant|>
```

You can also add an optional system message:

```bash
<|begin▁of▁sentence|>{system_message}

<|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end▁of▁sentence|><|User|>{user_message_2}<|Assistant|>
```
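
Rather than assembling these strings by hand, you can let the template in `tokenizer_config.json` render them. The sketch below is illustrative only: it loads the tokenizer as in the Transformers example above and prints the formatted prompt for a conversation that includes an optional system message (the message contents are made up for the example).

```python
from transformers import AutoTokenizer

# Load the tokenizer shipped with the model (same as in the Transformers example above).
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V2-Chat-0628", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a translation assistant. Reply with the translation only."},  # optional
    {"role": "user", "content": "Translate 'economical training and efficient inference' into Chinese."},
]

# tokenize=False returns the rendered prompt string instead of token ids;
# add_generation_prompt=True appends the trailing <|Assistant|> tag.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```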

### Inference with vLLM (recommended)

To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 8192, 8
model_name = "deepseek-ai/DeepSeek-V2-Chat-0628"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you?"}],
    [{"role": "user", "content": "Translate the following content into Chinese directly: DeepSeek-V2 adopts innovative architectures to guarantee economical training and efficient inference."}],
    [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
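
If you would rather query the model over HTTP than embed vLLM in a script, vLLM also ships an OpenAI-compatible server (`python -m vllm.entrypoints.openai.api_server`). The client sketch below assumes a hypothetical local setup: that server launched with this model on port 8000, and the `openai` Python package installed.

```python
from openai import OpenAI

# Assumes a locally running vLLM OpenAI-compatible server on port 8000 (hypothetical setup).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V2-Chat-0628",
    messages=[{"role": "user", "content": "Who are you?"}],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```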

## 4. License

This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE). The use of the DeepSeek-V2 Base/Chat models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL). The DeepSeek-V2 series (including Base and Chat) supports commercial use.

## 5. Citation

```
@misc{deepseekv2,
      title={DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model},
      author={DeepSeek-AI},
      year={2024},
      eprint={2405.04434},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## 6. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).