---
tags:
- fp8
- vllm
---

# DeepSeek-Coder-V2-Lite-Instruct-FP8

## Model Overview
- **Model Architecture:** DeepSeek-Coder-V2-Lite-Instruct
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/18/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
<!-- It achieves an average score of 73.19 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.48. -->
It achieves an average score of 79.60 on the [HumanEval](https://github.com/openai/human-eval) benchmark, whereas the unquantized model achieves 79.33.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) to the FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization with 512 sequences drawn from the UltraChat dataset.

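To make the scheme concrete, here is a minimal sketch of symmetric per-tensor FP8 (E4M3) quantization. It is illustrative only, not the AutoFP8 implementation, and assumes PyTorch >= 2.1 for `torch.float8_e4m3fn`:

```python
import torch

FP8_E4M3_MAX = 448.0  # largest magnitude representable in float8_e4m3fn

def quantize_fp8_per_tensor(w: torch.Tensor):
    # A single linear scale for the whole tensor maps the max magnitude to the FP8 max
    scale = w.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    w_fp8 = (w / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

w = torch.randn(4096, 4096)
w_fp8, scale = quantize_fp8_per_tensor(w)
# Dequantize to inspect the round-trip error introduced by quantization
error = (w_fp8.to(torch.float32) * scale - w).abs().max()
print(f"scale={scale.item():.6f}, max abs error={error.item():.6f}")
```
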
## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id, trust_remote_code=True, max_model_len=4096)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
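
For example, an OpenAI-compatible server can be launched with a command along the following lines (the flags shown here are illustrative and may vary by vLLM version; consult the documentation):

```
python -m vllm.entrypoints.openai.api_server \
    --model neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8 \
    --trust-remote-code \
    --max-model-len 4096
```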

## Creation

This model was created by applying [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py), with expert gates kept at original precision, as presented in the code snippet below.
Although AutoFP8 was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoFP8.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
quantized_model_dir = "DeepSeek-Coder-V2-Lite-Instruct-FP8"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=4096)
tokenizer.pad_token = tokenizer.eos_token

# Prepare 512 calibration sequences from UltraChat for the static activation scales
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
examples = [tokenizer.apply_chat_template(batch["messages"], tokenize=False) for batch in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda")

quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="static",
    # lm_head is kept at original precision; per the note above, the MoE expert
    # gates were likewise left unquantized
    ignore_patterns=["re:.*lm_head"],
)

model = AutoFP8ForCausalLM.from_pretrained(
    pretrained_model_dir, quantize_config=quantize_config
)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```
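
For reference, a roughly equivalent flow under llm-compressor might look like the sketch below. This is an illustration of that library's API under stated assumptions, not the recipe used to produce this checkpoint:

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

model = SparseAutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Render the same 512 UltraChat calibration samples as plain text
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})

# Static per-tensor FP8 for weights and activations of Linear layers; lm_head stays unquantized
recipe = QuantizationModifier(targets="Linear", scheme="FP8", ignore=["lm_head"])

oneshot(model=model, dataset=ds, recipe=recipe, max_seq_length=4096, num_calibration_samples=512)

model.save_pretrained("DeepSeek-Coder-V2-Lite-Instruct-FP8")
tokenizer.save_pretrained("DeepSeek-Coder-V2-Lite-Instruct-FP8")
```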

## Evaluation

The model was evaluated on the [HumanEval](https://github.com/openai/human-eval) benchmark with the [Neural Magic fork](https://github.com/neuralmagic/evalplus) of the [EvalPlus implementation of HumanEval](https://github.com/evalplus/evalplus) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following commands:
```
python codegen/generate.py --model neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8 --temperature 0.2 --n_samples 50 --resume --root ~ --dataset humaneval
python evalplus/sanitize.py ~/humaneval/neuralmagic--DeepSeek-Coder-V2-Lite-Instruct-FP8_vllm_temp_0.2
evalplus.evaluate --dataset humaneval --samples ~/humaneval/neuralmagic--DeepSeek-Coder-V2-Lite-Instruct-FP8_vllm_temp_0.2-sanitized
```
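
The pass@k scores reported below use the standard unbiased estimator from the HumanEval paper; with the command above, n = 50 completions are sampled per problem (`--n_samples 50`) and c of them pass the unit tests:

$$\text{pass@}k = \mathbb{E}_{\text{problems}}\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$$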

### Accuracy

#### HumanEval and HumanEval+ evaluation scores
In the table below, "base" rows report HumanEval results and "base+extra" rows report HumanEval+ results.

| Benchmark | DeepSeek-Coder-V2-Lite-Instruct | DeepSeek-Coder-V2-Lite-Instruct-FP8 (this model) | Recovery |
| :--- | :--- | :--- | :--- |
| base pass@1 | 80.8 | 79.3 | 98.14% |
| base pass@10 | 83.4 | 84.6 | 101.4% |
| base+extra pass@1 | 75.8 | 74.9 | 98.81% |
| base+extra pass@10 | 77.3 | 79.6 | 102.9% |
| **Average** | **79.33** | **79.60** | **100.3%** |