CISCai committed on
Commit cd4f4cd
1 Parent(s): 1ca5c05

Upload 13 files

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2db3987649c0c306e33e677c2e763990dec3cb0f90645dd8f4620c670ac5659c
+ size 5236564832
DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54b1ec4b3275209203ea0515e9d27cc1d1ebfb05f214702bc6ec7e1a9c80d76f
+ size 4994131808
DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e01dafb76ff2f6e780f23a6e58f401b84cc100b28c21f1f4d12d5de470203e0b
+ size 6328457056
DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a5d61a56206b00f131291f76e00970c3a1573bcc5f25852783fbb9dcfb10c97
+ size 6005213024
DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51898f3d6f17d88d8b6a7de0bbab926c29b4871222937ae18957d101b5cf9285
+ size 5967402848
DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ded772822a1a745587b3c0a43a1a2919d2ba4dfc98b5f679166ccd7c83a4ebbf
+ size 5640619872
DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5819e9b5e5b4ca4a125f0488049f4956b54fb7852d65433fd70999d15c2b93e0
+ size 7553175392
DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:daf8aa5e71d59e3610a855cb27b5a90a66b1c250faefdc21ae61819cc61d5966
+ size 7487663968
DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa74fd31b9c0fdf615be9d9304c95bfe86b54e1791944ab8a60f9d0b8fc48ca5
+ size 7122857824
DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd716cb24a9d4e978430ca27ee67395de68a3766c37b681f609ef4bcf28a4f48
+ size 6964057952
DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be9b3d5b29c561b38d6c41e81c0be12244bf6a4cb1d2b9a45175d3339ae040f4
+ size 8905110368
DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7853ddc8576c89f434a99960b5eff2718284b599b09c10b166bb17469a77c47
+ size 38356411
README.md CHANGED
@@ -1,5 +1,425 @@
- ---
- license: other
- license_name: deepseek-license
- license_link: https://github.com/deepseek-ai/DeepSeek-Coder-V2/raw/main/LICENSE-MODEL
- ---
+ ---
+ license: other
+ license_name: deepseek-license
+ license_link: https://github.com/deepseek-ai/DeepSeek-Coder-V2/raw/main/LICENSE-MODEL
+ tags:
+ - code
+ language:
+ - code
+ base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
+ model_creator: DeepSeek AI
+ model_name: DeepSeek-Coder-V2-Lite-Instruct
+ model_type: deepseek2
+ datasets:
+ - m-a-p/CodeFeedback-Filtered-Instruction
+ quantized_by: CISC
+ ---
+
+ # DeepSeek-Coder-V2-Lite-Instruct - SOTA GGUF
+ - Model creator: [DeepSeek AI](https://huggingface.co/deepseek-ai)
+ - Original model: [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains State Of The Art quantized GGUF format model files for [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
+
+ Quantization was done with an importance matrix that was trained for ~250K tokens (64 batches of 4096 tokens) of answers from the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) dataset.
+
+ Fill-in-Middle token metadata has been added, see [example](#simple-llama-cpp-python-example-fill-in-middle-code).
+
+ NOTE: Because some of the tensors in this model are oddly shaped, a considerable portion of the quantization fell back to IQ4_NL instead of the specified method, resulting in somewhat larger (and "smarter"; even IQ1_M is quite usable) model files than usual!
+
+ <!-- description end -->
+
+
+ <!-- prompt-template start -->
+ ## Prompt template: DeepSeek v2
+
+ ```
+ User: {prompt}
+
+ Assistant:
+ ```
+
+ <!-- prompt-template end -->
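+
+ Filling in the template programmatically is simple, but the two newlines before `Assistant:` matter; a minimal sketch (the `build_prompt` helper is purely illustrative):
+
+ ```python
+ # Illustrative helper for the DeepSeek v2 prompt template above
+ def build_prompt(user_message: str) -> str:
+     return f"User: {user_message}\n\nAssistant:"
+
+ print(build_prompt("Write a quick sort in Python."))
+ ```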
46
+
47
+
48
+ <!-- compatibility_gguf start -->
49
+ ## Compatibility
50
+
51
+ These quantised GGUFv3 files are compatible with llama.cpp from May 29th 2024 onwards, as of commit [fb76ec2](https://github.com/ggerganov/llama.cpp/commit/fb76ec31a9914b7761c1727303ab30380fd4f05c)
52
+
53
+ They are also compatible with many third party UIs and libraries provided they are built using a recent llama.cpp.
54
+
55
+ ## Explanation of quantisation methods
56
+
57
+ <details>
58
+ <summary>Click to see details</summary>
59
+
60
+ The new methods available are:
61
+
62
+ * GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw)
63
+ * GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw
64
+ * GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw
65
+ * GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw
66
+ * GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw
67
+ * GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw
68
+ * GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw
69
+ * GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
70
+ * GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
71
+ * GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
72
+ * GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
73
+ * GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw
74
+
75
+ Refer to the Provided Files table below to see what files use which methods, and how.
76
+ </details>
77
+ <!-- compatibility_gguf end -->
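+
+ As a rough sanity check, these bpw figures map onto file size as size ≈ weight count × bpw / 8 bytes; a small sketch (the ~15.7B weight count is an assumption for illustration, and the actual files in this repo run larger than these naive estimates because of the IQ4_NL fallback noted in the description):
+
+ ```python
+ # Naive GGUF size estimate from bits-per-weight (illustrative only)
+ params = 15.7e9  # assumed number of quantized weights
+ for name, bpw in {"IQ1_M": 1.75, "IQ2_M": 2.7, "IQ3_M": 3.66, "IQ4_NL": 4.5}.items():
+     print(f"{name}: ~{params * bpw / 8 / 1e9:.1f} GB")
+ ```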
+
+ <!-- README_GGUF.md-provided-files start -->
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf) | IQ1_S | 1 | 4.5 GB | 5.5 GB | smallest, significant quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf) | IQ1_M | 1 | 4.7 GB | 5.7 GB | very small, significant quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 2 | 5.1 GB | 6.1 GB | very small, high quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf) | IQ2_XS | 2 | 5.4 GB | 6.4 GB | very small, high quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf) | IQ2_S | 2 | 5.4 GB | 6.4 GB | small, substantial quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf) | IQ2_M | 2 | 5.7 GB | 6.7 GB | small, greater quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 3 | 6.3 GB | 7.3 GB | very small, high quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf) | IQ3_XS | 3 | 6.5 GB | 7.5 GB | small, substantial quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf) | IQ3_S | 3 | 6.8 GB | 7.8 GB | small, greater quality loss |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf) | IQ3_M | 3 | 6.9 GB | 7.9 GB | medium, balanced quality - recommended |
+ | [DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf) | IQ4_NL | 4 | 8.1 GB | 9.1 GB | small, substantial quality loss |
+
+ Generated importance matrix file: [DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat)
+
+ **Note**: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
+
+ <!-- README_GGUF.md-provided-files end -->
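+
+ To fetch a single quant without cloning the whole repo, you can use `huggingface_hub`; a minimal sketch (swap in whichever filename from the table you want):
+
+ ```python
+ # Download one GGUF file from this repo into the local HF cache
+ from huggingface_hub import hf_hub_download
+
+ path = hf_hub_download(
+     repo_id="CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF",
+     filename="DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf",
+ )
+ print(path)  # local path to pass to llama.cpp or llama-cpp-python
+ ```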
+
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [fb76ec3](https://github.com/ggerganov/llama.cpp/commit/fb76ec31a9914b7761c1727303ab30380fd4f05c) or later.
+
+ ```shell
+ ./llama-cli -ngl 28 -m DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf --color -c 131072 --temp 0 --repeat-penalty 1.1 -p "User: {prompt}\n\nAssistant:"
+ ```
+
+ Change `-ngl 28` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.
+
+ Change `-c 131072` to the desired sequence length.
+
+ If you are low on VRAM/RAM, try quantizing the K-cache with `-ctk q8_0` or even `-ctk q4_0` for big memory savings (depending on context size).
+ There is a similar option for the V-cache (`-ctv`), however that requires Flash Attention, [which is not working yet with this model](https://github.com/ggerganov/llama.cpp/issues/7343).
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
+
+ ## How to run from Python code
+
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) module.
+
+ ### How to load this model in Python code, using llama-cpp-python
+
+ For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
+
+ #### First install the package
+
+ Run one of the following commands, according to your system:
+
+ ```shell
+ # Prebuilt wheel with basic CPU support
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
+ # Prebuilt wheel with NVidia CUDA acceleration (replace cu121 with cu122 etc. to match your CUDA version)
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
+ # Prebuilt wheel with Metal GPU acceleration
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
+ # Build base version with no GPU acceleration
+ pip install llama-cpp-python
+ # With NVidia CUDA acceleration
+ CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
+ # Or with OpenBLAS acceleration
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
+ # Or with CLBlast acceleration
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
+ # Or with Metal GPU acceleration for macOS systems only
+ CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
+ # Or with Vulkan acceleration
+ CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python
+ # Or with Kompute acceleration
+ CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python
+ # Or with SYCL acceleration
+ CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python
+
+ # On Windows, set the CMAKE_ARGS variable in PowerShell before installing; e.g. for NVidia CUDA:
+ $env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
+ pip install llama-cpp-python
+ ```
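+
+ A quick way to verify which build you ended up with (a small sketch; `llama_supports_gpu_offload` is part of the low-level bindings and may vary between versions):
+
+ ```python
+ # Check that the installed wheel loads and whether GPU offload is available
+ import llama_cpp
+
+ print(llama_cpp.__version__)
+ print("GPU offload supported:", llama_cpp.llama_supports_gpu_offload())
+ ```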
+
+ #### Simple llama-cpp-python example code
+
+ ```python
+ from llama_cpp import Llama
+
+ # Chat Completion API
+
+ llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
+ print(llm.create_chat_completion(
+     repeat_penalty = 1.1,
+     messages = [
+         {
+             "role": "user",
+             "content": "Pick a LeetCode challenge and solve it in Python."
+         }
+     ]
+ ))
+ ```
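+
+ The same call can also stream tokens as they are generated, which is usually nicer for interactive use; a minimal sketch reusing the `llm` instance from above:
+
+ ```python
+ # Stream the response chunk by chunk instead of waiting for the full completion
+ for chunk in llm.create_chat_completion(
+     messages = [{"role": "user", "content": "Write a binary search in Python."}],
+     stream = True,
+ ):
+     delta = chunk["choices"][0]["delta"]
+     if "content" in delta:
+         print(delta["content"], end="", flush=True)
+ print()
+ ```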
+
+ #### Simple llama-cpp-python example fill-in-middle code
+
+ ```python
+ from llama_cpp import Llama
+
+ # Completion API
+
+ prompt = "def add("
+ suffix = "\n    return sum\n\n"
+
+ llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
+ output = llm.create_completion(
+     temperature = 0.0,
+     repeat_penalty = 1.0,
+     prompt = prompt,
+     suffix = suffix
+ )
+
+ # Models sometimes repeat the suffix in their response; attempt to filter that out
+ response = output["choices"][0]["text"]
+ response_stripped = response.rstrip()
+ unwanted_response_suffix = suffix.rstrip()
+ unwanted_response_length = len(unwanted_response_suffix)
+
+ filtered = False
+ if unwanted_response_suffix and response_stripped[-unwanted_response_length:] == unwanted_response_suffix:
+     response = response_stripped[:-unwanted_response_length]
+     filtered = True
+
+ print(f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{suffix}\033[0m")
+ ```
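+
+ If you want to confirm that the Fill-in-Middle token metadata mentioned in the description made it into the file you downloaded, the loaded model exposes its GGUF metadata; a hedged sketch (the key names follow GGUF tokenizer conventions, and the `Llama.metadata` attribute depends on your llama-cpp-python version):
+
+ ```python
+ # Inspect the FIM-related tokenizer metadata embedded in the GGUF file
+ for key in (
+     "tokenizer.ggml.prefix_token_id",
+     "tokenizer.ggml.middle_token_id",
+     "tokenizer.ggml.suffix_token_id",
+ ):
+     print(key, "=>", llm.metadata.get(key))
+ ```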
+
+ <!-- README_GGUF.md-how-to-run end -->
+
+ <!-- original-model-card start -->
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ <!-- markdownlint-disable no-duplicate-header -->
+
+ <div align="center">
+   <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
+ </div>
+ <hr>
+ <div align="center" style="line-height: 1;">
+   <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
+     <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
+     <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
+     <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+ </div>
+
+ <div align="center" style="line-height: 1;">
+   <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
+     <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
+     <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
+     <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+ </div>
+
+ <div align="center" style="line-height: 1;">
+   <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;">
+     <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+   <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;">
+     <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
+   </a>
+ </div>
+ <p align="center">
+   <a href="#4-api-platform">API Platform</a> |
+   <a href="#5-how-to-run-locally">How to Use</a> |
+   <a href="#6-license">License</a>
+ </p>
+
+ <p align="center">
+   <a href="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf"><b>Paper Link</b>👁️</a>
+ </p>
+
+ # DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
+
+ ## 1. Introduction
+ We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens sourced from a high-quality, multi-source corpus. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.
+
+ <p align="center">
+   <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png?raw=true">
+ </p>
+
+ In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found in the paper.
+
+ ## 2. Model Downloads
+
+ We release DeepSeek-Coder-V2 to the public with 16B and 236B total parameters, based on the [DeepSeekMoE](https://arxiv.org/pdf/2401.06066) framework, with only 2.4B and 21B active parameters respectively, including both base and instruct models.
+
+ <div align="center">
+
+ | **Model** | **#Total Params** | **#Active Params** | **Context Length** | **Download** |
+ | :-----------------------------: | :---------------: | :----------------: | :----------------: | :----------------------------------------------------------: |
+ | DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) |
+ | DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) |
+ | DeepSeek-Coder-V2-Base | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base) |
+ | DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) |
+
+ </div>
+
+
+ ## 3. Chat Website
+
+ You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: [coder.deepseek.com](https://coder.deepseek.com/sign_in)
+
+ ## 4. API Platform
+ We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/). Sign up for millions of free tokens, or pay as you go at an unbeatable price.
+
+ <p align="center">
+   <img width="40%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/model_price.jpg?raw=true">
+ </p>
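+
+ Since the API is OpenAI-compatible, the official `openai` Python client can be pointed at it; a minimal sketch (the base URL follows DeepSeek's platform docs, while the `deepseek-coder` model name and placeholder key are assumptions to verify on the platform):
+
+ ```python
+ # Call the DeepSeek platform through its OpenAI-compatible API
+ from openai import OpenAI
+
+ client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")
+ response = client.chat.completions.create(
+     model="deepseek-coder",  # assumed model name, check platform.deepseek.com
+     messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
+ )
+ print(response.choices[0].message.content)
+ ```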
+
+
+ ## 5. How to run locally
+ **Here we provide some examples of how to use the DeepSeek-Coder-V2-Lite model. If you want to run inference with DeepSeek-Coder-V2 in BF16 format, 8x80GB GPUs are required.**
+
+ ### Inference with Huggingface's Transformers
+ You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.
+
+ #### Code Completion
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
+ input_text = "#write a quick sort algorithm"
+ inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_length=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
+ #### Code Insertion
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
+ input_text = """<|fim▁begin|>def quick_sort(arr):
+     if len(arr) <= 1:
+         return arr
+     pivot = arr[0]
+     left = []
+     right = []
+ <|fim▁hole|>
+         if arr[i] < pivot:
+             left.append(arr[i])
+         else:
+             right.append(arr[i])
+     return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
+ inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_length=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
+ ```
+
+ #### Chat Completion
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
+ messages = [
+     { 'role': 'user', 'content': "write a quick sort algorithm in python."}
+ ]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ # tokenizer.eos_token_id is the id of the <|end▁of▁sentence|> token
+ outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+ print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
+ ```
+
+ The complete chat template can be found within `tokenizer_config.json` located in the Hugging Face model repository.
+
+ An example of the chat template is as follows:
+
+ ```bash
+ <|begin▁of▁sentence|>User: {user_message_1}
+
+ Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
+
+ Assistant:
+ ```
+
+ You can also add an optional system message:
+
+ ```bash
+ <|begin▁of▁sentence|>{system_message}
+
+ User: {user_message_1}
+
+ Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
+
+ Assistant:
+ ```
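+
+ To see the exact string the template produces for your own messages, you can render it without tokenizing; a small sketch using the same tokenizer as above:
+
+ ```python
+ # Render the chat template to a plain string for inspection
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
+ messages = [
+     {"role": "system", "content": "You are a helpful coding assistant."},
+     {"role": "user", "content": "write a quick sort algorithm in python."},
+ ]
+ print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
+ ```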
+
+ ### Inference with vLLM (recommended)
+ To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.
+
+ ```python
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+
+ max_model_len, tp_size = 8192, 1
+ model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
+ sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
+
+ messages_list = [
+     [{"role": "user", "content": "Who are you?"}],
+     [{"role": "user", "content": "write a quick sort algorithm in python."}],
+     [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
+ ]
+
+ prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
+
+ outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
+
+ generated_text = [output.outputs[0].text for output in outputs]
+ print(generated_text)
+ ```
+
+
+ ## 6. License
+
+ This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-CODE). The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). The DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use.
+
+ ## 7. Contact
+ If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
+ <!-- original-model-card end -->