---
base_model: deepseek-ai/DeepSeek-V2-Lite-Chat
inference: false
library_name: gguf
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---

# DeepSeek-V2-Lite-Chat-IMat-GGUF
_Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-V2-Lite-Chat_

Original Model: [deepseek-ai/DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp ([fairydreaming/llama.cpp, `deepseek-v2` branch](https://github.com/fairydreaming/llama.cpp/tree/deepseek-v2))
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF/blob/main/imatrix.dat)

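Once uploaded, the `imatrix.dat` can also be reused to produce your own quants with llama.cpp's `quantize` tool. A minimal sketch, assuming a llama.cpp build with DeepSeek-V2 support (the branch linked above) and a full-precision GGUF of the base model already on disk; the binary name and paths may differ on your setup:
```
# sketch: re-quantize locally using the provided importance matrix
# (assumes ./quantize from a llama.cpp build with DeepSeek-V2 support,
#  and a BF16 GGUF of the base model in the working directory)
./quantize --imatrix imatrix.dat \
  DeepSeek-V2-Lite-Chat.BF16.gguf \
  DeepSeek-V2-Lite-Chat.Q4_K.gguf \
  Q4_K
```
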
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| DeepSeek-V2-Lite-Chat.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ No | - |
| DeepSeek-V2-Lite-Chat.Q6_K | Q6_K | - | ⏳ Processing | ⚪ No | - |
| DeepSeek-V2-Lite-Chat.Q4_K | Q4_K | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.Q3_K | Q3_K | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.Q2_K | Q2_K | - | ⏳ Processing | 🟢 Yes | - |


### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| DeepSeek-V2-Lite-Chat.FP16 | F16 | - | ⏳ Processing | ⚪ No | - |
| DeepSeek-V2-Lite-Chat.BF16 | BF16 | - | ⏳ Processing | ⚪ No | - |
| DeepSeek-V2-Lite-Chat.Q5_K | Q5_K | - | ⏳ Processing | ⚪ No | - |
| DeepSeek-V2-Lite-Chat.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ No | - |
| DeepSeek-V2-Lite-Chat.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 Yes | - |
| DeepSeek-V2-Lite-Chat.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 Yes | - |


## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF --include "DeepSeek-V2-Lite-Chat.Q8_0.gguf" --local-dir ./
```
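The same `--include` filter works for any other single file in this repo. For example, a sketch fetching the `imatrix.dat` linked above (once its status is no longer "Processing"):
```
# sketch: grab the importance matrix for local re-quantization
huggingface-cli download legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF --include "imatrix.dat" --local-dir ./
```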
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF --include "DeepSeek-V2-Lite-Chat.Q8_0/*" --local-dir DeepSeek-V2-Lite-Chat.Q8_0
# see FAQ for merging GGUFs
```

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per the hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `DeepSeek-V2-Lite-Chat.Q8_0`)
3. Run `gguf-split --merge DeepSeek-V2-Lite-Chat.Q8_0/DeepSeek-V2-Lite-Chat.Q8_0-00001-of-XXXXX.gguf DeepSeek-V2-Lite-Chat.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split; a sketch of the full flow follows below.
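
For reference, a minimal end-to-end sketch of the split download and merge, assuming `gguf-split` is on your `PATH` (the `XXXXX` chunk count is a placeholder; substitute the actual filename you downloaded):
```
# 1. download all chunks of the split quant into a local folder
huggingface-cli download legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF \
  --include "DeepSeek-V2-Lite-Chat.Q8_0/*" \
  --local-dir DeepSeek-V2-Lite-Chat.Q8_0
# 2. merge, pointing gguf-split at the FIRST chunk; it finds the rest
gguf-split --merge \
  DeepSeek-V2-Lite-Chat.Q8_0/DeepSeek-V2-Lite-Chat.Q8_0-00001-of-XXXXX.gguf \
  DeepSeek-V2-Lite-Chat.Q8_0.gguf
```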

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!