---
base_model: cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated
datasets:
- cognitivecomputations/Dolphin-2.9.2
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- internlm/Agent-FLAN
- cognitivecomputations/SystemChat-2.0
inference: false
language:
- en
library_name: gguf
license: mit
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# dolphin-2.9.2-Phi-3-Medium-abliterated-IMat-GGUF
_Llama.cpp imatrix quantization of cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated_

Original Model: [cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated](https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3072](https://github.com/ggerganov/llama.cpp/releases/tag/b3072)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/dolphin-2.9.2-Phi-3-Medium-abliterated-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| dolphin-2.9.2-Phi-3-Medium-abliterated.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.2-Phi-3-Medium-abliterated.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/dolphin-2.9.2-Phi-3-Medium-abliterated-IMat-GGUF --include "dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/dolphin-2.9.2-Phi-3-Medium-abliterated-IMat-GGUF --include "dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
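If you prefer to script the download, here is a minimal Python sketch using the same `huggingface_hub` package (the file and pattern names are examples; pick any quant from the tables above):
```python
from huggingface_hub import hf_hub_download, snapshot_download

repo = "legraphista/dolphin-2.9.2-Phi-3-Medium-abliterated-IMat-GGUF"

# Download a single quant file to the current directory.
hf_hub_download(
    repo_id=repo,
    filename="dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf",
    local_dir="./",
)

# Or grab every chunk of a split quant (see the FAQ for merging).
snapshot_download(
    repo_id=repo,
    allow_patterns=["dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0/*"],
    local_dir="./",
)
```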

---

## Inference

### Simple chat template
```
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```

### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```

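To build these ChatML-style prompts programmatically, a small sketch like the one below may help (`build_prompt` is a hypothetical helper written for this card, not part of any library):
```python
# Hypothetical helper that renders the ChatML-style template shown above.
def build_prompt(messages: list[dict], add_generation_prompt: bool = True) -> str:
    """messages: [{"role": "system" | "user" | "assistant", "content": str}, ...]"""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave an open assistant turn so the model generates the reply.
        prompt += "<|im_start|>assistant\n"
    return prompt

print(build_prompt([
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```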
### Llama.cpp
```
llama.cpp/main -m dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

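Alternatively, the GGUF can be loaded from Python through the `llama-cpp-python` bindings (a separate `pip install llama-cpp-python`; this sketch assumes the Q8_0 file sits in the current directory, and `n_ctx` is an illustrative setting):
```python
from llama_cpp import Llama

# Load the quantized model.
llm = Llama(
    model_path="./dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf",
    n_ctx=4096,
)

# llama-cpp-python can apply the model's chat template for chat completions.
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},
    {"role": "user", "content": "Write a haiku about quantization."},
])
print(out["choices"][0]["message"]["content"])
```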
---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as measured by hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (e.g. `dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0`)
3. Run `gguf-split --merge dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0/dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0-00001-of-XXXXX.gguf dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split (a scripted variant is sketched below).

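If you'd rather script the merge, here is a small Python sketch (assuming `gguf-split` is on your PATH and the chunks folder sits in the current directory):
```python
import glob
import subprocess

# Locate the first chunk of the split (e.g. ...-00001-of-00002.gguf).
folder = "dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0"
first_chunk = sorted(glob.glob(f"{folder}/*-00001-of-*.gguf"))[0]

# Merge all chunks into a single GGUF file next to the folder.
subprocess.run(["gguf-split", "--merge", first_chunk, f"{folder}.gguf"], check=True)
```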
---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!