---
base_model: cognitivecomputations/dolphin-2.9.1-mixtral-1x22b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
inference: false
language:
- en
library_name: gguf
license: apache-2.0
model-index:
- name: out
  results: []
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---

# dolphin-2.9.1-mixtral-1x22b-IMat-GGUF
_Llama.cpp imatrix quantization of cognitivecomputations/dolphin-2.9.1-mixtral-1x22b_

Original Model: [cognitivecomputations/dolphin-2.9.1-mixtral-1x22b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3008](https://github.com/ggerganov/llama.cpp/releases/tag/b3008)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)

- [dolphin-2.9.1-mixtral-1x22b-IMat-GGUF](#dolphin-2-9-1-mixtral-1x22b-imat-gguf)
    - [Files](#files)
        - [IMatrix](#imatrix)
        - [Common Quants](#common-quants)
        - [All Quants](#all-quants)
    - [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
    - [Inference](#inference)
        - [Simple chat template](#simple-chat-template)
        - [Chat template with system prompt](#chat-template-with-system-prompt)
        - [Llama.cpp](#llama-cpp)
    - [FAQ](#faq)
        - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
        - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| dolphin-2.9.1-mixtral-1x22b.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.1-mixtral-1x22b.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.1-mixtral-1x22b.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| dolphin-2.9.1-mixtral-1x22b.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.1-mixtral-1x22b.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.1-mixtral-1x22b.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.1-mixtral-1x22b.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| dolphin-2.9.1-mixtral-1x22b.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| dolphin-2.9.1-mixtral-1x22b.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF --include "dolphin-2.9.1-mixtral-1x22b.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF --include "dolphin-2.9.1-mixtral-1x22b.Q8_0/*" --local-dir dolphin-2.9.1-mixtral-1x22b.Q8_0
# see FAQ for merging GGUFs
```
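
If you prefer to script the download, here is a minimal sketch using the Python API of `huggingface_hub` (the same package installed above); the repo and file names are the Q8_0 example from this card:
```
# Minimal sketch: download a single quant, or every chunk of a split quant,
# via huggingface_hub. Assumes `pip install -U "huggingface_hub[cli]"` was run.
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant
hf_hub_download(
    repo_id="legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF",
    filename="dolphin-2.9.1-mixtral-1x22b.Q8_0.gguf",
    local_dir=".",
)

# Split quant: grab every chunk in the folder, then merge (see FAQ)
snapshot_download(
    repo_id="legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF",
    allow_patterns=["dolphin-2.9.1-mixtral-1x22b.Q8_0/*"],
    local_dir=".",
)
```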

---

## Inference

### Simple chat template
```
<|im_start|>user
Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|>
<|im_start|>assistant
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|>
<|im_start|>user
What about solving the equation 2x + 3 = 7?<|im_end|>

```

### Chat template with system prompt
```
<|im_start|>system
You are a helpful AI.<|im_end|>
<|im_start|>user
Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|>
<|im_start|>assistant
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|>
<|im_start|>user
What about solving the equation 2x + 3 = 7?<|im_end|>

```
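
Both templates are plain ChatML. As an illustration (not part of this repo), here is a minimal Python sketch that renders such a prompt from a list of messages, appending the `<|im_start|>assistant` header that generation conventionally starts from:
```
# Minimal sketch: render a ChatML prompt string from a list of messages.
# Roles and contents below are illustrative.
def to_chatml(messages: list[dict[str, str]]) -> str:
    # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
    rendered = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    # Leave the prompt open for the assistant's reply.
    return rendered + "<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful AI."},
    {"role": "user", "content": "What about solving the equation 2x + 3 = 7?"},
])
print(prompt)
```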

### Llama.cpp
```
llama.cpp/main -m dolphin-2.9.1-mixtral-1x22b.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
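
Alternatively, a minimal sketch using the `llama-cpp-python` bindings (an assumption on my part; any GGUF-capable runtime will do) with the same Q8_0 file:
```
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path and sampling settings are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.9.1-mixtral-1x22b.Q8_0.gguf",
    n_ctx=4096,            # context window; raise it if you have the RAM
    chat_format="chatml",  # this model uses ChatML (see templates above)
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI."},
        {"role": "user", "content": "What about solving the equation 2x + 3 = 7?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```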

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `dolphin-2.9.1-mixtral-1x22b.Q8_0`)
3. Run `gguf-split --merge dolphin-2.9.1-mixtral-1x22b.Q8_0/dolphin-2.9.1-mixtral-1x22b.Q8_0-00001-of-XXXXX.gguf dolphin-2.9.1-mixtral-1x22b.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split (a scripted version of this step is sketched below)
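
If you would rather script the merge, here is a minimal sketch of step 3 in Python, assuming `gguf-split` is on your `PATH`; the folder name is the Q8_0 example from step 2:
```
# Minimal sketch: find the first chunk of a split GGUF and merge the parts.
# Assumes gguf-split (from a llama.cpp release) is on PATH.
import glob
import subprocess

folder = "dolphin-2.9.1-mixtral-1x22b.Q8_0"  # illustrative chunk folder

# Chunks are named <name>-00001-of-XXXXX.gguf; take the first one.
first_chunk = sorted(glob.glob(f"{folder}/*-00001-of-*.gguf"))[0]

# Equivalent to: gguf-split --merge <first chunk> <output file>
subprocess.run(["gguf-split", "--merge", first_chunk, f"{folder}.gguf"], check=True)
```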

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!