---
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
inference: false
library_name: gguf
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---

# Llama-3-Instruct-8B-SimPO-IMat-GGUF
_Llama.cpp imatrix quantization of princeton-nlp/Llama-3-Instruct-8B-SimPO_

Original Model: [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3023](https://github.com/ggerganov/llama.cpp/releases/tag/b3023)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)

- [Llama-3-Instruct-8B-SimPO-IMat-GGUF](#llama-3-instruct-8b-simpo-imat-gguf)
    - [Files](#files)
        - [IMatrix](#imatrix)
        - [Common Quants](#common-quants)
        - [All Quants](#all-quants)
    - [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
    - [Inference](#inference)
        - [Simple chat template](#simple-chat-template)
        - [Chat template with system prompt](#chat-template-with-system-prompt)
        - [Llama.cpp](#llama-cpp)
    - [FAQ](#faq)
        - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
        - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Llama-3-Instruct-8B-SimPO.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Llama-3-Instruct-8B-SimPO.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| Llama-3-Instruct-8B-SimPO.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3-Instruct-8B-SimPO.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
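
If you prefer Python over the CLI, the same downloads can be scripted with the `huggingface_hub` package. This is a minimal sketch rather than part of the original instructions; the file names match the Q8_0 entries above and the local directory is illustrative.
```
# Sketch: downloading with the huggingface_hub Python API instead of the CLI.
from huggingface_hub import hf_hub_download, snapshot_download

# Single (non-split) file:
hf_hub_download(
    repo_id="legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF",
    filename="Llama-3-Instruct-8B-SimPO.Q8_0.gguf",
    local_dir="./",
)

# Split quant: fetch every chunk in the Q8_0 folder (merge them afterwards, see FAQ).
snapshot_download(
    repo_id="legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF",
    allow_patterns=["Llama-3-Instruct-8B-SimPO.Q8_0/*"],
    local_dir="./",
)
```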

---

## Inference

### Simple chat template
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Can you provide ways to eat combinations of bananas and dragonfruits?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|eot_id|><|start_header_id|>user<|end_header_id|>

What about solving the equation 2x + 3 = 7?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```

### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI.<|eot_id|><|start_header_id|>user<|end_header_id|>

Can you provide ways to eat combinations of bananas and dragonfruits?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|eot_id|><|start_header_id|>user<|end_header_id|>

What about solving the equation 2x + 3 = 7?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```
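
Rather than writing the special tokens by hand, you can also render the prompt from the original model's own chat template. A minimal sketch, assuming `transformers` is installed and the original repo is reachable (this is not part of the card's own instructions):
```
# Sketch: build the prompt string from the original model's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Llama-3-Instruct-8B-SimPO")

messages = [
    {"role": "system", "content": "You are a helpful AI."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
]

# Returns the formatted prompt, ending with the assistant header so generation continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```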

### Llama.cpp
```
llama.cpp/main -m Llama-3-Instruct-8B-SimPO.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
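
For scripted inference, the same GGUF can be loaded through the `llama-cpp-python` bindings, which apply the model's chat template for you. A minimal sketch, assuming `llama-cpp-python` is installed (it is not covered by this card); the context size and GPU-offload values are illustrative:
```
# Sketch: chat completion against the Q8_0 GGUF via llama-cpp-python.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-Instruct-8B-SimPO.Q8_0.gguf",
    n_ctx=8192,        # illustrative context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI."},
        {"role": "user", "content": "What about solving the equation 2x + 3 = 7?"},
    ],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```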

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Llama-3-Instruct-8B-SimPO.Q8_0`)
3. Run `gguf-split --merge Llama-3-Instruct-8B-SimPO.Q8_0/Llama-3-Instruct-8B-SimPO.Q8_0-00001-of-XXXXX.gguf Llama-3-Instruct-8B-SimPO.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!