TheBloke committed
Commit: ce7dc5c
Parent: a9aa610

Initial GGML model commit

Files changed (1):
  1. README.md +63 -6
README.md CHANGED
@@ -19,7 +19,7 @@ license: other
 
 # Kaist AI's Selfee 13B GGML
 
-These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
+These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/TheBloke/Selfee-13B-fp16).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
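
To make the "CPU + GPU inference" point above concrete, here is a minimal sketch of loading one of these GGML files through the `llama-cpp-python` bindings, one of the llama.cpp-based libraries. The file name, context size, and generation settings are illustrative assumptions, and GGMLv3 files require a bindings version from before the GGUF format change:

```python
# Minimal sketch: CPU+GPU inference on a GGML file via llama-cpp-python
# (pip install llama-cpp-python; GGMLv3 needs a pre-GGUF release of the
# bindings). Assumes the q4_K_M file from the table below was downloaded;
# the path and all parameters here are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./selfee-13b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # context window; Llama-1-era models use 2048
    n_gpu_layers=32,  # layers to offload to the GPU; 0 = CPU only
)

result = llm("Tell me three facts about llamas.", max_tokens=128)
print(result["choices"][0]["text"])
```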
@@ -69,16 +69,16 @@ Refer to the Provided Files table below to see what files use which methods, and
 | selfee-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | selfee-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | selfee-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
-| selfee-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.37 GB | 9.87 GB | Original llama.cpp quant method, 4-bit. |
-| selfee-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.17 GB | 10.67 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
+| selfee-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
+| selfee-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
 | selfee-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
 | selfee-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
-| selfee-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.97 GB | 11.47 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
-| selfee-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.78 GB | 12.28 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
+| selfee-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+| selfee-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
 | selfee-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
 | selfee-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
 | selfee-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
-| selfee-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.79 GB | 16.29 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
+| selfee-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
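
To read the table's RAM column: in every row, "Max RAM required" is the file size plus roughly 2.5 GB of overhead, and offloading layers moves part of that cost into VRAM. A small illustrative helper, using sizes copied from the corrected rows (the 2.5 GB constant is an approximation implied by the table, not a measured figure):

```python
# Illustrative helper for the table above: each row's "Max RAM required"
# equals its file size plus ~2.5 GB of overhead (no GPU offloading).
# Sizes in GB are copied from the corrected (+) rows of the table.
FILE_SIZES_GB = {
    "q3_K_S": 5.59, "q3_K_M": 6.25, "q3_K_L": 6.87,
    "q4_0": 7.32, "q4_K_S": 7.32, "q4_K_M": 7.82, "q4_1": 8.14,
    "q5_0": 8.95, "q5_K_S": 8.95, "q5_K_M": 9.21, "q5_1": 9.76,
    "q6_K": 10.68, "q8_0": 13.83,
}
OVERHEAD_GB = 2.5  # approximate loading overhead implied by the table

def max_ram_gb(quant: str) -> float:
    """Approximate peak system RAM with no layers offloaded to GPU."""
    return FILE_SIZES_GB[quant] + OVERHEAD_GB

print(f"{max_ram_gb('q4_K_M'):.2f} GB")  # 10.32 GB, matching the table
```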
@@ -130,6 +130,63 @@ Thank you to all my generous patrons and donaters!
 
 # Original model card: Kaist AI's Selfee 13B
 
+
+<!-- header start -->
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+</div>
+</div>
+<!-- header end -->
+
+# Kaist AI's Selfee 13B GGML
+
+This repo contains fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
+
+It is the result of merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16.
+
+## Repositories available
+
+* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
+* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
+
+<!-- footer start -->
+## Discord
+
+For further support, and discussions on these models and AI in general, join us at:
+
+[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+
+## Thanks, and how to contribute.
+
+Thanks to the [chirper.ai](https://chirper.ai) team!
+
+I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
+
+If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
+
+* Patreon: https://patreon.com/TheBlokeAI
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+
+**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+
+**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
+
+Thank you to all my generous patrons and donaters!
+
+<!-- footer end -->
+
+# Original model card: Kaist AI's Selfee 13B
+
 <p align="center" width="100%">
 <a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
 </p>
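
As a closing note on the added text's description of the fp16 source repo ("merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16"): the delta merge itself would be done with the scripts provided alongside the delta weights, but the dtype conversion step can be sketched with the Transformers library. This is an assumption about tooling with hypothetical paths, not the exact commands used:

```python
# Hedged sketch of the fp32 -> fp16 conversion step mentioned above.
# Merging the Selfee delta onto base Llama 13B happens beforehand with
# Kaist AI's scripts; both paths below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_fp32_path = "./selfee-13b-merged-fp32"  # hypothetical merged model
output_fp16_path = "./selfee-13b-fp16"

model = AutoModelForCausalLM.from_pretrained(
    merged_fp32_path,
    torch_dtype=torch.float16,   # cast weights to half precision on load
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained(merged_fp32_path)

model.save_pretrained(output_fp16_path)   # writes fp16 checkpoint shards
tokenizer.save_pretrained(output_fp16_path)
```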