Update README.md

README.md CHANGED

@@ -9,7 +9,7 @@ license: other
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/
+<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
 <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -31,7 +31,8 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 ## Repositories available

 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
-* [4
+* [2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
+* [DOI Snapshot 2023/06/26 2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML-DOI)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)

 <!-- compatibility_ggml start -->
@@ -41,13 +42,13 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger

 I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

-
+These are guaranteed to be compatible with any UIs, tools and libraries released since late May.

 ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

-These new quantisation methods are
+These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

-They
+They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.

 ## Explanation of the new k-quant methods
@@ -105,7 +106,7 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http

 For further support, and discussions on these models and AI in general, join us at:

-[TheBloke AI's Discord server](https://discord.gg/
+[TheBloke AI's Discord server](https://discord.gg/theblokeai)

 ## Thanks, and how to contribute.
@@ -122,7 +123,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

 **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

-**Patreon special mentions**:
+**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire

 Thank you to all my generous patrons and donaters!
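As a quick sketch of how one of the GGML files this diff documents might be run with a compatible llama.cpp build (commit `2d43387` or later for the k-quant types): note the filename below is an assumption based on the repo's usual naming, not taken from this diff - check the repository's file list for the exact name.

```shell
# Hypothetical usage sketch, not from the README itself.
# -m: model file (filename is an assumption - verify against the repo)
# -t: CPU threads; -ngl: layers offloaded to GPU (cuBLAS/CLBlast builds only)
# -c: context length; -n: max tokens to generate; -p: prompt text
./main -m selfee-13b.ggmlv3.q4_K_M.bin -t 8 -ngl 32 -c 2048 -n 256 \
  -p "Write a short explanation of k-quant quantisation."
```

The same file should load in the tools named above (text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers) provided they are recent enough to support the k-quant formats.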