TheBloke committed
Commit
890b9dc
1 Parent(s): ec00dc6

Initial GGUF model commit

Files changed (1):
  README.md (+5 -3)
README.md CHANGED
@@ -43,16 +43,16 @@ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is
  The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

  As of August 25th, here is a list of clients and libraries that are known to support GGUF:
- * [llama.cpp](https://github.com/ggerganov/llama.cpp)
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp).
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - the llama-cpp-python backend should work soon too.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU acceleration. Especially good for storytelling.
+ * [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
  * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work; choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
  * [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
  * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.

  The clients and libraries below are expected to add GGUF support shortly:
- * [LM Studio](https://lmstudio.ai/), should be updated by end August 25th.
  <!-- README_GGUF.md-about-gguf end -->

  <!-- repositories-available start -->
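
As context for the llama-cpp-python entry above, here is a minimal sketch of loading one of this repo's GGUF files from Python; the prompt, context size, and generation settings are illustrative assumptions, not taken from the README:

```python
# Minimal sketch: loading a GGUF file with llama-cpp-python (>= 0.1.79,
# the first version with GGUF support). Prompt and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-c34b-2.1.Q5_K_M.gguf",  # any file from the table below
    n_ctx=4096,  # context window to allocate
)

out = llm("USER: Say hello in one sentence. ASSISTANT:", max_tokens=48)
print(out["choices"][0]["text"])
```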
@@ -112,7 +112,7 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [airoboros-c34b-2.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB | 25.74 GB | large, low quality loss - recommended |
  | [airoboros-c34b-2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB | 26.34 GB | large, very low quality loss - recommended |
  | [airoboros-c34b-2.1.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q6_K.gguf) | Q6_K | 6 | 27.68 GB | 30.18 GB | very large, extremely low quality loss |
- | [airoboros-c34b-2.1.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q8_0.gguf) | Q8_0 | 8 | 35.79 GB | 38.29 GB | very large, extremely low quality loss - not recommended |
+ | [airoboros-c34b-2.1.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB | 38.36 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
  <!-- README_GGUF.md-provided-files end -->
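
On the RAM note above: in the table, "Max RAM required" is consistently the file size plus about 2.5 GB of overhead, and offloading layers to the GPU moves that memory into VRAM instead. A minimal sketch of offloading, assuming a llama-cpp-python build with GPU (e.g. cuBLAS) support; the layer count is an illustrative assumption:

```python
# Sketch: GPU offloading with llama-cpp-python. Each offloaded layer moves
# from system RAM to VRAM; n_gpu_layers=0 keeps everything on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-c34b-2.1.Q5_K_S.gguf",
    n_ctx=4096,
    n_gpu_layers=35,  # illustrative; choose based on available VRAM
)
```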
@@ -202,6 +202,8 @@ This is an instruction fine-tuned llama-2 model, using synthetic data generated
  - these models just produce text; what you do with that text is your responsibility
  - many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless

+ Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
+
  ### Prompt format

  The training code was updated to randomize newline vs space:
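
The template itself is truncated by the diff, but as a rough illustration of what "randomize newline vs space" means when building training prompts, here is a hypothetical sketch; the USER/ASSISTANT template is a stand-in, not the model's documented format:

```python
import random

# Hypothetical sketch: when constructing each training prompt, the separator
# between segments is picked at random from a newline and a space, so the
# model learns to tolerate either. The template text is a stand-in.
def build_prompt(system: str, instruction: str) -> str:
    sep = random.choice(["\n", " "])
    return f"{system}{sep}USER: {instruction}{sep}ASSISTANT:"

print(build_prompt("A chat.", "Hello!"))
```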
 