TheBloke committed on
Commit 91ee2fa
1 Parent(s): 01f0c16

Upload README.md

Files changed (1)
  1. README.md +18 -16
README.md CHANGED
@@ -46,23 +46,24 @@ This repo contains GGUF format model files for [YeungNLP's Firefly Mixtral 8X7B]
 
 GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
- ### Mixtral GGUF
-
- Support for Mixtral was merged into llama.cpp on December 13th.
-
- These Mixtral GGUFs are known to work in:
-
- * llama.cpp as of December 13th
- * KoboldCpp 1.52 and later
- * LM Studio 0.2.9 and later
- * llama-cpp-python 0.2.23 and later
-
- Other clients/libraries, not listed above, may not yet work.
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
 
 <!-- README_GGUF.md-about-gguf end -->
 <!-- repositories-available start -->
 ## Repositories available
 
+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/firefly-mixtral-8x7b-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/firefly-mixtral-8x7b-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/firefly-mixtral-8x7b-GGUF)
 * [YeungNLP's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/YeungNLP/firefly-mixtral-8x7b)
@@ -82,7 +83,9 @@ Other clients/libraries, not listed above, may not yet work.
 <!-- compatibility_gguf start -->
 ## Compatibility
 
- These Mixtral GGUFs are compatible with llama.cpp from December 13th onwards. Other clients/libraries may not work yet.
+ These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).
+
+ They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.
 
 ## Explanation of quantisation methods
 
@@ -198,13 +201,11 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
- Note that text-generation-webui may not yet be compatible with Mixtral GGUFs. Please check compatibility first.
-
 Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
 
 ## How to run from Python code
 
- You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later.
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
 
 ### How to load this model in Python code, using llama-cpp-python
 
@@ -273,6 +274,7 @@ llm.create_chat_completion(
 Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
 * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
 
 <!-- README_GGUF.md-how-to-run end -->
 
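For illustration, a few Python sketches of the workflows the README describes. First, the compatibility note: a downloaded file's GGUF container version can be checked directly, since the format opens with the 4-byte magic `GGUF` followed by a little-endian uint32 version field (2 for GGUFv2). A minimal sketch; the filename is an assumption, not taken from the repo's file list:

```python
# Minimal sketch: inspect the GGUF container version of a downloaded file.
# The GGUF header begins with the magic bytes b"GGUF", then a little-endian
# uint32 version (2 for GGUFv2, 3 for GGUFv3).
# The filename is illustrative; use whichever quant file you downloaded.
import struct

with open("firefly-mixtral-8x7b.Q4_K_M.gguf", "rb") as f:
    magic = f.read(4)
    (version,) = struct.unpack("<I", f.read(4))

assert magic == b"GGUF", "not a GGUF file"
print(f"GGUF container version: {version}")
```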
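Next, the Python route. The README recommends llama-cpp-python, and its own example further down uses `llm.create_chat_completion(...)` (visible in the last hunk's header); the sketch below shows the simplest completion-style call instead. The filename, context size and offload settings are illustrative assumptions:

```python
# Minimal sketch, assuming llama-cpp-python >= 0.2.23 is installed
# (pip install llama-cpp-python). Point model_path at whichever GGUF
# quant you downloaded; this filename is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./firefly-mixtral-8x7b.Q4_K_M.gguf",
    n_ctx=2048,      # context window; raise it for longer prompts
    n_gpu_layers=0,  # set > 0 to offload layers to GPU, if built with GPU support
)

output = llm("Tell me about GGUF quantisation.", max_tokens=128)
print(output["choices"][0]["text"])
```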
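Finally, the LangChain guides linked in the last hunk. A minimal sketch of driving the model through LangChain's `LlamaCpp` wrapper (import paths match LangChain as of late 2023; the filename is again an assumption):

```python
# Minimal sketch: this GGUF model behind LangChain's LlamaCpp wrapper,
# which uses llama-cpp-python under the hood.
# Assumes: pip install langchain llama-cpp-python. Filename is illustrative.
from langchain.chains import LLMChain
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

llm = LlamaCpp(
    model_path="./firefly-mixtral-8x7b.Q4_K_M.gguf",
    n_ctx=2048,
    temperature=0.7,
)

prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(question="What is the difference between GGML and GGUF?"))
```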