TheBloke committed
Commit bce565c
1 Parent(s): e9923fa

Update README.md

Files changed (1):
  1. README.md +15 -81

README.md CHANGED
@@ -55,23 +55,22 @@ This repo contains GGUF format model files for [OpenOrca's Mixtral SlimOrca 8X7B
 
 GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
-Here is an incomplete list of clients and libraries that are known to support GGUF:
-
-* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
-* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
-* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
-* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
-* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
-* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
-* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
-* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
-* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
-* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+**MIXTRAL GGUF SUPPORT**
+
+Known to work in:
+* llama.cpp as of December 13th 2023
+* KoboldCpp 1.52 and later
+* LM Studio 0.2.9 and later
+
+Support for Mixtral was merged into llama.cpp on December 13th 2023.
+
+Other clients/libraries not listed above may not yet work.
 
 <!-- README_GGUF.md-about-gguf end -->
 <!-- repositories-available start -->
 ## Repositories available
 
+* AWQ coming soon
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-SlimOrca-8x7B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-SlimOrca-8x7B-GGUF)
 * [OpenOrca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/Mixtral-SlimOrca-8x7B)
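
Because the Mixtral merge is so recent, running these GGUF files means building llama.cpp from a December 13th 2023 or later checkout. A minimal sketch of building and running (the quant filename, context size, and `-ngl` value are illustrative, not prescriptive):

```shell
# Build llama.cpp from current master, which includes the December 13th 2023 Mixtral merge
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run with the model's ChatML prompt format; -ngl offloads layers to GPU when built with acceleration
./main -m mixtral-slimorca-8x7b.Q4_K_M.gguf -c 2048 -n 512 -ngl 35 \
  -p $'<|im_start|>user\nWrite a story about llamas.<|im_end|>\n<|im_start|>assistant'
```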
@@ -86,7 +85,6 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <|im_start|>user
 {prompt}<|im_end|>
 <|im_start|>assistant
-
 ```
 
 <!-- prompt-template end -->
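
For reference, the full ChatML template also carries a system turn (visible in the Python example removed further down). A minimal sketch of assembling it, with placeholder message values:

```shell
# Fill the ChatML template; SYSTEM_MESSAGE and PROMPT are placeholder values
SYSTEM_MESSAGE="You are a helpful assistant."
PROMPT="Write a story about llamas."
printf '<|im_start|>system\n%s<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n' \
  "$SYSTEM_MESSAGE" "$PROMPT"
```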
@@ -95,7 +93,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
+These quantised GGUFv2 files are compatible with llama.cpp from December 13th 2023 onwards.
 
 They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
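
To verify a local checkout meets the new cutoff, checking its last commit date is enough; a small sketch:

```shell
# Print the most recent commit of a local llama.cpp clone; it must be from December 13th 2023 or later
git -C llama.cpp log -1 --format='%h %cd' --date=short
```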
 
@@ -213,80 +211,16 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
-Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
+Not yet supported
 
 ## How to run from Python code
 
-You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
+Not yet supported
 
 ### How to load this model in Python code, using llama-cpp-python
 
-For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
-
-#### First install the package
-
-Run one of the following commands, according to your system:
-
-```shell
-# Base llama-cpp-python with no GPU acceleration
-pip install llama-cpp-python
-# With NVidia CUDA acceleration
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
-# Or with OpenBLAS acceleration
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
-# Or with CLBLast acceleration
-CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
-# Or with AMD ROCm GPU acceleration (Linux only)
-CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
-# Or with Metal GPU acceleration for macOS systems only
-CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
-
-# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
-$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
-pip install llama-cpp-python
-```
-
-#### Simple llama-cpp-python example code
-
-```python
-from llama_cpp import Llama
-
-# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = Llama(
-  model_path="./mixtral-slimorca-8x7b.Q4_K_M.gguf",  # Download the model file first
-  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
-  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
-  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
-)
-
-# Simple inference example
-output = llm(
-  "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt
-  max_tokens=512,  # Generate up to 512 tokens
-  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
-  echo=True  # Whether to echo the prompt
-)
-
-# Chat Completion API
-
-llm = Llama(model_path="./mixtral-slimorca-8x7b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
-llm.create_chat_completion(
-  messages = [
-    {"role": "system", "content": "You are a story writing assistant."},
-    {
-      "role": "user",
-      "content": "Write a story about llamas."
-    }
-  ]
-)
-```
-
-## How to use with LangChain
-
-Here are guides on using llama-cpp-python and ctransformers with LangChain:
-
-* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
-* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
+Not yet supported
 
 
 <!-- README_GGUF.md-how-to-run end -->
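
Although the Python and text-generation-webui routes are marked not yet supported above, llama.cpp's bundled HTTP server offers an interim programmatic path once the CLI works. A sketch, assuming a December 13th 2023 or later build; the filename, port, and request values are illustrative:

```shell
# Serve the model over HTTP with llama.cpp's built-in server (built alongside ./main)
./server -m mixtral-slimorca-8x7b.Q4_K_M.gguf -c 2048 -ngl 35 --port 8080

# Query the server's /completion endpoint with a raw ChatML prompt
curl -s http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "<|im_start|>user\nWrite a story about llamas.<|im_end|>\n<|im_start|>assistant", "n_predict": 512}'
```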
 
 