Update README.md
README.md (changed)
@@ -44,18 +44,16 @@ This repo contains GGUF format model files for [Charles Goddard's Mixtralnt 4X7B

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

-
-
-
-
-
-
-
-
-
-
-* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
-* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
@@ -77,12 +75,6 @@ Here is an incomplete list of clients and libraries that are known to support GG

<!-- prompt-template end -->


-<!-- compatibility_gguf start -->
-## Compatibility
-
-These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
-
-They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

@@ -130,17 +122,6 @@ Refer to the Provided Files table below to see what files use which methods, and

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

-The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
-
-* LM Studio
-* LoLLMS Web UI
-* Faraday.dev
-
-### In `text-generation-webui`
-
-Under Download Model, you can enter the model repo: TheBloke/mixtralnt-4x7b-test-GGUF and below it, a specific filename to download, such as: mixtralnt-4x7b-test.Q4_K_M.gguf.
-
-Then click Download.

### On the command line, including multiple files at once

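For reference, a single file can be fetched on the command line with `huggingface-cli` - a minimal sketch, assuming a recent `huggingface_hub` install, using one of the quant filenames mentioned above:

```shell
pip3 install huggingface-hub

# Download just one quant file from the repo into the current directory
huggingface-cli download TheBloke/mixtralnt-4x7b-test-GGUF mixtralnt-4x7b-test.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```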
@@ -202,82 +183,12 @@ For other parameters and how to use them, please refer to [the llama.cpp documen

## How to run in `text-generation-webui`

-

## How to run from Python code

-
-
-### How to load this model in Python code, using llama-cpp-python
-
-For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
-
-#### First install the package
-
-Run one of the following commands, according to your system:
-
-```shell
-# Base llama-cpp-python with no GPU acceleration
-pip install llama-cpp-python
-# With NVidia CUDA acceleration
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
-# Or with OpenBLAS acceleration
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
-# Or with CLBLast acceleration
-CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
-# Or with AMD ROCm GPU acceleration (Linux only)
-CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
-# Or with Metal GPU acceleration for macOS systems only
-CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
-
-# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
-$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
-pip install llama-cpp-python
-```
-
-#### Simple llama-cpp-python example code
-
-```python
-from llama_cpp import Llama
-
-# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = Llama(
-  model_path="./mixtralnt-4x7b-test.Q4_K_M.gguf",  # Download the model file first
-  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
-  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
-  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
-)
-
-# Simple inference example
-output = llm(
-  "{prompt}",  # Prompt
-  max_tokens=512,  # Generate up to 512 tokens
-  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
-  echo=True  # Whether to echo the prompt
-)
-
-# Chat Completion API
-
-llm = Llama(model_path="./mixtralnt-4x7b-test.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
-llm.create_chat_completion(
-    messages = [
-        {"role": "system", "content": "You are a story writing assistant."},
-        {
-            "role": "user",
-            "content": "Write a story about llamas."
-        }
-    ]
-)
-```
-
-## How to use with LangChain
-
-Here are guides on using llama-cpp-python and ctransformers with LangChain:
-
-* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
-* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

-<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

+## EXPERIMENTAL - REQUIRES LLAMA.CPP PR
+
+These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
+
+THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
+
+To test these GGUFs, please build llama.cpp from the above PR.
+
+I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
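A rough sketch of one way to fetch and build that PR (the local branch name `mixtral-pr` is arbitrary; `LLAMA_CUBLAS=1` applies only to NVidia CUDA builds, and other acceleration backends need their usual build flags):

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Fetch the PR (https://github.com/ggerganov/llama.cpp/pull/4406) into a local branch and switch to it
git fetch origin pull/4406/head:mixtral-pr
git checkout mixtral-pr

# Build - add LLAMA_CUBLAS=1 for CUDA, or run plain `make` for a CPU-only build
make LLAMA_CUBLAS=1
```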
## How to run in `text-generation-webui`

+Not currently supported

## How to run from Python code

+Not currently supported
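In the meantime, the files can be run directly with the `main` binary produced by the PR build above - a minimal sketch, where the prompt, context length and GPU layer count are illustrative values to adjust for your system and prompt template:

```shell
# -m: model file, -c: context length, -n: tokens to generate,
# -ngl: layers to offload to GPU (omit for CPU-only builds), -p: prompt
./main -m mixtralnt-4x7b-test.Q4_K_M.gguf -c 2048 -n 256 -ngl 35 -p "Write a story about llamas."
```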