Update README.md
README.md CHANGED
@@ -44,15 +44,17 @@ quantized_by: TheBloke
 
 This repo contains GGUF format model files for [Mistral AI_'s Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
 
-
-
-
-
-I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
+**MIXTRAL GGUF SUPPORT**
+
+Known to work in:
+* llama.cpp as of December 13th
+* KoboldCpp 1.52 and later
+* LM Studio 0.2.9 and later
+
+Support for Mixtral was merged into Llama.cpp on December 13th.
+
+Other clients/libraries, not listed above, may not yet work.
+
 <!-- description end -->
 
 <!-- repositories-available start -->
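The updated description says these GGUF files work in llama.cpp builds from December 13th onward. A minimal sketch of what running one of them might look like with the llama.cpp `main` binary — the filename `mixtral-8x7b-v0.1.Q4_K_M.gguf` and the layer-offload count are assumptions for illustration, not taken from this commit:

```shell
# Hypothetical invocation, assuming a llama.cpp build from Dec 13th or later
# and an assumed quantized filename downloaded from this repo.
./main \
  -m mixtral-8x7b-v0.1.Q4_K_M.gguf \
  -p "Building a website can be done in 10 simple steps:" \
  -n 128 \
  -ngl 20   # number of layers to offload to the GPU; tune for your VRAM
```

Older builds of llama.cpp (or clients pinned to them) will fail to load Mixtral GGUFs, which is why the description calls out the December 13th merge date.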