TheBloke committed
Commit 4d37e71
1 Parent(s): e6f0848

Update README.md

Files changed (1)
  1. README.md +7 -5
README.md CHANGED
@@ -44,15 +44,17 @@ quantized_by: TheBloke
 
 This repo contains GGUF format model files for [Mistral AI_'s Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
 
-## EXPERIMENTAL - REQUIRES LLAMA.CPP PR
+**MIXTRAL GGUF SUPPORT**
 
-These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
+Known to work in:
+* llama.cpp as of December 13th
+* KoboldCpp 1.52 and later
+* LM Studio 0.2.9 and later
 
-THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
+Support for Mixtral was merged into Llama.cpp on December 13th.
 
-To test these GGUFs, please build llama.cpp from the above PR.
+Other clients/libraries, not listed above, may not yet work.
 
-I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
 <!-- description end -->
 
 <!-- repositories-available start -->