gobean committed
Commit b2c06e8
1 Parent(s): e61eda9

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -1,12 +1,14 @@
 ---
 license: apache-2.0
 ---
-Update: Someone requested q4_0, q5_0, and q6_k. Added, and q5_0 is my new favorite for this and any Mixtral derivative. Try it. Something about the 'k' process ever so slightly alters Mixtrals. Compare if you don't believe me.
+Update: A user (@concendo) asked whether these were made before or after the 4/3 update to llama.cpp. Since I wasn't sure, everything was requantized with the 4/18 version of llama.cpp.
+
+Note: The qx-k-m quants are not as good as the qx-0 quants; something about the 'k' process doesn't play nice with Mixtral.
 
 
 These are the quantized GGUF files for [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
 
-They were converted from Mistral's safetensors and quantized on April 3, 2024.
+They were converted from Mistral's safetensors and quantized on April 18, 2024.
 This matters because some of the GGUF files for Mixtral 8x7B were created as soon as llama.cpp supported MoE architecture, but there were still bugs at that time.
 Those bugs have since been patched.
 
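
For reference, the convert-then-quantize workflow the README describes follows llama.cpp's standard two-step pattern from that period. Below is a minimal sketch, assuming an April 2024 llama.cpp checkout with its `convert.py` script and a built `quantize` binary; the model directory and output file names are illustrative:

```sh
# Step 1: convert Mistral's safetensors checkpoint to a full-precision GGUF.
python convert.py ./Mixtral-8x7B-Instruct-v0.1 \
  --outtype f16 \
  --outfile mixtral-8x7b-instruct-f16.gguf

# Step 2: quantize the f16 GGUF; Q5_0 is the variant recommended above.
./quantize mixtral-8x7b-instruct-f16.gguf \
  mixtral-8x7b-instruct-q5_0.gguf Q5_0
```

The q4_0 and q6_k files would be produced the same way, changing only the quantization type argument in the second step.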