OptimizeLLM committed on
Commit
bde709a
1 Parent(s): 1386567

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -52,4 +52,4 @@ Extract the two .zip files directly into the llama.cpp folder you just git clone
 ## Windows command prompt - Quantize the fp16 model to q5_k_m:
 * D:\llama.cpp>quantize.exe D:\Mixtral\Mixtral-8x7B-Instruct-v0.1.fp16.bin D:\Mixtral\Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf q5_k_m
 
-That's it. Load up the resulting .gguf file like you normally would.
+That's it!
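
The sentence removed here still describes the intended final step: load the resulting .gguf file as usual. A minimal sketch of that step, assuming llama.cpp's main.exe example binary with its standard -m (model path), -p (prompt), and -n (number of tokens to generate) options; the prompt text and token count below are placeholders and are not part of this commit:

* D:\llama.cpp>main.exe -m D:\Mixtral\Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf -p "Write a short note about quantization." -n 128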