OptimizeLLM committed
Commit 1386567
1 parent: 194d9eb

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -22,11 +22,11 @@ This is Mistral AI's Mixtral Instruct v0.1 model, quantized on 02/24/2024. It wo
 # How to quantize your own models with Windows and an RTX GPU:
 
 ## Requirements:
-* Make sure you have git and python installed (if you use oobabooga etc you are probably good to go)
-
-The following example starts at the root of D drive and quantizes mistral's Mixtral-9x7B-Instruct-v0.1:
+* git
+* python
 
 # Instructions:
+The following example starts at the root of D drive and quantizes mistral's Mixtral-9x7B-Instruct-v0.1.
 
 ## Windows command prompt - folder setup and git clone llama.cpp
 * D:
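
The hunk ends at the first command of the "folder setup and git clone llama.cpp" step, so the rest of that step is not shown in this diff. As a rough sketch only, assuming the README continues with a conventional clone of the llama.cpp repository (the working-folder name below is a placeholder, not taken from the commit), the step it introduces would look roughly like this in a Windows command prompt:

```bat
REM Confirm the requirements listed in the README are installed
git --version
python --version

REM Switch to the root of the D drive and create a working folder (name is illustrative)
D:
mkdir quantize
cd quantize

REM Clone llama.cpp, which provides the conversion and quantization tools
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```

The `* D:` bullet in the hunk is the first of these commands; the remaining walkthrough sits below the line range shown in this README.md hunk.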