froggeric committed on
Commit
45733e6
1 Parent(s): 602aff3

Create README.md

Files changed (1)
  1. README.md +29 -0
README.md ADDED
---
language:
- en
---

# Input files for generating the Importance Matrix

## How to quantize with an imatrix in llama.cpp

1. Get one of the input files collected here, or elsewhere.
2. Convert or download the model you want to quantise, in fp16 GGUF format (a conversion sketch is shown after step 4).
3. Generate an imatrix file specific to the model you want to quantise:
```
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <matrix_training_path>/<plain_text_matrix_file> -o <output_binary_file.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512

# -ngl 144     : number of layers offloaded to the GPU (recommended: the number of layers the model contains)
# -t 12        : number of threads (should roughly match the number of CPU cores)
# -c 512       : context size; testing seems to show 512 is recommended (default=512, 0=loaded from model)
# -b 512       : batch size (default=512)
# --chunks 100 : number of chunks to process (recommended)
# --mlock      : keep the model in RAM (only use if you have sufficient RAM for the whole fp16 model)
```
4. Use the generated binary matrix file to quantise the model:
```
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
```
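
For step 2, a minimal conversion sketch, assuming a local Hugging Face model directory and llama.cpp's `convert.py` (the script name, paths, and flags here are assumptions and may differ between llama.cpp versions):
```
cd <llama.cpp directory>
# convert a local Hugging Face model directory to an fp16 GGUF file
python3 convert.py <hf_model_directory> --outtype f16 --outfile <model_path>/ggml-model-f16.gguf
```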
Note: normal quantisation also benefits from using an imatrix file. It also seems that larger input data gives better results for higher quantisation levels.
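
As an illustration of that note, the same imatrix file can be reused when producing a regular K-quant; the quant type below is only an example:
```
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-Q5_K_M.gguf Q5_K_M
```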