Update README.md
README.md CHANGED
@@ -31,7 +31,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ)
- * [
+ * [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)

<!-- compatibility_ggml start -->
@@ -43,23 +43,14 @@ I have quantized these 'original' quantisation methods using an older version of

These are guaranteed to be compatible with any UIs, tools and libraries released since late May.

- ### New k-quant methods:
+ ### New k-quant methods: not supported at the moment due to model's vocab size

-
+ Unfortunately it is not possible to make the new k-quant format quantisations for this model at this time.

-
+ This is because the model uses a non-standard vocab size of 32,001, which is not divisible by 256.

-
+ This is being investigated by the llama.cpp team and may be fixed in future. You can read more about that here: https://github.com/ggerganov/llama.cpp/issues/1919

- The new methods available are:
- * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
- * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
- * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
- * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
-
- Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
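For context on the "not divisible by 256" point in the added lines: the k-quant formats pack weights into super-blocks of 256 (the Q8_K description removed above mentions that block size), so any tensor dimension that gets k-quantised, including the one tied to the vocab size, must be an exact multiple of 256. Below is a minimal sketch of that check, assuming the 256 super-block size; the helper function is illustrative only, not part of llama.cpp.

```python
# Why a 32,001-entry vocab blocks k-quantisation: k-quant super-blocks hold
# 256 weights (QK_K in llama.cpp's k-quant code), so a tensor dimension must
# split into whole super-blocks.
QK_K = 256

def is_k_quantizable(dim: int) -> bool:
    """True if a tensor dimension can be carved into whole 256-weight super-blocks."""
    return dim % QK_K == 0

print(is_k_quantizable(32000))  # True  (32,000 = 125 * 256, the standard LLaMA vocab size)
print(is_k_quantizable(32001))  # False (one entry left over, as with this model)
```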
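The bits-per-weight figures quoted in the removed k-quant list can be sanity-checked with simple arithmetic. A rough sketch follows, assuming one fp16 super-block scale for the "type-0" formats and an fp16 scale plus an fp16 min for the "type-1" formats; that per-super-block overhead is an assumption, not something the README states.

```python
# Back-of-the-envelope check of the bits-per-weight (bpw) figures quoted in the
# removed k-quant list. Per-super-block fp16 overhead is assumed, not documented.
SUPER_BLOCK = 256  # weights per k-quant super-block

def bpw(quant_bits, blocks, scale_bits, min_bits=0, fp16_fields=1):
    """Effective bits per weight for one super-block."""
    total_bits = (SUPER_BLOCK * quant_bits           # the quantized weights themselves
                  + blocks * (scale_bits + min_bits)  # per-block scales (and mins)
                  + fp16_fields * 16)                 # per-super-block fp16 scale / min
    return total_bits / SUPER_BLOCK

# GGML_TYPE_Q4_K: 8 blocks of 32 weights, 6-bit scales and mins -> 4.5 bpw
print(bpw(4, blocks=8, scale_bits=6, min_bits=6, fp16_fields=2))   # 4.5

# GGML_TYPE_Q6_K: 16 blocks of 16 weights, 8-bit scales, no mins -> 6.5625 bpw
print(bpw(6, blocks=16, scale_bits=8, fp16_fields=1))              # 6.5625
```

Under the same assumption, the arithmetic also reproduces the 3.4375 bpw quoted for GGML_TYPE_Q3_K and the 5.5 bpw quoted for GGML_TYPE_Q5_K.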