bartowski committed
Commit 35e42d5
Parent: ffb5c47

Update README.md

Files changed (1)
  1. README.md +8 -15
README.md CHANGED
@@ -17,26 +17,19 @@ pipeline_tag: text-generation

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.

- ## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
+ <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>

Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.

- Conversion was done using the default calibration dataset.
-
- Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1

-
- <a href="https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/8_0">8.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/6_5">6.5 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/5_0">5.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/4_25">4.25 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/3_5">3.5 bits per weight</a>
+ | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
+ | ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
+ | [8_0](https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/8_0) | 8.0 | 8.0 | 16.6 GB | 17.5 GB | 18.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+ | [6_5](https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/6_5) | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near unquantized performance at vastly reduced size, **recommended**. |
+ | [5_0](https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/5_0) | 5.0 | 6.0 | 11.2 GB | 12.2 GB | 13.5 GB | Slightly lower quality vs 6.5. |
+ | [4_25](https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/4_25) | 4.25 | 6.0 | 9.8 GB | 10.7 GB | 12.0 GB | GPTQ-equivalent bits per weight. |
+ | [3_5](https://huggingface.co/bartowski/starchat2-15b-v0.1-exl2/tree/3_5) | 3.5 | 6.0 | 8.4 GB | 9.3 GB | 10.6 GB | Lower quality, not recommended. |


## Download instructions
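
The removed paragraph about defaults (lm_head at 8 bits above 6.0 bpw, otherwise the default 6) still describes how the per-branch quants and the main branch's measurement.json relate. As a minimal sketch only, here is roughly what such an ExLlamaV2 v0.0.15 conversion call looks like; all paths are placeholders, and the exact arguments used for these branches are not shown in this commit:

```shell
# Sketch of an ExLlamaV2 conversion run (paths are placeholders, not from this repo).
# -b sets bits per weight; -hb sets lm_head bits (8 above 6.0 bpw, else the default 6).
# -m reuses the measurement.json from the main branch so the measurement pass is skipped.
python convert.py \
  -i /models/starchat2-15b-v0.1 \
  -o /tmp/exl2-work \
  -cf /models/starchat2-15b-v0.1-exl2-6_5 \
  -b 6.5 \
  -hb 8 \
  -m measurement.json
```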
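Because the weights live on the non-main branches, a download has to name a branch as the revision. A minimal sketch with huggingface-cli, assuming the recommended 6_5 branch (the local directory name is arbitrary):

```shell
# Download one quantization branch (revision) rather than main,
# which holds only measurement.json.
huggingface-cli download bartowski/starchat2-15b-v0.1-exl2 \
  --revision 6_5 \
  --local-dir starchat2-15b-v0.1-exl2-6_5
```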