bartowski committed
Commit
0b5bb7d
1 Parent(s): 320dc8a

Update VRAM estimates

Files changed (1): README.md +9 -14
README.md CHANGED
@@ -12,26 +12,21 @@ pipeline_tag: text-generation
 
  Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.
 
- ## The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
+ <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>
 
  Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.
 
- Conversion was done using the default calibration dataset.
-
- Default arguments used except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
  Original model: https://huggingface.co/TIGER-Lab/StructLM-7B
 
- <a href="https://huggingface.co/bartowski/StructLM-7B-exl2/tree/8_0">8.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/StructLM-7B-exl2/tree/6_5">6.5 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/StructLM-7B-exl2/tree/5_0">5.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/StructLM-7B-exl2/tree/4_25">4.25 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/StructLM-7B-exl2/tree/3_5">3.5 bits per weight</a>
+ No GQA - VRAM requirements will be higher
+
+ | Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
+ | ------ | ---- | ------------ | --------- | ---------- | ----------- |
+ | [8_0](https://huggingface.co/bartowski/StructLM-7B-exl2/tree/8_0) | 8.0 | 8.0 | 9.0 GB | 15.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+ | [6_5](https://huggingface.co/bartowski/StructLM-7B-exl2/tree/6_5) | 6.5 | 8.0 | 8.2 GB | 14.4 GB | Near unquantized performance at vastly reduced size, **recommended**. |
+ | [5_0](https://huggingface.co/bartowski/StructLM-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.8 GB | 13.0 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards with 4k context. |
+ | [4_25](https://huggingface.co/bartowski/StructLM-7B-exl2/tree/4_25) | 4.25 | 6.0 | 6.1 GB | 12.3 GB | GPTQ-equivalent bits per weight. |
+ | [3_5](https://huggingface.co/bartowski/StructLM-7B-exl2/tree/3_5) | 3.5 | 6.0 | 5.5 GB | 11.7 GB | Lower quality, not recommended. |
 
  ## Download instructions
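
The diff context ends at this heading, so the README's actual download steps are not shown here. As a minimal sketch, assuming the `huggingface_hub` CLI and a hypothetical local directory name, fetching a single quant branch (the recommended 6_5) might look like:

```shell
# Install the Hugging Face Hub CLI (assumed prerequisite)
pip install -U "huggingface_hub[cli]"

# Download one quant branch; --revision selects the branch, so the
# "main" branch (measurement.json only) is never fetched by accident.
huggingface-cli download bartowski/StructLM-7B-exl2 \
  --revision 6_5 \
  --local-dir StructLM-7B-exl2-6_5
```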
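
For context on the table's lm_head column: since main carries only the measurement.json "for further conversions", any listed branch can in principle be re-derived from the original weights. A hedged sketch, assuming ExLlamaV2 v0.0.14's convert.py interface and hypothetical local paths:

```shell
# Reproduce the 6_5 branch from the original model (all paths are hypothetical).
# -m reuses the measurement.json from the main branch, skipping the measuring pass;
# -b sets bits per weight, -hb the lm_head bits (8 when bpw is above 6.0, else 6).
python exllamav2/convert.py \
  -i ./StructLM-7B \
  -o ./workdir \
  -cf ./StructLM-7B-exl2-6_5 \
  -m ./measurement.json \
  -b 6.5 \
  -hb 8
```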