bartowski committed
Commit 3f1ed6e
1 Parent(s): 9433533

Update README.md

Files changed (1)
  1. README.md +8 -18
README.md CHANGED
@@ -15,29 +15,19 @@ pipeline_tag: text-generation
 
 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
 
-## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
+<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>
 
 Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.
 
-Conversion was done using the default calibration dataset.
-
-Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
 Original model: https://huggingface.co/cognitivecomputations/mixtral-instruct-0.1-laser
 
-
-<a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/6_5">6.5 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/5_0">5.0 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/4_25">4.25 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_75">3.75 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_5">3.5 bits per weight</a>
-
-<a href="https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_0">3.0 bits per weight</a>
-
+| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
+| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
+| [6_5](https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/6_5) | 6.5 | 8.0 | 38.9 GB | 40.4 GB | 42.4 GB | Near-unquantized performance at vastly reduced size, **recommended (if you can run it)**. |
+| [4_25](https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/4_25) | 4.25 | 6.0 | 25.9 GB | 27.4 GB | 29.4 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
+| [3_75](https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_75) | 3.75 | 6.0 | 23.0 GB | 24.5 GB | 26.5 GB | Lower quality, but quite usable; good for 4k context on 24GB. |
+| [3_5](https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_5) | 3.5 | 6.0 | 21.5 GB | 23.0 GB | 25.0 GB | Lower quality; only use if you need more context on 24GB. |
+| [3_0](https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2/tree/3_0) | 3.0 | 6.0 | 18.9 GB | 20.4 GB | 22.4 GB | Very low quality; pushes context to the max but likely unusable. |
 
 ## Download instructions
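Under the download instructions, each quantized branch is fetched individually by revision. As a minimal sketch (not the README's verbatim instructions; the local directory name is a placeholder), pulling the 6_5 branch with `huggingface-cli` looks like this:

```shell
# Install the Hugging Face hub client, then download only the 6_5 branch;
# "main" holds just the measurement.json, not model weights.
pip install -U huggingface_hub
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 \
  --revision 6_5 --local-dir mixtral-instruct-0.1-laser-exl2-6_5
```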
 
 
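As the diff notes, the main branch keeps the measurement.json so further conversions can skip the slow measurement pass. A sketch of such a re-conversion with ExLlamaV2 v0.0.13's convert.py, assuming an exllamav2 checkout and placeholder paths; the 8-bit head matches the removed note about bits per weight above 6.0:

```shell
# Re-quantize the original fp16 model at 6.5 bpw, reusing the shipped
# measurement.json (-m) instead of re-measuring.
# -i: original model dir   -o: scratch dir   -cf: final quantized output dir
# -b: target bits per weight   -hb: lm_head bits (8 since bpw > 6.0)
python convert.py -i /path/to/mixtral-instruct-0.1-laser -o /tmp/exl2-work \
  -cf /path/to/mixtral-instruct-0.1-laser-exl2-6_5 -m measurement.json -b 6.5 -hb 8
```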