bartowski committed on
Commit
9027a60
1 Parent(s): e135a11

measurement.json

Files changed (2)
  1. README.md +63 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,63 @@
---
license: cc-by-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of HuginnV5.5-12.6B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json for further conversions.

Original model: https://huggingface.co/The-Face-Of-Goonery/HuginnV5.5-12.6B

| Branch | Bits | lm_head bits | Size | Description |
| ------ | ---- | ------------ | ------ | ----------- |
| [6_5](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5. |
| [4_25](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ-equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, not recommended. |

All VRAM requirements are estimated at 16k context; for 32k context, add ~2 GB.

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/HuginnV5.5-12.6B-exl2 HuginnV5.5-12.6B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `HuginnV5.5-12.6B-exl2`:

```shell
mkdir HuginnV5.5-12.6B-exl2
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --local-dir HuginnV5.5-12.6B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir HuginnV5.5-12.6B-exl2-6_5
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --revision 6_5 --local-dir HuginnV5.5-12.6B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes doesn't like `_` in folder names):

```shell
mkdir HuginnV5.5-12.6B-exl2-6.5
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --revision 6_5 --local-dir HuginnV5.5-12.6B-exl2-6.5 --local-dir-use-symlinks False
```
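
If you prefer to script the download, the same `huggingface_hub` package installed above exposes `snapshot_download`, which the CLI wraps. A minimal sketch; the `local_dir_for` helper is hypothetical and just mirrors the per-branch folder naming used in the commands above (swapping `_` for `.` on Windows):

```python
# Programmatic alternative to the huggingface-cli commands above.

REPO_ID = "bartowski/HuginnV5.5-12.6B-exl2"

def local_dir_for(revision: str, windows: bool = False) -> str:
    """Build the per-branch folder name used in the commands above.

    On Windows, underscores in folder names can cause trouble,
    so swap them for dots there.
    """
    suffix = revision.replace("_", ".") if windows else revision
    return f"HuginnV5.5-12.6B-exl2-{suffix}"

if __name__ == "__main__":
    # Requires `pip3 install huggingface-hub` and network access.
    from huggingface_hub import snapshot_download
    snapshot_download(repo_id=REPO_ID, revision="6_5",
                      local_dir=local_dir_for("6_5"))
```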

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
measurement.json ADDED
The diff for this file is too large to render. See raw diff