bartowski committed
Commit e3cac03
1 Parent(s): 81636a5

measurement.json

Files changed (2)
  1. README.md +75 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,75 @@
---
pipeline_tag: text-generation
tags:
- labradorite
- llama
- llama-2
- ibm
license: llama2
license_link: https://ai.meta.com/llama/license/
language:
- en
quantized_by: bartowski
---

## Exllama v2 Quantizations of labradorite-13b

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, while the main branch contains only the measurement.json used for further conversions.

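If you want a bits per weight that isn't offered here, the measurement.json can be reused so that ExLlamaV2 skips the slow measurement pass. This is only a sketch, assuming exllamav2's convert.py flags as of v0.0.14, a local clone of the exllamav2 repo, and placeholder paths:

```shell
# Sketch only: paths are placeholders, and you need the original fp16 model
# (ibm/labradorite-13b) downloaded locally. See exllamav2's conversion docs
# for the exact flags in your version.
#   -i   input fp16 model directory
#   -o   scratch/working directory
#   -cf  output directory for the finished quant
#   -m   existing measurement file (skips the measurement pass)
#   -b   target bits per weight, -hb lm_head bits
python convert.py -i /path/to/labradorite-13b -o /path/to/workdir \
  -cf /path/to/labradorite-13b-exl2-4_5 -m measurement.json -b 4.5 -hb 6
```
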
Original model: https://huggingface.co/ibm/labradorite-13b

No GQA - VRAM requirements will be higher

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/labradorite-13b-exl2/tree/8_0) | 8.0 | 8.0 | 16.8 GB | 26.2 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [6_5](https://huggingface.co/bartowski/labradorite-13b-exl2/tree/6_5) | 6.5 | 8.0 | 14.4 GB | 24.0 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/labradorite-13b-exl2/tree/5_0) | 5.0 | 6.0 | 12.1 GB | 21.7 GB | Slightly lower quality vs 6.5; can fit in a 12 GB card at even lower context. |
| [4_25](https://huggingface.co/bartowski/labradorite-13b-exl2/tree/4_25) | 4.25 | 6.0 | 10.9 GB | 20.5 GB | GPTQ-equivalent bits per weight. |
| [3_75](https://huggingface.co/bartowski/labradorite-13b-exl2/tree/3_75) | 3.75 | 6.0 | 10.1 GB | 19.7 GB | Lower quality but still generally usable. |
| [3_0](https://huggingface.co/bartowski/labradorite-13b-exl2/tree/3_0) | 3.0 | 6.0 | 9.1 GB | 18.7 GB | Very low quality, not recommended unless you have to. |

VRAM requirements are listed for both 4k and 16k context because, without GQA, the difference is massive (about 9.6 GB): the fp16 key/value cache for this 13B architecture costs roughly 0.8 MB per token, so the extra ~12k tokens of context add roughly 9.6 GB.

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/labradorite-13b-exl2 labradorite-13b-exl2-6_5
```
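
The model weights are stored with Git LFS, so if a clone only pulls small pointer files, set up git-lfs first (this assumes git-lfs is installed on your system):

```shell
# One-time setup so git clone actually fetches the LFS-tracked .safetensors files
git lfs install
```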

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `labradorite-13b-exl2`:

```shell
mkdir labradorite-13b-exl2
huggingface-cli download bartowski/labradorite-13b-exl2 --local-dir labradorite-13b-exl2 --local-dir-use-symlinks False
```
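
On a fast connection, downloads through huggingface-cli can usually be sped up with the optional hf_transfer backend; this is an extra, optional dependency and not required for any of the commands here:

```shell
pip3 install hf_transfer
# Enable the faster transfer backend by prefixing the download command
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/labradorite-13b-exl2 --local-dir labradorite-13b-exl2 --local-dir-use-symlinks False
```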

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir labradorite-13b-exl2-6_5
huggingface-cli download bartowski/labradorite-13b-exl2 --revision 6_5 --local-dir labradorite-13b-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes has trouble with `_` in folder names, hence the `.` here):

```shell
mkdir labradorite-13b-exl2-6.5
huggingface-cli download bartowski/labradorite-13b-exl2 --revision 6_5 --local-dir labradorite-13b-exl2-6.5 --local-dir-use-symlinks False
```
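
Once downloaded, a quick way to sanity-check a quant is ExLlamaV2's bundled test script; this is a sketch that assumes you have cloned https://github.com/turboderp/exllamav2 and installed its requirements:

```shell
# Run from the exllamav2 repo root; -m points at the downloaded quant folder
python test_inference.py -m /path/to/labradorite-13b-exl2-6_5 -p "Once upon a time,"
```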

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
measurement.json ADDED
The diff for this file is too large to render.