bartowski committed on
Commit
4e55dbd
1 Parent(s): c2ee45a

measurement.json

Browse files
Files changed (2)
  1. README.md +71 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,71 @@

---
base_model:
- TheBloke/Llama-2-13B-fp16
tags:
- mergekit
- merge
license: cc-by-nc-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of LLaMA2-13B-Estopia

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, while the `main` branch contains only the measurement.json used for further conversions (see the sketch below).
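
As a rough illustration, a quantization like these can be reproduced with ExLlamaV2's `convert.py`, reusing the provided measurement.json to skip the measurement pass. The paths below are placeholders, and the `-b`/`-hb` values match the 6_5 branch:

```shell
# Sketch only: quantize the fp16 source model with ExLlamaV2, reusing measurement.json.
# -i  source model directory (placeholder path)
# -o  scratch/working directory
# -cf compiled output folder
# -b  target bits per weight, -hb lm_head bits (6.5 / 8 matches the 6_5 branch)
python convert.py \
  -i /models/LLaMA2-13B-Estopia \
  -o /tmp/exl2-work \
  -cf /models/LLaMA2-13B-Estopia-exl2-6_5 \
  -m measurement.json \
  -b 6.5 \
  -hb 8
```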

Original model: https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia

No GQA - VRAM requirements will be higher than for models with grouped-query attention.

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ----------- |
| [6_5](https://huggingface.co/bartowski/LLaMA2-13B-Estopia-exl2/tree/6_5) | 6.5 | 8.0 | 14.4 GB | 24.0 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/LLaMA2-13B-Estopia-exl2/tree/5_0) | 5.0 | 6.0 | 12.1 GB | 21.7 GB | Slightly lower quality vs 6.5; can fit in a 12 GB card at even lower context. |
| [4_25](https://huggingface.co/bartowski/LLaMA2-13B-Estopia-exl2/tree/4_25) | 4.25 | 6.0 | 10.9 GB | 20.5 GB | GPTQ-equivalent bits per weight. |
| [3_75](https://huggingface.co/bartowski/LLaMA2-13B-Estopia-exl2/tree/3_75) | 3.75 | 6.0 | 10.1 GB | 19.7 GB | Lower quality but still generally usable. |
| [3_0](https://huggingface.co/bartowski/LLaMA2-13B-Estopia-exl2/tree/3_0) | 3.0 | 6.0 | 9.1 GB | 18.7 GB | Very low quality; not recommended unless you have to use it. |

VRAM requirements are listed for both 4k and 16k context because, without GQA, the difference between the two is massive (9.6 GB).
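
To sanity-check that a branch actually fits on your card, you can try loading it with the `test_inference.py` script bundled in the exllamav2 repo (a sketch; the model path is a placeholder):

```shell
# Assumes the exllamav2 repo is cloned and a branch has been downloaded locally
python test_inference.py -m /models/LLaMA2-13B-Estopia-exl2-6_5 -p "Once upon a time"
```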

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/LLaMA2-13B-Estopia-exl2 LLaMA2-13B-Estopia-exl2-6_5
```
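
Note that these repos store the model weights with Git LFS, so make sure it is installed and enabled before cloning, or you'll end up with small pointer files instead of the weights:

```shell
# Debian/Ubuntu shown; use your platform's package manager equivalent
sudo apt install git-lfs
git lfs install
```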

With the `huggingface-hub` CLI (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```
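
Optionally, on fast connections you can speed downloads up with the `hf_transfer` backend (not required for any of the steps below):

```shell
# Optional: higher-throughput downloads via the hf_transfer package
pip3 install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```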

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `LLaMA2-13B-Estopia-exl2`:

```shell
mkdir LLaMA2-13B-Estopia-exl2
huggingface-cli download bartowski/LLaMA2-13B-Estopia-exl2 --local-dir LLaMA2-13B-Estopia-exl2 --local-dir-use-symlinks False
```
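
If the measurement.json really is all you want (e.g. for your own conversions), `huggingface-cli download` can also fetch a single file by name:

```shell
# Grab just the measurement.json from the main branch into the current directory
huggingface-cli download bartowski/LLaMA2-13B-Estopia-exl2 measurement.json --local-dir .
```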

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir LLaMA2-13B-Estopia-exl2-6_5
huggingface-cli download bartowski/LLaMA2-13B-Estopia-exl2 --revision 6_5 --local-dir LLaMA2-13B-Estopia-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes has trouble with `_` in folder names):

```shell
mkdir LLaMA2-13B-Estopia-exl2-6.5
huggingface-cli download bartowski/LLaMA2-13B-Estopia-exl2 --revision 6_5 --local-dir LLaMA2-13B-Estopia-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
measurement.json ADDED
The diff for this file is too large to render.