---
license: cc-by-nc-4.0
base_model: google/gemma-7b-it
tags:
- generated_from_trainer
- axolotl
- gemma
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: gemma-7b-openhermes
  results: []
datasets:
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: bartowski
---

## Exllama v2 Quantizations of gemma-7b-openhermes

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.

<b>The "main" branch contains only the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains a quantization at a different bits per weight; the main branch holds only the measurement.json, which can be reused for further conversions.
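
The measurement.json records the calibration measurements ExLlamaV2 collects in its first pass, so it can be fed back to the converter to produce additional bit rates without re-measuring. A minimal sketch of how that might look, assuming a local copy of the original fp16 model and a checkout of the ExLlamaV2 repo (all paths here are placeholders):

```shell
# Reuse this repo's measurement.json to quantize to a new bits-per-weight target.
# -i: original fp16 model dir, -o: scratch/working dir, -m: existing measurement file,
# -b: target bits per weight, -hb: lm_head bits, -cf: output dir for the finished model.
python convert.py -i ./gemma-7b-openhermes -o ./exl2-workdir \
  -m ./measurement.json -b 6.5 -hb 8 -cf ./gemma-7b-openhermes-exl2-6_5
```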

Original model: https://huggingface.co/abideen/gemma-7b-openhermes

No GQA - this model does not use grouped-query attention, so VRAM requirements will be higher than for similarly sized models that do.

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/gemma-7b-openhermes-exl2 gemma-7b-openhermes-exl2-6_5
```
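
Note that Hugging Face stores the model weights in Git LFS, so LFS support needs to be enabled before cloning (a one-time setup, assuming git-lfs is already installed on the system):

```shell
# Enable Git LFS so the clone fetches real weight files instead of pointer stubs
git lfs install
```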

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `gemma-7b-openhermes-exl2`:

```shell
mkdir gemma-7b-openhermes-exl2
huggingface-cli download bartowski/gemma-7b-openhermes-exl2 --local-dir gemma-7b-openhermes-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir gemma-7b-openhermes-exl2-6_5
huggingface-cli download bartowski/gemma-7b-openhermes-exl2 --revision 6_5 --local-dir gemma-7b-openhermes-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes has trouble with `_` in folder names, hence the `.` here):

```shell
mkdir gemma-7b-openhermes-exl2-6.5
huggingface-cli download bartowski/gemma-7b-openhermes-exl2 --revision 6_5 --local-dir gemma-7b-openhermes-exl2-6.5 --local-dir-use-symlinks False
```
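
Once downloaded, one quick way to smoke-test the weights is the chat example bundled with ExLlamaV2 (a sketch, assuming a checkout of the ExLlamaV2 repo with its requirements installed; the model is tuned on ChatML-formatted data, hence the prompt mode):

```shell
# Run from the root of https://github.com/turboderp/exllamav2
python examples/chat.py -m ./gemma-7b-openhermes-exl2-6_5 -mode chatml
```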

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski