---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Einstein-v5-v0.2-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization.

<b>The "main" branch contains only the measurement.json; download one of the other branches for the model weights (see below).</b>

Each branch contains an individual bits-per-weight quantization, while the main branch holds only the measurement.json needed for further conversions.

Original model: https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B

68
+ | ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
69
+ | [8_0](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
70
+ | [6_5](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
71
+ | [5_0](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
72
+ | [4_25](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
73
+ | [3_5](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
74
+
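As a rough sanity check on the table above, the size of the quantized weights alone can be estimated as parameters × bits-per-weight ÷ 8; the VRAM figures additionally include the KV cache and runtime overhead, which grow with context length. A minimal sketch (the ~7.24B parameter count for Mistral-7B is an approximation, not a figure from this card):

```python
def weights_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough size of the quantized weights in GiB (excludes KV cache and overhead)."""
    return n_params * bits_per_weight / 8 / 2**30

# Mistral-7B has roughly 7.24e9 parameters (approximate).
for bpw in (8.0, 6.5, 5.0, 4.25, 3.5):
    print(f"{bpw:>4} bpw: about {weights_gib(7.24e9, bpw):.1f} GiB of weights")
```

The estimates (about 6.7 GiB at 8.0 bpw, 5.5 GiB at 6.5 bpw) sit a couple of GB under the 4k-context column, which is consistent with the cache and overhead making up the difference.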
## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2 Einstein-v5-v0.2-7B-exl2-6_5
```

With huggingface hub (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `Einstein-v5-v0.2-7B-exl2`:

```shell
mkdir Einstein-v5-v0.2-7B-exl2
huggingface-cli download bartowski/Einstein-v5-v0.2-7B-exl2 --local-dir Einstein-v5-v0.2-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter.

Linux:

```shell
mkdir Einstein-v5-v0.2-7B-exl2-6_5
huggingface-cli download bartowski/Einstein-v5-v0.2-7B-exl2 --revision 6_5 --local-dir Einstein-v5-v0.2-7B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes rejects `_` in folder names):

```shell
mkdir Einstein-v5-v0.2-7B-exl2-6.5
huggingface-cli download bartowski/Einstein-v5-v0.2-7B-exl2 --revision 6_5 --local-dir Einstein-v5-v0.2-7B-exl2-6.5 --local-dir-use-symlinks False
```
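
The same download can also be scripted from Python via `huggingface_hub.snapshot_download`. This is a sketch, not part of the original card; the repo id and branch names come from the table above:

```python
from huggingface_hub import snapshot_download

def download_branch(revision: str, local_dir: str) -> str:
    """Fetch one quantization branch (e.g. "6_5") into local_dir; returns the local path."""
    return snapshot_download(
        repo_id="bartowski/Einstein-v5-v0.2-7B-exl2",
        revision=revision,
        local_dir=local_dir,
    )

# Example (commented out here, since it pulls several GB):
# download_branch("6_5", "Einstein-v5-v0.2-7B-exl2-6_5")
```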

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski