bartowski committed
Commit b7bd28c
1 Parent(s): 8b8d8d0

measurement.json

Files changed (2):
  1. README.md +194 -0
  2. measurement.json +0 -0

README.md ADDED
---
base_model:
- mistralai/Mistral-7B-v0.1
- berkeley-nest/Starling-LM-7B-alpha
- mlabonne/AlphaMonarch-7B
- cognitivecomputations/WestLake-7B-v2-laser
- senseable/garten2-7b

library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0

model-index:
- name: Starling_Monarch_Westlake_Garten-7B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: EQ-Bench
      type: eq-bench
      config: EQ-Bench
      split: v2.1
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 80.01
      name: self-reported
    source:
      url: https://github.com/EQ-bench/EQ-Bench
      name: EQ-Bench v2.1
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 71.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.15
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 67.92
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.16
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.95
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
      name: Open LLM Leaderboard
quantized_by: bartowski
pipeline_tag: text-generation
---
+
139
+ ## Exllama v2 Quantizations of Starling_Monarch_Westlake_Garten-7B-v0.1
140
+
141
+ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization.
142
+
143
+ <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
144
+
145
+ Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
146
+
147
+ Original model: https://huggingface.co/giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
148
+
149
+ | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
150
+ | ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
151
+ | [8_0](https://huggingface.co/bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
152
+ | [6_5](https://huggingface.co/bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
153
+ | [5_0](https://huggingface.co/bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
154
+ | [4_25](https://huggingface.co/bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
155
+ | [3_5](https://huggingface.co/bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
156
+
157
+ ## Download instructions
158
+
159
+ With git:
160
+
161
+ ```shell
162
+ git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2 Starling_Monarch_Westlake_Garten-7B-v0.1-exl2-6_5
163
+ ```
164
+
165
+ With huggingface hub (credit to TheBloke for instructions):
166
+
167
+ ```shell
168
+ pip3 install huggingface-hub
169
+ ```
170
+
171
+ To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Starling_Monarch_Westlake_Garten-7B-v0.1-exl2`:
172
+
173
+ ```shell
174
+ mkdir Starling_Monarch_Westlake_Garten-7B-v0.1-exl2
175
+ huggingface-cli download bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2 --local-dir Starling_Monarch_Westlake_Garten-7B-v0.1-exl2 --local-dir-use-symlinks False
176
+ ```
177
+
178
To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir Starling_Monarch_Westlake_Garten-7B-v0.1-exl2-6_5
huggingface-cli download bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2 --revision 6_5 --local-dir Starling_Monarch_Westlake_Garten-7B-v0.1-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't always like `_` in folder names):

```shell
mkdir Starling_Monarch_Westlake_Garten-7B-v0.1-exl2-6.5
huggingface-cli download bartowski/Starling_Monarch_Westlake_Garten-7B-v0.1-exl2 --revision 6_5 --local-dir Starling_Monarch_Westlake_Garten-7B-v0.1-exl2-6.5 --local-dir-use-symlinks False
```

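Once a branch is downloaded, a quick way to sanity-check the quant is ExLlamaV2's bundled inference script. This is a sketch only, assuming a local clone of the ExLlamaV2 repo with its requirements installed; script names and flags may vary between versions:

```shell
# run from the root of a local clone of https://github.com/turboderp/exllamav2
# -m : path to the downloaded exl2 model directory
# -p : prompt to generate from
python test_inference.py \
  -m ./Starling_Monarch_Westlake_Garten-7B-v0.1-exl2-6_5 \
  -p "Write a short poem about quantization."
```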
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
measurement.json ADDED