---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Sao10K/Fimbulvetr-10.7B-v1
- saishf/Kuro-Lotus-10.7B
model-index:
- name: Fimbulvetr-Kuro-Lotus-10.7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.54
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.87
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60.95
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.87
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Fimbulvetr-Kuro-Lotus-10.7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, while the main branch contains only the measurement.json for further conversions.

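If you want a bits-per-weight target that isn't listed below, the measurement.json from the main branch can be reused so ExLlamaV2 doesn't have to repeat the measurement pass. A minimal sketch, assuming you've already downloaded the original fp16 model and that the flag names match convert.py in ExLlamaV2 v0.0.15 (check `python convert.py -h` for your version); all paths and the 5.5 bpw / 6-bit head targets are placeholders:

```shell
# Sketch: convert the original fp16 model to a new bpw target,
# reusing the provided measurement.json instead of re-measuring.
python convert.py \
  -i /path/to/Fimbulvetr-Kuro-Lotus-10.7B \
  -o /path/to/exl2-working-dir \
  -cf /path/to/Fimbulvetr-Kuro-Lotus-10.7B-exl2-5_5 \
  -m /path/to/measurement.json \
  -b 5.5 \
  -hb 6
```
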
Original model: https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2/tree/8_0) | 8.0 | 8.0 | 11.9 GB | 13.3 GB | 15.3 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2/tree/6_5) | 6.5 | 8.0 | 10.3 GB | 11.7 GB | 13.7 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2/tree/5_0) | 5.0 | 6.0 | 8.3 GB | 9.7 GB | 11.7 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2/tree/4_25) | 4.25 | 6.0 | 7.4 GB | 8.6 GB | 10.6 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 7.8 GB | 9.8 GB | Lower quality, only use if you have to. |

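The VRAM columns are rough totals for the weights plus context cache at the given context length. If you're not sure which branch will fit, one way to check available memory on an NVIDIA card is the query below (purely illustrative, not required for anything else on this page):

```shell
# Report total and currently free VRAM for each NVIDIA GPU
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```
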
## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2 Fimbulvetr-Kuro-Lotus-10.7B-exl2-6_5
```
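
Note that cloning pulls the model weights through Git LFS, so if you haven't used it before you'll likely need the one-time setup below first (assuming git-lfs itself is already installed via your package manager):

```shell
# One-time setup so git can fetch large files tracked by LFS
git lfs install
```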

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
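
Optionally, `huggingface-cli` downloads can be accelerated with the `hf_transfer` backend; this is an extra, not something these instructions require, and the sketch below assumes a Linux-style shell for the environment variable:

```shell
# Optional: install the hf_transfer backend and enable it for faster downloads
pip3 install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```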

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `Fimbulvetr-Kuro-Lotus-10.7B-exl2`:

```shell
mkdir Fimbulvetr-Kuro-Lotus-10.7B-exl2
huggingface-cli download bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2 --local-dir Fimbulvetr-Kuro-Lotus-10.7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir Fimbulvetr-Kuro-Lotus-10.7B-exl2-6_5
huggingface-cli download bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2 --revision 6_5 --local-dir Fimbulvetr-Kuro-Lotus-10.7B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which can have trouble with `_` in folder names, hence the `.` below):

```shell
mkdir Fimbulvetr-Kuro-Lotus-10.7B-exl2-6.5
huggingface-cli download bartowski/Fimbulvetr-Kuro-Lotus-10.7B-exl2 --revision 6_5 --local-dir Fimbulvetr-Kuro-Lotus-10.7B-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski