Suparious committed a293ccd (parent: 66864a3)

Add model card

Files changed: README.md (+256 −1)
---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: mistralai/Mistral-7B-v0.1
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
model-index:
- name: Einstein-v4-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.68
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.75
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.31
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 55.15
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.62
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
language:
- en
library_name: transformers
model_creator: Weyaxi
model_name: Einstein-v4-7B
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Suparious
---
# Weyaxi/Einstein-v4-7B AWQ

- Model creator: [Weyaxi](https://huggingface.co/Weyaxi)
- Original model: [Einstein-v4-7B](https://huggingface.co/Weyaxi/Einstein-v4-7B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/U0zyXVGj-O8a7KP3BvPue.png)

## Model Summary

This model is a full fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on a diverse collection of datasets.

It was fine-tuned on 7x RTX 3090 + 1x RTX A6000 GPUs using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

Training of the original model was sponsored by [sablo.ai](https://sablo.ai).

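As a quick sanity check on the benchmark scores listed in the metadata above, the Open LLM Leaderboard "Average" column is the simple mean of the six benchmark metrics. A minimal sketch (score values copied from the model-index metadata; the variable names are illustrative):

```python
# The six Open LLM Leaderboard metrics from the model-index metadata above.
scores = {
    "ARC (25-shot, acc_norm)": 64.68,
    "HellaSwag (10-shot, acc_norm)": 83.75,
    "MMLU (5-shot, acc)": 62.31,
    "TruthfulQA (0-shot, mc2)": 55.15,
    "Winogrande (5-shot, acc)": 76.24,
    "GSM8k (5-shot, acc)": 57.62,
}

# The leaderboard's "Average" column is the unweighted mean of these six.
average = sum(scores.values()) / len(scores)
print(f"Leaderboard average: {average:.2f}")
```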
## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Einstein-v4-7B-AWQ"
system_message = "You are Senzu, incarnated as a powerful AI."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Build the ChatML prompt and convert it to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = ("You're standing on the surface of the Earth. "
          "You walk one mile south, one mile west and one mile north. "
          "You end up exactly where you started. Where are you?")

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors="pt").input_ids.cuda()

# Generate output, streaming tokens to stdout as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users should use GGUF models instead.

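To see where the memory savings come from, note that 4-bit quantization stores eight weight values in a single 32-bit word, roughly a 4x reduction versus fp16. The sketch below illustrates only that bit packing, not AWQ's actual kernel layout (real AWQ storage also carries per-group scales and zero points, which this omits):

```python
# Illustration: packing eight 4-bit integer weights into one 32-bit word.
def pack_int4(values):
    """Pack eight integers in [0, 16) into a single int, 4 bits each."""
    assert len(values) == 8 and all(0 <= v < 16 for v in values)
    word = 0
    for i, v in enumerate(values):
        word |= v << (4 * i)  # weight i occupies bits [4i, 4i+4)
    return word

def unpack_int4(word):
    """Recover the eight 4-bit values from a packed word."""
    return [(word >> (4 * i)) & 0xF for i in range(8)]

weights = [3, 15, 0, 7, 9, 1, 12, 4]
packed = pack_int4(weights)
assert unpack_int4(packed) == weights  # round-trips losslessly
```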
It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using the AutoAWQ loader
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
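
The template above can be filled in with plain `str.format`, as the example code does; a minimal, self-contained sketch (the example system message and question are illustrative):

```python
# Fill the ChatML template with a system message and a user prompt.
chatml_template = (
    "<|im_start|>system\n"
    "{system_message}<|im_end|>\n"
    "<|im_start|>user\n"
    "{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

filled = chatml_template.format(
    system_message="You are a helpful assistant.",
    prompt="What is the speed of light?",
)

# The model generates its reply after the final assistant header.
assert filled.startswith("<|im_start|>system\n")
assert filled.endswith("<|im_start|>assistant\n")
```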