Weyaxi committed on
Commit f3c4929 (1 parent: 6f70e1f)

Update README.md

Files changed (1):
1. README.md +58 -7
README.md CHANGED
@@ -4,6 +4,17 @@ base_model: mistralai/Mistral-7B-v0.1
  tags:
  - axolotl
  - generated_from_trainer
  datasets:
  - allenai/ai2_arc
  - camel-ai/physics
@@ -36,11 +47,14 @@ datasets:
  - migtissera/Synthia-v1.3
  - TIGER-Lab/ScienceEval
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
  <details><summary>See axolotl config</summary>

  axolotl version: `0.4.0`
@@ -149,8 +163,45 @@ resume_from_checkpoint: Einstein-v4-model/checkpoint-521

  </details><br>

- # Einstein-v4-7B

- This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4902
 
  tags:
  - axolotl
  - generated_from_trainer
+ - Mistral
+ - instruct
+ - finetune
+ - chatml
+ - gpt4
+ - synthetic data
+ - science
+ - physics
+ - chemistry
+ - biology
+ - math
  datasets:
  - allenai/ai2_arc
  - camel-ai/physics
 
  - migtissera/Synthia-v1.3
  - TIGER-Lab/ScienceEval
  ---
+ # 🔬 Einstein-v4-7B

+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on diverse datasets.
+
+ This model was fine-tuned on `7xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
+
+ This model's training was sponsored by [sablo.ai](https://sablo.ai).

  <details><summary>See axolotl config</summary>

  axolotl version: `0.4.0`
 

  </details><br>

+ # 💬 Prompt Template
+
+ You can use this prompt template while using the model:
+
+ ### ChatML
+
+ ```
+ <|im_start|>system
+ {system}<|im_end|>
+ <|im_start|>user
+ {user}<|im_end|>
+ <|im_start|>assistant
+ {assistant}<|im_end|>
+ ```
+
+ This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
+ `tokenizer.apply_chat_template()` method:
+
+ ```python
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "Hello!"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+ model.generate(gen_input)
+ ```
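
For a quick sanity check without loading the tokenizer, the ChatML layout above can also be reproduced with plain string formatting. This is a minimal sketch; `to_chatml` is a hypothetical helper written for this example, not part of the model or the transformers library:

```python
def to_chatml(messages):
    # Wrap each message in the ChatML special tokens shown in the template above.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # The trailing assistant header tells the model where to continue generating.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice `tokenizer.apply_chat_template()` is preferable, since it reads the template stored with the tokenizer instead of hard-coding the token strings.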
+
+ # 🤝 Acknowledgments
+
+ Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
+
+ Thanks to all the dataset authors mentioned in the datasets section.
+
+ Thanks to the [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) project for the training framework used to build this model.
+
+ Thanks to the entire open-source AI community.
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ If you would like to support me:

+ [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)