|
--- |
|
license: other |
|
tags: |
|
- axolotl |
|
- generated_from_trainer |
|
- Mistral |
|
- instruct |
|
- finetune |
|
- chatml |
|
- gpt4 |
|
- synthetic data |
|
- science |
|
- physics |
|
- chemistry |
|
- biology |
|
- math |
|
base_model: alpindale/Mistral-7B-v0.2-hf |
|
datasets: |
|
- allenai/ai2_arc |
|
- camel-ai/physics |
|
- camel-ai/chemistry |
|
- camel-ai/biology |
|
- camel-ai/math |
|
- metaeval/reclor |
|
- openbookqa |
|
- mandyyyyii/scibench |
|
- derek-thomas/ScienceQA |
|
- TIGER-Lab/ScienceEval |
|
- jondurbin/airoboros-3.2 |
|
- LDJnr/Capybara |
|
- Cot-Alpaca-GPT4-From-OpenHermes-2.5 |
|
- STEM-AI-mtl/Electrical-engineering |
|
- knowrohit07/saraswati-stem |
|
- sablo/oasst2_curated |
|
- lmsys/lmsys-chat-1m |
|
- TIGER-Lab/MathInstruct |
|
- bigbio/med_qa |
|
- meta-math/MetaMathQA-40K |
|
- piqa |
|
- scibench |
|
- sciq |
|
- Open-Orca/SlimOrca |
|
- migtissera/Synthia-v1.3 |
|
- allenai/WildChat |
|
- microsoft/orca-math-word-problems-200k |
|
- openchat/openchat_sharegpt4_dataset |
|
- teknium/GPTeacher-General-Instruct |
|
- m-a-p/CodeFeedback-Filtered-Instruction |
|
--- |
|
|
|
|
|
# 🔬 Einstein-v5-v0.2-7B |
|
|
|
This model is a fully fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) trained on a diverse collection of datasets.
|
|
|
It was fine-tuned on `8xRTX3090` + `1xRTXA6000` GPUs using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
|
|
|
This model's training was sponsored by [sablo.ai](https://sablo.ai). |
|
|
|
<details><summary>See axolotl config</summary> |
|
|
|
axolotl version: `0.4.0` |
|
```yaml |
|
base_model: alpindale/Mistral-7B-v0.2-hf |
|
model_type: MistralForCausalLM |
|
tokenizer_type: LlamaTokenizer |
|
is_mistral_derived_model: true |
|
|
|
load_in_8bit: false |
|
load_in_4bit: false |
|
strict: false |
|
|
|
chat_template: chatml |
|
datasets: |
|
- path: data/merged_all.json |
|
ds_type: json |
|
type: alpaca |
|
conversation: chatml |
|
|
|
- path: data/gpteacher-instruct-special-alpaca.json |
|
ds_type: json |
|
type: gpteacher |
|
conversation: chatml |
|
|
|
- path: data/capybara_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/synthia-v1.3_sharegpt_12500.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/slimorca_dedup_filtered_95k_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
strict: false |
|
conversation: chatml |
|
|
|
- path: data/pippa_bagel_repo_3k_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/gpt4_data_lmys_1m_sharegpt.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
- path: data/sharegpt_gpt4_english.json |
|
ds_type: json |
|
type: sharegpt |
|
conversation: chatml |
|
|
|
dataset_prepared_path: last_run_prepared |
|
# val_set_size: 0.005 |
|
val_set_size: 0.0 |
|
|
|
do_bench_eval: true |
|
|
|
output_dir: ./Einstein-v5-Mistral-v0.2-beta-model |
|
|
|
sequence_len: 8192 |
|
sample_packing: true |
|
pad_to_sequence_len: true |
|
eval_sample_packing: false |
|
|
|
wandb_project: Einstein |
|
wandb_entity: |
|
wandb_watch: |
|
wandb_name: |
|
wandb_log_model: |
|
hub_model_id: Weyaxi/Einstein-v5-Mistral-v0.2-beta |
|
|
|
save_safetensors: true |
|
|
|
gradient_accumulation_steps: 4 |
|
micro_batch_size: 1 |
|
num_epochs: 2 |
|
optimizer: adamw_bnb_8bit |
|
lr_scheduler: cosine |
|
learning_rate: 0.000005 |
|
|
|
train_on_inputs: false |
|
group_by_length: false |
|
bf16: true |
|
fp16: false |
|
tf32: false |
|
|
|
gradient_checkpointing: true |
|
early_stopping_patience: |
|
resume_from_checkpoint: |
|
local_rank: |
|
logging_steps: 1 |
|
xformers_attention: |
|
flash_attention: true |
|
|
|
warmup_steps: 10 |
|
evals_per_epoch: 3 # changed |
|
eval_table_size: |
|
eval_table_max_new_tokens: 128 |
|
saves_per_epoch: 3 # changed |
|
debug: |
|
|
|
deepspeed: zero3_bf16.json |
|
weight_decay: 0.0 |
|
fsdp: |
|
fsdp_config: |
|
special_tokens: |
|
bos_token: "<s>" |
|
eos_token: "<|im_end|>" |
|
unk_token: "<unk>" |
|
tokens: |
|
- "<|im_start|>" |
|
``` |
|
|
|
</details><br> |
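
For reference, the effective global batch size implied by this config is `micro_batch_size (1) × gradient_accumulation_steps (4) × 9 GPUs = 36` packed sequences per optimizer step, assuming all nine GPUs listed above take part in data parallelism; with `sample_packing: true`, each sequence holds up to `sequence_len` (8192) tokens.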
|
|
|
# 💬 Prompt Template |
|
|
|
You can use the following prompt template when querying the model:
|
|
|
### ChatML |
|
|
|
``` |
|
<|im_start|>system |
|
{system}<|im_end|> |
|
<|im_start|>user |
|
{user}<|im_end|> |
|
<|im_start|>assistant |
|
{assistant}<|im_end|>
|
``` |
|
|
|
This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the |
|
`tokenizer.apply_chat_template()` method: |
|
|
|
```python |
|
messages = [ |
|
{"role": "system", "content": "You are helpful AI asistant."}, |
|
{"role": "user", "content": "Hello!"} |
|
] |
|
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
|
model.generate(**gen_input) |
|
``` |
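
For a self-contained version, here is a minimal sketch of loading the model and generating a reply; the dtype/device settings and `max_new_tokens` value are illustrative assumptions, not prescribed by this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v5-v0.2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # picks up the saved bf16 weights
    device_map="auto",   # requires `accelerate`
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt=True appends the <|im_start|>assistant header
# so the model continues as the assistant.
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```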
|
|
|
# 🔄 Quantized versions
|
|
|
Quantized versions of this model are available.
|
|
|
## GGUF [@bartowski](https://huggingface.co/bartowski) |
|
|
|
- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF |
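
If you use the GGUF weights, a client such as [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) can run them with the ChatML format shown above. A minimal sketch; the quant filename (`Einstein-v5-v0.2-7B-Q4_K_M.gguf`) is an assumption, so check the GGUF repo for the exact file names:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The filename below is an assumption; see the GGUF repo for the exact quant names.
llm = Llama.from_pretrained(
    repo_id="bartowski/Einstein-v5-v0.2-7B-GGUF",
    filename="Einstein-v5-v0.2-7B-Q4_K_M.gguf",
    chat_format="chatml",  # matches the prompt template above
    n_ctx=8192,            # sequence length used during fine-tuning
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Briefly explain Newton's second law."},
    ]
)
print(out["choices"][0]["message"]["content"])
```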
|
|
|
## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski) |
|
|
|
- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2 |
|
|
|
|
|
# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v5-v0.2-7B) |
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |x| |
|
|AI2 Reasoning Challenge (25-Shot)|x| |
|
|HellaSwag (10-Shot) |x| |
|
|MMLU (5-Shot) |x| |
|
|TruthfulQA (0-shot) |x| |
|
|Winogrande (5-shot) |x| |
|
|GSM8k (5-shot) |x| |
|
|
|
# 🤖 Additional information about training |
|
|
|
This model was fully fine-tuned for 1 epoch.
|
|
|
The total number of training steps was 1124.
|
|
|
<details><summary>Loss graph</summary> |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/TkzKdxZZHznGjYLWiSmLS.png) |
|
</details><br> |
|
|
|
# 🤝 Acknowledgments |
|
|
|
Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model. |
|
|
|
Thanks to all the dataset authors mentioned in the datasets section. |
|
|
|
Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for the training framework used to build this model.
|
|
|
Thanks to the entire open-source AI community.
|
|
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
|
|
If you would like to support me: |
|
|
|
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi) |