---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- phi
- phi2
- einstein
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: microsoft/phi-2
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
model-index:
- name: Einstein-v4-phi2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 59.98
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 74.07
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 56.89
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 45.8
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/Z32gXhbukH-L7SB1TQ6Sb.png)

# 🔬 Einstein-v4-phi2

This model is a full fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on diverse datasets.

The model was fine-tuned on `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This model's training was sponsored by [sablo.ai](https://sablo.ai).
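Below is a minimal loading sketch with 🤗 Transformers; the dtype and device placement are illustrative choices, not requirements. The `tokenizer` and `model` objects defined here are reused in the chat-template example further down.

```python
# Minimal loading sketch (assumes a recent transformers, torch, and accelerate
# are installed; adjust dtype and device placement to your hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v4-phi2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training used bf16
    device_map="auto",           # requires accelerate
)
```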
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
datasets:
  - path: data/merged_all.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: data/capybara_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/synthia-v1.3_sharegpt_12500.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/slimorca_dedup_filtered_95k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.005
output_dir: ./Einstein-v4-phi2-model

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v4-phi2

save_safetensors: true

gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_torch # adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 2 # changed
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 4
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:

special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|endoftext|>"
tokens:
  - "<|im_start|>"
```

</details>
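As the config shows, training uses the ChatML format: `<|im_end|>` becomes the EOS token, `<|endoftext|>` the pad token, and `<|im_start|>` is registered as an additional token. A quick sketch to check that these land in the released tokenizer (the expected values come from the config above, not from a separate verification):

```python
# Sketch: inspect the ChatML special tokens registered by the training config.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Einstein-v4-phi2")

print(tokenizer.eos_token)                              # expected: <|im_end|>
print(tokenizer.pad_token)                              # expected: <|endoftext|>
print(tokenizer.convert_tokens_to_ids("<|im_start|>"))  # id of the added token
```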

# 💬 Prompt Template

You can use this prompt template while using the model:

### ChatML

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```

This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]

gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

# 🔄 Quantized versions

Quantized versions of this model are available.

## GGUF [@bartowski](https://hf.co/bartowski):

- https://huggingface.co/bartowski/Einstein-v4-phi2-GGUF

## Exl2 [@bartowski](https://hf.co/bartowski):

- https://huggingface.co/bartowski/Einstein-v4-phi2-exl2

# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v4-phi2).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |60.77|
|AI2 Reasoning Challenge (25-Shot)|59.98|
|HellaSwag (10-Shot)              |74.07|
|MMLU (5-Shot)                    |56.89|
|TruthfulQA (0-shot)              |45.80|
|Winogrande (5-shot)              |73.88|
|GSM8k (5-shot)                   |53.98|

# 🤖 Additional information about training

This model was fully fine-tuned for 2 epochs; the total number of training steps was 2178.
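For context, the per-step effective batch size implied by the config above, assuming plain data parallelism across all 9 GPUs (8x RTX 3090 + 1x RTX A6000), would be roughly:

```python
# Back-of-the-envelope effective batch size implied by the config; this is an
# estimate assuming data parallelism over all 9 GPUs, not a value reported
# for the actual run.
micro_batch_size = 3
gradient_accumulation_steps = 4
num_gpus = 9  # 8x RTX 3090 + 1x RTX A6000

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 108 packed sequences per optimizer step
```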
<details><summary>Loss graph</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/qsoXp0z2AooZjij95lpRU.png)

</details>

# 🤝 Acknowledgments

Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for providing the training framework used to make this model.

Thanks to the entire open source AI community.

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)