---
license: apache-2.0
---

*This model was trained as part of a series of experiments testing the performance of pure DPO vs. SFT vs. ORPO, all supported by Unsloth/Hugging Face TRL.*

Note: Completely broken. Do not use.

**Benchmarks**

| Benchmark | Score |
|---|---|
| Average | 59.52 |
| ARC | 59.47 |
| HellaSwag | 82.42 |
| MMLU | 62.21 |
| TruthfulQA | 40.01 |
| Winogrande | 78.30 |
| GSM8K | 34.72 |

**Training Details**

- Duration: ~10-12 hours on one Kaggle T4 with Unsloth
- Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit
- Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k
- Rank: 8
- Alpha: 16
- Learning rate: 5e-6
- Beta: 0.1
- Batch size: 8
- Epochs: 1
- Learning rate scheduler: Linear

Prompt Format:

```
You are a helpful assistant.[INST] PROMPT [/INST]RESPONSE
```

(The start token `<s>` must be added manually; it is not added automatically.)

**WandB Reports**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/Tg3dknWsTvfqM96Fab2YJ.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/8DQ0WiypkVIJeK_Y18Wv0.png)

[Unsloth](https://github.com/unslothai/unsloth)
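The prompt format above can be assembled with a small helper. This is a minimal sketch, not part of the released model: the function name `build_prompt` and its defaults are illustrative, and it prepends the `<s>` start token by hand, as the note on the prompt format requires.

```python
def build_prompt(user_prompt: str,
                 system: str = "You are a helpful assistant.",
                 response: str = "") -> str:
    """Format an input for this model.

    Follows the card's template: SYSTEM[INST] PROMPT [/INST]RESPONSE,
    with the "<s>" start token prepended manually (it is not added
    automatically by the tokenizer, per the model card).
    """
    return f"<s>{system}[INST] {user_prompt} [/INST]{response}"


# Example: an inference-time prompt leaves RESPONSE empty so the
# model generates the text after [/INST].
prompt = build_prompt("What is DPO?")
print(prompt)
```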
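For readers comparing the DPO run against the SFT and ORPO variants, the role of the Beta: 0.1 hyperparameter can be illustrated with the standard per-example DPO objective. This is a hedged sketch of the textbook formula, not the training code actually used; the function name and arguments are hypothetical.

```python
import math


def dpo_loss(policy_chosen_logp: float,
             policy_rejected_logp: float,
             ref_chosen_logp: float,
             ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log(sigmoid(beta * margin)).

    The margin is how much more the policy prefers the chosen response
    over the rejected one, relative to the frozen reference model.
    beta (0.1 in this run) scales that margin: smaller beta tolerates
    larger drift from the reference before the loss saturates.
    """
    margin = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))


# With zero margin the loss is log(2); a positive margin lowers it.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))
print(dpo_loss(-1.0, -5.0, -1.0, -1.0))
```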