---
license: apache-2.0
---

This model is a fine-tune of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open-source [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lvkaokao__mistral-7b-finetuned-orca-dpo-v2).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 59.06 |
| ARC (25-shot)       | 66.21 |
| HellaSwag (10-shot) | 83.64 |
| MMLU (5-shot)       | 62.37 |
| TruthfulQA (0-shot) | 59.65 |
| Winogrande (5-shot) | 78.14 |
| GSM8K (5-shot)      | 19.56 |
| DROP (3-shot)       | 43.84 |
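
## Usage

A minimal loading sketch with Hugging Face `transformers`. The repo id below is an assumption inferred from the leaderboard results link above; adjust it to the actual model repository if it differs.

```python
# Minimal quick-start sketch; repo id inferred from the leaderboard link (assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lvkaokao/mistral-7b-finetuned-orca-dpo-v2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 7B model fits on one GPU
    device_map="auto",
)

prompt = "Explain the difference between supervised fine-tuning and DPO in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```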