Adding Evaluation Results

#8
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -74,3 +74,17 @@ Apache 2.0
 
 ---
 
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 46.17 |
+ | ARC (25-shot)       | 54.1  |
+ | HellaSwag (10-shot) | 77.91 |
+ | MMLU (5-shot)       | 54.49 |
+ | TruthfulQA (0-shot) | 49.36 |
+ | Winogrande (5-shot) | 70.17 |
+ | GSM8K (5-shot)      | 9.93  |
+ | DROP (3-shot)       | 7.24  |
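
For readers who want the per-benchmark details behind this summary table, here is a minimal sketch of loading the linked details dataset with the Hugging Face `datasets` library. It assumes `datasets` is installed; the leaderboard details repos expose one configuration per benchmark/run, so the sketch lists the configs first, and the choice of `configs[0]` is just illustrative.

```python
# Minimal sketch: load the detailed leaderboard results linked above.
# Assumes the Hugging Face `datasets` library (pip install datasets);
# picking configs[0] below is illustrative, not prescriptive.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B"

# The details repo holds multiple configurations (one per benchmark/run),
# so list them before loading anything.
configs = get_dataset_config_names(repo)
print(configs)

# Load one configuration as an example; substitute whichever benchmark you need.
details = load_dataset(repo, configs[0])
print(details)
```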