Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -151,3 +151,17 @@ You can find all the work I have done trying on this [Pastebin](https://pastebin
 151   Special thanks to Sushi, [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and [Charles Goddard](https://github.com/cg123) for his amazing tool.
 152  
 153   If you want to support me, you can [here](https://ko-fi.com/undiai).
 154 +
 155 + # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 156 + Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11)
 157 +
 158 + | Metric              | Value |
 159 + |---------------------|------:|
 160 + | Avg.                | 53.01 |
 161 + | ARC (25-shot)       | 64.42 |
 162 + | HellaSwag (10-shot) | 83.93 |
 163 + | MMLU (5-shot)       | 63.82 |
 164 + | TruthfulQA (0-shot) | 56.68 |
 165 + | Winogrande (5-shot) | 77.74 |
 166 + | GSM8K (5-shot)      | 14.94 |
 167 + | DROP (3-shot)       |  9.57 |
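As a sanity check on the diff above: assuming the leaderboard's "Avg." row is the unweighted mean of all seven benchmark scores (which matches the figures in the table), it can be reproduced with a few lines of Python. The dictionary below simply restates the table; nothing here comes from the leaderboard API itself.

```python
# Benchmark scores copied from the table above (Open LLM Leaderboard).
scores = {
    "ARC (25-shot)": 64.42,
    "HellaSwag (10-shot)": 83.93,
    "MMLU (5-shot)": 63.82,
    "TruthfulQA (0-shot)": 56.68,
    "Winogrande (5-shot)": 77.74,
    "GSM8K (5-shot)": 14.94,
    "DROP (3-shot)": 9.57,
}

# Assumption: "Avg." is the plain arithmetic mean of the seven benchmarks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 53.01, matching the Avg. row
```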