Adding Evaluation Results #3
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -87,4 +87,17 @@ The base LLaMA model is trained on various data, some of which may contain offen
 journal={CoRR},
 year={2021}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lilloukas__GPlatty-30B)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 58.87                     |
+| ARC (25-shot)         | 65.78                     |
+| HellaSwag (10-shot)   | 84.79                     |
+| MMLU (5-shot)         | 63.49                     |
+| TruthfulQA (0-shot)   | 52.45                     |
+| Winogrande (5-shot)   | 80.98                     |
+| GSM8K (5-shot)        | 13.87                     |
+| DROP (3-shot)         | 50.73                     |
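For readers who want to inspect the per-benchmark outputs behind this table, here is a minimal sketch of browsing the linked details dataset with the `huggingface_hub` and `datasets` libraries. Only the dataset repo id (`open-llm-leaderboard/details_lilloukas__GPlatty-30B`) comes from this PR; the config discovery step and the note about `trust_remote_code` are general assumptions about how leaderboard details repos are laid out, not part of the change itself.

```python
# Sketch: inspect the detailed evaluation results linked in the diff above.
# The repo id is taken from the PR; config and split names are discovered at
# runtime rather than assumed.
from datasets import get_dataset_config_names, load_dataset
from huggingface_hub import list_repo_files

REPO_ID = "open-llm-leaderboard/details_lilloukas__GPlatty-30B"

# List the raw files the leaderboard bot uploaded (per-benchmark result dumps).
for path in list_repo_files(REPO_ID, repo_type="dataset"):
    print(path)

# Each benchmark is typically exposed as its own config; list them, then load one.
configs = get_dataset_config_names(REPO_ID)
print(configs)

# Returns a DatasetDict keyed by split. Note: older details repos may ship a
# loading script, in which case recent `datasets` versions require
# trust_remote_code=True for these calls.
details = load_dataset(REPO_ID, configs[0])
print(details)
```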