Adding Evaluation Results #2
opened by leaderboard-pr-bot

README.md CHANGED
@@ -213,4 +213,17 @@ The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based
 ---
 
 # Licenese
-[The MIT license](https://opensource.org/licenses/MIT)
+[The MIT license](https://opensource.org/licenses/MIT)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 27.58                     |
+| ARC (25-shot)         | 29.18                     |
+| HellaSwag (10-shot)   | 43.73                     |
+| MMLU (5-shot)         | 23.1                      |
+| TruthfulQA (0-shot)   | 45.0                      |
+| Winogrande (5-shot)   | 51.85                     |
+| GSM8K (5-shot)        | 0.0                       |
+| DROP (3-shot)         | 0.19                      |
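For anyone who wants to verify the summary row or drill into the per-sample records, here is a minimal Python sketch. It assumes the Avg. row is the plain mean of the seven benchmark scores (the numbers above are consistent with that), and that the linked details dataset follows the usual open-llm-leaderboard layout; the config and split names passed to `load_dataset` are assumptions, not something stated in this PR.

```python
# A minimal sketch (not part of the PR) with two sanity checks on the
# table above. The `load_dataset` config and split names are assumptions
# based on how open-llm-leaderboard details repositories are typically
# laid out; check the dataset card if they do not resolve.
from statistics import mean

from datasets import load_dataset

# The reported Avg. is the unweighted mean of the seven benchmark scores.
scores = {
    "ARC (25-shot)": 29.18,
    "HellaSwag (10-shot)": 43.73,
    "MMLU (5-shot)": 23.1,
    "TruthfulQA (0-shot)": 45.0,
    "Winogrande (5-shot)": 51.85,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 0.19,
}
print(round(mean(scores.values()), 2))  # 27.58, matching the Avg. row

# Per-sample records behind the summary numbers.
details = load_dataset(
    "open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b",
    "harness_arc_challenge_25",  # assumed config name for ARC (25-shot)
    split="latest",              # assumed split label
)
print(details)
```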