Adding Evaluation Results #29
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -101,4 +101,17 @@ Please cite the repo if you use the data, method or code in this repo.
 journal={arXiv preprint arXiv:2306.08568},
 year={2023}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardCoder-Python-34B-V1.0)
+
+| Metric                | Value |
+|-----------------------|---------------------------|
+| Avg.                  | 46.83 |
+| ARC (25-shot)         | 52.13 |
+| HellaSwag (10-shot)   | 74.78 |
+| MMLU (5-shot)         | 49.15 |
+| TruthfulQA (0-shot)   | 48.85 |
+| Winogrande (5-shot)   | 68.35 |
+| GSM8K (5-shot)        | 9.48  |
+| DROP (3-shot)         | 25.06 |
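For reference (this note is not part of the bot's diff): assuming the leaderboard's "Avg." row is the plain arithmetic mean of the seven benchmark scores, the table above is self-consistent. A minimal sketch:

```python
# Sanity check (assumption: "Avg." = arithmetic mean of the seven benchmarks).
scores = {
    "ARC (25-shot)": 52.13,
    "HellaSwag (10-shot)": 74.78,
    "MMLU (5-shot)": 49.15,
    "TruthfulQA (0-shot)": 48.85,
    "Winogrande (5-shot)": 68.35,
    "GSM8K (5-shot)": 9.48,
    "DROP (3-shot)": 25.06,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 46.83, matching the table
```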