leaderboard-pr-bot committed aa5b7a3 (1 parent: ae87025): Adding Evaluation Results
This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
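For reference, PRs like this are opened through the Hub's commit API rather than by pushing to the repo's main branch. The space's actual implementation is not shown here; the sketch below only illustrates the general mechanism with `huggingface_hub` (`create_commit` with `create_pr=True`), and the repo id, token handling, and appended text are placeholders.

```python
from huggingface_hub import HfApi, CommitOperationAdd, hf_hub_download

api = HfApi()  # assumes a write token is available (HF_TOKEN or huggingface-cli login)

# Placeholder: the model repo whose card should receive the evaluation results.
repo_id = "openaccess-ai-collective/openhermes-2_5-dpo-no-robots"

# Fetch the current model card and append a (placeholder) results section.
readme_path = hf_hub_download(repo_id=repo_id, filename="README.md")
with open(readme_path, encoding="utf-8") as f:
    updated_readme = f.read() + "\n# [Open LLM Leaderboard Evaluation Results](...)\n"

# Propose the edit as a pull request instead of committing to main.
api.create_commit(
    repo_id=repo_id,
    operations=[CommitOperationAdd(path_in_repo="README.md",
                                   path_or_fileobj=updated_readme.encode("utf-8"))],
    commit_message="Adding Evaluation Results",
    create_pr=True,
)
```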
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
 license: apache-2.0
-base_model: teknium/OpenHermes-2.5-Mistral-7B
 tags:
 - generated_from_trainer
-model-index:
-- name: qlora-out
-  results: []
 datasets:
 - winglian/no_robots_rlhf
 - HuggingFaceH4/no_robots
+base_model: teknium/OpenHermes-2.5-Mistral-7B
+model-index:
+- name: qlora-out
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
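The first hunk only reorders the existing YAML front matter, moving `base_model` and the `model-index` block below `datasets`; no metadata values change. If you would rather review or apply this kind of front-matter edit programmatically than merge it by hand, a minimal sketch with `huggingface_hub`'s `ModelCard` helper follows, assuming a recent `huggingface_hub`; the repo id is inferred from the PR's details link and everything else is illustrative.

```python
from huggingface_hub import ModelCard

# Repo id inferred from the PR's "Detailed results" link; adjust to your own model.
repo_id = "openaccess-ai-collective/openhermes-2_5-dpo-no-robots"

# Load README.md from the Hub and parse its YAML front matter.
card = ModelCard.load(repo_id)
metadata = card.data.to_dict()
print(metadata.get("base_model"))  # "teknium/OpenHermes-2.5-Mistral-7B"
print(metadata.get("datasets"))    # ["winglian/no_robots_rlhf", "HuggingFaceH4/no_robots"]

# After editing card.data or card.text, propose the change as a PR
# rather than committing straight to main.
card.push_to_hub(repo_id, create_pr=True, commit_message="Update model card metadata")
```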
@@ -46,4 +46,17 @@ The following hyperparameters were used during training:
 - Transformers 4.35.2
 - Pytorch 2.0.1+cu118
 - Datasets 2.15.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |66.40|
+|AI2 Reasoning Challenge (25-Shot)|64.93|
+|HellaSwag (10-Shot)              |84.30|
+|MMLU (5-Shot)                    |63.86|
+|TruthfulQA (0-shot)              |52.12|
+|Winogrande (5-shot)              |77.90|
+|GSM8k (5-shot)                   |55.27|
+
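The summary table is the only thing the PR adds to the card; the per-task records behind it live in the linked details dataset and can be loaded with `datasets`. In the sketch below the config name follows the `harness_<task>_<nshot>` pattern these detail repos use, but it and the split name are assumptions here; check the dataset card for the exact names. Note also that the "Avg." row is simply the arithmetic mean of the six benchmark scores.

```python
from datasets import load_dataset

# Details repo from the link above. The config and split names are assumptions
# based on the usual leaderboard layout; see the dataset card for the real ones.
details = load_dataset(
    "open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots",
    "harness_gsm8k_5",
    split="latest",
)
print(details[0])

# Sanity-check the summary table: "Avg." is the mean of the six benchmark scores.
scores = [64.93, 84.30, 63.86, 52.12, 77.90, 55.27]
print(round(sum(scores) / len(scores), 2))  # -> 66.4
```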