Adding Evaluation Results
#1 by leaderboard-pr-bot - opened

README.md CHANGED
```diff
@@ -1,5 +1,7 @@
 ---
-base_model: mistralai/Mixtral-8x7B-v0.1
+language:
+- en
+license: apache-2.0
 tags:
 - Mixtral
 - instruct
@@ -8,14 +10,12 @@ tags:
 - gpt4
 - synthetic data
 - distillation
+datasets:
+- teknium/OpenHermes-2.5
+base_model: mistralai/Mixtral-8x7B-v0.1
 model-index:
 - name: Nous-Hermes-2-Mixtral-8x7B-SFT
   results: []
-license: apache-2.0
-language:
-- en
-datasets:
-- teknium/OpenHermes-2.5
 ---
 
 # Nous Hermes 2 - Mixtral 8x7B - SFT
@@ -247,3 +247,17 @@ for chat in prompts:
 (other sizes available in Qeternity's repos)
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Hermes-2-Mixtral-8x7B-SFT)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |72.07|
+|AI2 Reasoning Challenge (25-Shot)|69.71|
+|HellaSwag (10-Shot)              |86.74|
+|MMLU (5-Shot)                    |72.21|
+|TruthfulQA (0-shot)              |51.22|
+|Winogrande (5-shot)              |82.95|
+|GSM8k (5-shot)                   |69.60|
+
```
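The "Detailed results" link added above points at a per-task results dataset on the Hub. As a minimal sketch of how those numbers could be pulled programmatically (not part of the PR itself), assuming the details repository keeps its evaluation-harness output as `results_*.json` files at the repo root, each with a top-level `results` mapping:

```python
# Sketch: fetch the per-task numbers behind the table above.
# Assumptions (not guaranteed by the PR): the details dataset stores harness
# output as results_*.json at the repo root, and each file carries a
# top-level "results" dict keyed by benchmark name.
import json

from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "open-llm-leaderboard/details_NousResearch__Nous-Hermes-2-Mixtral-8x7B-SFT"

# List files in the dataset repo and keep the aggregated result snapshots.
result_files = [
    f for f in list_repo_files(REPO_ID, repo_type="dataset")
    if f.startswith("results_") and f.endswith(".json")
]

# Pick the lexically latest snapshot (file names embed a timestamp in this
# assumed layout) and download it locally.
path = hf_hub_download(REPO_ID, filename=sorted(result_files)[-1], repo_type="dataset")
with open(path) as fh:
    report = json.load(fh)

# Print each benchmark's metrics, e.g. acc_norm for ARC and HellaSwag.
for task, metrics in report["results"].items():
    print(task, metrics)
```

If the repository uses a different layout, the `list_repo_files` output shows what is actually available; the per-sample details can also be loaded with `datasets.load_dataset` using one of the repo's named configurations.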