leaderboard-pr-bot committed
Commit: e93bce2
1 Parent(s): e6b4724

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
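
For maintainers who want to script this kind of update themselves, below is a minimal sketch of opening a metadata PR with the `metadata_update` helper from `huggingface_hub`. This illustrates the general mechanism only, not necessarily this bot's exact implementation; the values are copied from the results added in this PR, and only the first benchmark entry is spelled out.

```python
# Sketch: merge model-index metadata into a model card and open a PR.
# Assumption: this mirrors what a leaderboard bot might do; it is not
# taken from the bot's source code.
from huggingface_hub import metadata_update

metadata = {
    "model-index": [{
        "name": "Mixtral-8x7B-MoE-RP-Story",
        "results": [{
            "task": {"type": "text-generation", "name": "Text Generation"},
            "dataset": {
                "name": "AI2 Reasoning Challenge (25-Shot)",
                "type": "ai2_arc",
                "config": "ARC-Challenge",
                "split": "test",
                "args": {"num_few_shot": 25},
            },
            "metrics": [
                {"type": "acc_norm", "value": 51.54, "name": "normalized accuracy"}
            ],
            "source": {
                "url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story",
                "name": "Open LLM Leaderboard",
            },
        }],  # ...the remaining five benchmark entries elided for brevity
    }]
}

# create_pr=True opens a pull request instead of committing to main.
metadata_update(
    "Undi95/Mixtral-8x7B-MoE-RP-Story",
    metadata,
    create_pr=True,
    commit_message="Adding Evaluation Results",
)
```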

Files changed (1): README.md (+117 -1)
README.md CHANGED
@@ -3,6 +3,109 @@ license: cc-by-nc-4.0
 tags:
 - not-for-all-audiences
 - nsfw
+model-index:
+- name: Mixtral-8x7B-MoE-RP-Story
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 51.54
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 70.0
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 43.04
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 41.53
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 67.32
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 9.93
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mixtral-8x7B-MoE-RP-Story
+      name: Open LLM Leaderboard
 ---
 
 Mixtral-8x7B-MoE-RP-Story is a model made primarily for chatting, RP (Roleplay) and storywriting.
@@ -31,4 +134,17 @@ The list of model used and their activator/theme can be found [here](https://hug
 
 Using Bagel as a base theoretically gives us a lot of different prompting systems; you can see all the available prompt formats [here](https://huggingface.co/jondurbin/bagel-7b-v0.1#prompt-formatting).
 
-If you want to support me, you can [here](https://ko-fi.com/undiai).
+If you want to support me, you can [here](https://ko-fi.com/undiai).
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mixtral-8x7B-MoE-RP-Story)
+
+| Metric                            | Value |
+|-----------------------------------|------:|
+| Avg.                              | 47.23 |
+| AI2 Reasoning Challenge (25-Shot) | 51.54 |
+| HellaSwag (10-Shot)               | 70.00 |
+| MMLU (5-Shot)                     | 43.04 |
+| TruthfulQA (0-shot)               | 41.53 |
+| Winogrande (5-shot)               | 67.32 |
+| GSM8k (5-shot)                    |  9.93 |
+
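
For reference, the Avg. row is simply the arithmetic mean of the six benchmark scores. A quick sanity check in plain Python, using the values from the table above:

```python
# Verify that the reported average is the mean of the six scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 51.54,
    "HellaSwag (10-Shot)": 70.00,
    "MMLU (5-Shot)": 43.04,
    "TruthfulQA (0-shot)": 41.53,
    "Winogrande (5-shot)": 67.32,
    "GSM8k (5-shot)": 9.93,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # prints 47.23, matching the Avg. row
```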