leaderboard-pr-bot committed on
Commit 5573151
1 Parent(s): da68c64

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
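The bot works by appending a `model-index` block to the YAML front matter of the model card's README, as shown in the diff below. As a minimal sketch of how those metrics could be read back out of a card (this assumes the third-party PyYAML package is installed; the helper name `read_model_index` is illustrative, not part of any library):

```python
import yaml  # PyYAML; assumed to be available


def read_model_index(readme_text: str) -> dict:
    """Extract dataset -> metric-value pairs from a model card's front matter."""
    # The YAML front matter sits between the first two '---' markers.
    _, front_matter, _ = readme_text.split("---", 2)
    meta = yaml.safe_load(front_matter)
    out = {}
    for entry in meta.get("model-index", []):
        for result in entry.get("results", []):
            dataset_name = result["dataset"]["name"]
            for metric in result.get("metrics", []):
                out[dataset_name] = metric["value"]
    return out
```

For example, running it on a card whose `model-index` lists MMLU with `value: 65.22` returns `{"MMLU (5-Shot)": 65.22}`.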

Files changed (1)
  1. README.md +128 -11
README.md CHANGED
@@ -1,17 +1,120 @@
 ---
-license: cc-by-nc-4.0
-base_model: mlabonne/Marcoro14-7B-slerp
-datasets:
-- argilla/distilabel-intel-orca-dpo-pairs
 language:
-- en
+- en
+license: cc-by-nc-4.0
 tags:
-- distilabel
-- dpo
-- rlaif
-- rlhf
-- merge
-- mergekit
+- distilabel
+- dpo
+- rlaif
+- rlhf
+- merge
+- mergekit
+datasets:
+- argilla/distilabel-intel-orca-dpo-pairs
+base_model: mlabonne/Marcoro14-7B-slerp
+model-index:
+- name: distilabeled-Marcoro14-7B-slerp
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 70.73
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 87.47
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 65.22
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 65.1
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 82.08
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 71.19
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=argilla/distilabeled-Marcoro14-7B-slerp
+      name: Open LLM Leaderboard
 ---
 # ⚗️ distilabeled Marcoro14 7B Slerp
 
@@ -72,3 +175,17 @@ We'd like to thank the amazing open community and in particular:
 * The Intel team for publishing a great open dataset and show how well it worked in the first place
 * Teknium and NousResearch for their awesome work and models.
 * Maxime for sharing such great resources.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |73.63|
+|AI2 Reasoning Challenge (25-Shot)|70.73|
+|HellaSwag (10-Shot)              |87.47|
+|MMLU (5-Shot)                    |65.22|
+|TruthfulQA (0-shot)              |65.10|
+|Winogrande (5-shot)              |82.08|
+|GSM8k (5-shot)                   |71.19|
+
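As a quick sanity check on the summary table, the `Avg.` row is the plain arithmetic mean of the six benchmark scores, rounded to two decimals (the values below are copied from the table; a standalone sketch):

```python
# Benchmark scores from the Open LLM Leaderboard summary table.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 70.73,
    "HellaSwag (10-Shot)": 87.47,
    "MMLU (5-Shot)": 65.22,
    "TruthfulQA (0-shot)": 65.10,
    "Winogrande (5-shot)": 82.08,
    "GSM8k (5-shot)": 71.19,
}

# The leaderboard average is the unweighted mean of the six metrics.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 73.63
```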