leaderboard-pr-bot committed on
Commit 0d98a2e (1 parent: d6024b9)

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
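Once merged, the `model-index` block this PR adds to the README front matter becomes machine-readable card metadata. As a rough illustration (not part of the PR itself), here is a minimal sketch of reading those results back with the `huggingface_hub` model-card API; the repo id is taken from the leaderboard URLs in the diff, and the `ModelCard.load` / `eval_results` usage is an assumption about that library's current interface:

```python
# Minimal sketch: read the evaluation results added by this PR back out of
# the model card metadata. Assumes huggingface_hub's ModelCard / EvalResult
# API and the repo id one-man-army/UNA-34Beagles-32K-bf16-v1.
from huggingface_hub import ModelCard

card = ModelCard.load("one-man-army/UNA-34Beagles-32K-bf16-v1")

# card.data.eval_results holds one entry per (task, dataset, metric)
# combination in the model-index block below.
for result in card.data.eval_results or []:
    print(result.dataset_name, result.metric_type, result.metric_value)
```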

Files changed (1)
  1. README.md +120 -3
README.md CHANGED
@@ -1,7 +1,5 @@
 ---
 license: other
-license_name: yi-license
-license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
 datasets:
 - ai2_arc
 - unalignment/spicy-3.1
@@ -27,11 +25,116 @@ datasets:
 - Intel/orca_dpo_pairs
 - unalignment/toxic-dpo-v0.1
 - jondurbin/truthy-dpo-v0.1
-- allenai/ultrafeedback_binarized_cleaned
+- allenai/ultrafeedback_binarized_cleaned
 - Squish42/bluemoon-fandom-1-1-rp-cleaned
 - LDJnr/Capybara
 - JULIELab/EmoBank
 - kingbri/PIPPA-shareGPT
+license_name: yi-license
+license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
+model-index:
+- name: UNA-34Beagles-32K-bf16-v1
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 73.55
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=one-man-army/UNA-34Beagles-32K-bf16-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 85.93
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=one-man-army/UNA-34Beagles-32K-bf16-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 76.45
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=one-man-army/UNA-34Beagles-32K-bf16-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 73.55
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=one-man-army/UNA-34Beagles-32K-bf16-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 82.95
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=one-man-army/UNA-34Beagles-32K-bf16-v1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 60.05
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=one-man-army/UNA-34Beagles-32K-bf16-v1
+      name: Open LLM Leaderboard
 ---
 
 # A bagel, with everything
@@ -179,3 +282,17 @@ If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `t
 {instruction} [/INST]
 ```
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |75.41|
+|AI2 Reasoning Challenge (25-Shot)|73.55|
+|HellaSwag (10-Shot) |85.93|
+|MMLU (5-Shot) |76.45|
+|TruthfulQA (0-shot) |73.55|
+|Winogrande (5-shot) |82.95|
+|GSM8k (5-shot) |60.05|
+
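For reference, the Avg. row in the added table is just the arithmetic mean of the six benchmark scores. A quick sanity check, with the values copied from the table above:

```python
# Sanity check: the leaderboard "Avg." equals the plain mean of the six
# benchmark scores reported in the README table added by this PR.
scores = {
    "ARC (25-shot)": 73.55,
    "HellaSwag (10-shot)": 85.93,
    "MMLU (5-shot)": 76.45,
    "TruthfulQA (0-shot)": 73.55,
    "Winogrande (5-shot)": 82.95,
    "GSM8k (5-shot)": 60.05,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 75.41, matching the Avg. row
```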