leaderboard-pr-bot committed on
Commit
2f20cc3
1 Parent(s): 4527adf

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
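For reference, once the `model-index` block added by this PR is merged, the same scores become machine-readable metadata on the Hub. A minimal sketch of reading them back, assuming the current `huggingface_hub` ModelCard/EvalResult interface and network access (the repo id is taken from the result URLs in the diff below):

```python
from huggingface_hub import ModelCard

# Repo id taken from the source URLs in this PR's model-index entries.
card = ModelCard.load("ContextualAI/archangel_sft-kto_llama13b")

# Each (task, dataset, metric) entry of the model-index block is parsed
# into one EvalResult; eval_results is None if no model-index is present.
for res in card.data.eval_results or []:
    print(f"{res.dataset_name}: {res.metric_type} = {res.metric_value}")
```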

Files changed (1)
  1. README.md +124 -8
README.md CHANGED
@@ -1,13 +1,7 @@
  ---
- license: apache-2.0
- datasets:
- - stanfordnlp/SHP
- - Anthropic/hh-rlhf
- - OpenAssistant/oasst1
  language:
  - en
- metrics:
- - accuracy
+ license: apache-2.0
  tags:
  - human feedback
  - rlhf
@@ -17,6 +11,115 @@ tags:
  - halos
  - dpo
  - rl
+ datasets:
+ - stanfordnlp/SHP
+ - Anthropic/hh-rlhf
+ - OpenAssistant/oasst1
+ metrics:
+ - accuracy
+ model-index:
+ - name: archangel_sft-kto_llama13b
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 56.14
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ContextualAI/archangel_sft-kto_llama13b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 80.8
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ContextualAI/archangel_sft-kto_llama13b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 47.84
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ContextualAI/archangel_sft-kto_llama13b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 39.42
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ContextualAI/archangel_sft-kto_llama13b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 76.16
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ContextualAI/archangel_sft-kto_llama13b
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 16.83
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ContextualAI/archangel_sft-kto_llama13b
+       name: Open LLM Leaderboard
  ---

  ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06)
@@ -54,4 +157,17 @@ If you find this repo or the technical paper useful in your research, please fee
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
  }
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ContextualAI__archangel_sft-kto_llama13b)
+
+ | Metric                          |Value|
+ |---------------------------------|----:|
+ |Avg.                             |52.87|
+ |AI2 Reasoning Challenge (25-Shot)|56.14|
+ |HellaSwag (10-Shot)              |80.80|
+ |MMLU (5-Shot)                    |47.84|
+ |TruthfulQA (0-shot)              |39.42|
+ |Winogrande (5-shot)              |76.16|
+ |GSM8k (5-shot)                   |16.83|
+
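As a quick sanity check on the added table, the Avg. row is the arithmetic mean of the six benchmark scores; a minimal sketch using only the numbers shown in this diff:

```python
# Benchmark scores copied from the table added in this PR.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 56.14,
    "HellaSwag (10-Shot)": 80.80,
    "MMLU (5-Shot)": 47.84,
    "TruthfulQA (0-shot)": 39.42,
    "Winogrande (5-shot)": 76.16,
    "GSM8k (5-shot)": 16.83,
}

avg = sum(scores.values()) / len(scores)
print(f"average = {avg:.3f}")  # 52.865, reported as 52.87 in the Avg. row
```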