lingzhiai committed
Commit bd851f1 · verified · 1 Parent(s): ba00eb0

Upload 33 files

README.md ADDED
@@ -0,0 +1 @@
+ # Lingzhi-0.5B-chat
Results/BBH/results.json ADDED
The diff for this file is too large to render. See raw diff
 
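The results files below follow the lm-evaluation-harness output schema: per-task entries under a "results" key, each with "acc,none" / "acc_norm,none" metrics and an "alias". As a minimal sketch (the file path matches this commit; the schema is assumed from the diff rendered below), one could summarize a results file like this:

```python
import json

# Load one of the uploaded results files and print per-task accuracy.
# Assumes the lm-evaluation-harness layout visible in the diff below.
with open("Results/CEVAL-VALID/results.json", "r", encoding="utf-8") as f:
    data = json.load(f)

for task, metrics in sorted(data["results"].items()):
    acc = metrics.get("acc,none")
    if acc is not None:
        print(f"{task}: {acc:.4f}")
```

For this file, the aggregate "ceval-valid" entry would print an accuracy of about 0.5505.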
Results/CEVAL-VALID/results.json ADDED
@@ -0,0 +1,2701 @@
+ {
+ "results": {
+ "ceval-valid": {
+ "acc_norm,none": 0.5505200594353641,
+ "acc_norm_stderr,none": 0.012961996609689193,
+ "acc,none": 0.5505200594353641,
+ "acc_stderr,none": 0.012961996609689193,
+ "alias": "ceval-valid"
+ },
+ "ceval-valid_accountant": {
+ "acc,none": 0.4897959183673469,
+ "acc_stderr,none": 0.07215375318230074,
+ "acc_norm,none": 0.4897959183673469,
+ "acc_norm_stderr,none": 0.07215375318230074,
+ "alias": " - ceval-valid_accountant"
+ },
+ "ceval-valid_advanced_mathematics": {
+ "acc,none": 0.3684210526315789,
+ "acc_stderr,none": 0.11369720523522558,
+ "acc_norm,none": 0.3684210526315789,
+ "acc_norm_stderr,none": 0.11369720523522558,
+ "alias": " - ceval-valid_advanced_mathematics"
+ },
+ "ceval-valid_art_studies": {
+ "acc,none": 0.3939393939393939,
+ "acc_stderr,none": 0.08637692614387409,
+ "acc_norm,none": 0.3939393939393939,
+ "acc_norm_stderr,none": 0.08637692614387409,
+ "alias": " - ceval-valid_art_studies"
+ },
+ "ceval-valid_basic_medicine": {
+ "acc,none": 0.47368421052631576,
+ "acc_stderr,none": 0.11768778828946262,
+ "acc_norm,none": 0.47368421052631576,
+ "acc_norm_stderr,none": 0.11768778828946262,
+ "alias": " - ceval-valid_basic_medicine"
+ },
+ "ceval-valid_business_administration": {
+ "acc,none": 0.48484848484848486,
+ "acc_stderr,none": 0.08834775598250456,
+ "acc_norm,none": 0.48484848484848486,
+ "acc_norm_stderr,none": 0.08834775598250456,
+ "alias": " - ceval-valid_business_administration"
+ },
+ "ceval-valid_chinese_language_and_literature": {
+ "acc,none": 0.34782608695652173,
+ "acc_stderr,none": 0.10154334054280735,
+ "acc_norm,none": 0.34782608695652173,
+ "acc_norm_stderr,none": 0.10154334054280735,
+ "alias": " - ceval-valid_chinese_language_and_literature"
+ },
+ "ceval-valid_civil_servant": {
+ "acc,none": 0.44680851063829785,
+ "acc_stderr,none": 0.07330262843906579,
+ "acc_norm,none": 0.44680851063829785,
+ "acc_norm_stderr,none": 0.07330262843906579,
+ "alias": " - ceval-valid_civil_servant"
+ },
+ "ceval-valid_clinical_medicine": {
+ "acc,none": 0.36363636363636365,
+ "acc_stderr,none": 0.10497277621629558,
+ "acc_norm,none": 0.36363636363636365,
+ "acc_norm_stderr,none": 0.10497277621629558,
+ "alias": " - ceval-valid_clinical_medicine"
+ },
+ "ceval-valid_college_chemistry": {
+ "acc,none": 0.375,
+ "acc_stderr,none": 0.10094660663590604,
+ "acc_norm,none": 0.375,
+ "acc_norm_stderr,none": 0.10094660663590604,
+ "alias": " - ceval-valid_college_chemistry"
+ },
+ "ceval-valid_college_economics": {
+ "acc,none": 0.4727272727272727,
+ "acc_stderr,none": 0.06794008776080909,
+ "acc_norm,none": 0.4727272727272727,
+ "acc_norm_stderr,none": 0.06794008776080909,
+ "alias": " - ceval-valid_college_economics"
+ },
+ "ceval-valid_college_physics": {
+ "acc,none": 0.42105263157894735,
+ "acc_stderr,none": 0.11637279966159299,
+ "acc_norm,none": 0.42105263157894735,
+ "acc_norm_stderr,none": 0.11637279966159299,
+ "alias": " - ceval-valid_college_physics"
+ },
+ "ceval-valid_college_programming": {
+ "acc,none": 0.4864864864864865,
+ "acc_stderr,none": 0.08330289193201319,
+ "acc_norm,none": 0.4864864864864865,
+ "acc_norm_stderr,none": 0.08330289193201319,
+ "alias": " - ceval-valid_college_programming"
+ },
+ "ceval-valid_computer_architecture": {
+ "acc,none": 0.5238095238095238,
+ "acc_stderr,none": 0.11167656571008164,
+ "acc_norm,none": 0.5238095238095238,
+ "acc_norm_stderr,none": 0.11167656571008164,
+ "alias": " - ceval-valid_computer_architecture"
+ },
+ "ceval-valid_computer_network": {
+ "acc,none": 0.42105263157894735,
+ "acc_stderr,none": 0.11637279966159299,
+ "acc_norm,none": 0.42105263157894735,
+ "acc_norm_stderr,none": 0.11637279966159299,
+ "alias": " - ceval-valid_computer_network"
+ },
+ "ceval-valid_discrete_mathematics": {
+ "acc,none": 0.1875,
+ "acc_stderr,none": 0.10077822185373188,
+ "acc_norm,none": 0.1875,
+ "acc_norm_stderr,none": 0.10077822185373188,
+ "alias": " - ceval-valid_discrete_mathematics"
+ },
+ "ceval-valid_education_science": {
+ "acc,none": 0.7586206896551724,
+ "acc_stderr,none": 0.08086923723833499,
+ "acc_norm,none": 0.7586206896551724,
+ "acc_norm_stderr,none": 0.08086923723833499,
+ "alias": " - ceval-valid_education_science"
+ },
+ "ceval-valid_electrical_engineer": {
+ "acc,none": 0.35135135135135137,
+ "acc_stderr,none": 0.0795654132101608,
+ "acc_norm,none": 0.35135135135135137,
+ "acc_norm_stderr,none": 0.0795654132101608,
+ "alias": " - ceval-valid_electrical_engineer"
+ },
+ "ceval-valid_environmental_impact_assessment_engineer": {
+ "acc,none": 0.5161290322580645,
+ "acc_stderr,none": 0.09123958466923197,
+ "acc_norm,none": 0.5161290322580645,
+ "acc_norm_stderr,none": 0.09123958466923197,
+ "alias": " - ceval-valid_environmental_impact_assessment_engineer"
+ },
+ "ceval-valid_fire_engineer": {
+ "acc,none": 0.6774193548387096,
+ "acc_stderr,none": 0.08534681648595453,
+ "acc_norm,none": 0.6774193548387096,
+ "acc_norm_stderr,none": 0.08534681648595453,
+ "alias": " - ceval-valid_fire_engineer"
+ },
+ "ceval-valid_high_school_biology": {
+ "acc,none": 0.5263157894736842,
+ "acc_stderr,none": 0.1176877882894626,
+ "acc_norm,none": 0.5263157894736842,
+ "acc_norm_stderr,none": 0.1176877882894626,
+ "alias": " - ceval-valid_high_school_biology"
+ },
+ "ceval-valid_high_school_chemistry": {
+ "acc,none": 0.42105263157894735,
+ "acc_stderr,none": 0.11637279966159299,
+ "acc_norm,none": 0.42105263157894735,
+ "acc_norm_stderr,none": 0.11637279966159299,
+ "alias": " - ceval-valid_high_school_chemistry"
+ },
+ "ceval-valid_high_school_chinese": {
+ "acc,none": 0.42105263157894735,
+ "acc_stderr,none": 0.11637279966159299,
+ "acc_norm,none": 0.42105263157894735,
+ "acc_norm_stderr,none": 0.11637279966159299,
+ "alias": " - ceval-valid_high_school_chinese"
+ },
+ "ceval-valid_high_school_geography": {
+ "acc,none": 0.6842105263157895,
+ "acc_stderr,none": 0.10956136839295433,
+ "acc_norm,none": 0.6842105263157895,
+ "acc_norm_stderr,none": 0.10956136839295433,
+ "alias": " - ceval-valid_high_school_geography"
+ },
+ "ceval-valid_high_school_history": {
+ "acc,none": 0.85,
+ "acc_stderr,none": 0.08191780219091252,
+ "acc_norm,none": 0.85,
+ "acc_norm_stderr,none": 0.08191780219091252,
+ "alias": " - ceval-valid_high_school_history"
+ },
+ "ceval-valid_high_school_mathematics": {
+ "acc,none": 0.3333333333333333,
+ "acc_stderr,none": 0.11433239009500591,
+ "acc_norm,none": 0.3333333333333333,
+ "acc_norm_stderr,none": 0.11433239009500591,
+ "alias": " - ceval-valid_high_school_mathematics"
+ },
+ "ceval-valid_high_school_physics": {
+ "acc,none": 0.5789473684210527,
+ "acc_stderr,none": 0.11637279966159299,
+ "acc_norm,none": 0.5789473684210527,
+ "acc_norm_stderr,none": 0.11637279966159299,
+ "alias": " - ceval-valid_high_school_physics"
+ },
+ "ceval-valid_high_school_politics": {
+ "acc,none": 0.6842105263157895,
+ "acc_stderr,none": 0.10956136839295434,
+ "acc_norm,none": 0.6842105263157895,
+ "acc_norm_stderr,none": 0.10956136839295434,
+ "alias": " - ceval-valid_high_school_politics"
+ },
+ "ceval-valid_ideological_and_moral_cultivation": {
+ "acc,none": 0.8947368421052632,
+ "acc_stderr,none": 0.0723351864143449,
+ "acc_norm,none": 0.8947368421052632,
+ "acc_norm_stderr,none": 0.0723351864143449,
+ "alias": " - ceval-valid_ideological_and_moral_cultivation"
+ },
+ "ceval-valid_law": {
+ "acc,none": 0.5,
+ "acc_stderr,none": 0.1042572070285374,
+ "acc_norm,none": 0.5,
+ "acc_norm_stderr,none": 0.1042572070285374,
+ "alias": " - ceval-valid_law"
+ },
+ "ceval-valid_legal_professional": {
+ "acc,none": 0.5652173913043478,
+ "acc_stderr,none": 0.10568965974008646,
+ "acc_norm,none": 0.5652173913043478,
+ "acc_norm_stderr,none": 0.10568965974008646,
+ "alias": " - ceval-valid_legal_professional"
+ },
+ "ceval-valid_logic": {
+ "acc,none": 0.45454545454545453,
+ "acc_stderr,none": 0.10865714630312667,
+ "acc_norm,none": 0.45454545454545453,
+ "acc_norm_stderr,none": 0.10865714630312667,
+ "alias": " - ceval-valid_logic"
+ },
+ "ceval-valid_mao_zedong_thought": {
+ "acc,none": 0.9166666666666666,
+ "acc_stderr,none": 0.05763033956734371,
+ "acc_norm,none": 0.9166666666666666,
+ "acc_norm_stderr,none": 0.05763033956734371,
+ "alias": " - ceval-valid_mao_zedong_thought"
+ },
+ "ceval-valid_marxism": {
+ "acc,none": 0.9473684210526315,
+ "acc_stderr,none": 0.05263157894736841,
+ "acc_norm,none": 0.9473684210526315,
+ "acc_norm_stderr,none": 0.05263157894736841,
+ "alias": " - ceval-valid_marxism"
+ },
+ "ceval-valid_metrology_engineer": {
+ "acc,none": 0.75,
+ "acc_stderr,none": 0.09028938981432691,
+ "acc_norm,none": 0.75,
+ "acc_norm_stderr,none": 0.09028938981432691,
+ "alias": " - ceval-valid_metrology_engineer"
+ },
+ "ceval-valid_middle_school_biology": {
+ "acc,none": 0.8095238095238095,
+ "acc_stderr,none": 0.08780518530755131,
+ "acc_norm,none": 0.8095238095238095,
+ "acc_norm_stderr,none": 0.08780518530755131,
+ "alias": " - ceval-valid_middle_school_biology"
+ },
+ "ceval-valid_middle_school_chemistry": {
+ "acc,none": 0.75,
+ "acc_stderr,none": 0.09933992677987828,
+ "acc_norm,none": 0.75,
+ "acc_norm_stderr,none": 0.09933992677987828,
+ "alias": " - ceval-valid_middle_school_chemistry"
+ },
+ "ceval-valid_middle_school_geography": {
+ "acc,none": 0.5833333333333334,
+ "acc_stderr,none": 0.1486470975026408,
+ "acc_norm,none": 0.5833333333333334,
+ "acc_norm_stderr,none": 0.1486470975026408,
+ "alias": " - ceval-valid_middle_school_geography"
+ },
+ "ceval-valid_middle_school_history": {
+ "acc,none": 0.8636363636363636,
+ "acc_stderr,none": 0.07488677009526491,
+ "acc_norm,none": 0.8636363636363636,
+ "acc_norm_stderr,none": 0.07488677009526491,
+ "alias": " - ceval-valid_middle_school_history"
+ },
+ "ceval-valid_middle_school_mathematics": {
+ "acc,none": 0.21052631578947367,
+ "acc_stderr,none": 0.0960916767552923,
+ "acc_norm,none": 0.21052631578947367,
+ "acc_norm_stderr,none": 0.0960916767552923,
+ "alias": " - ceval-valid_middle_school_mathematics"
+ },
+ "ceval-valid_middle_school_physics": {
+ "acc,none": 0.6842105263157895,
+ "acc_stderr,none": 0.10956136839295434,
+ "acc_norm,none": 0.6842105263157895,
+ "acc_norm_stderr,none": 0.10956136839295434,
+ "alias": " - ceval-valid_middle_school_physics"
+ },
+ "ceval-valid_middle_school_politics": {
+ "acc,none": 0.8095238095238095,
+ "acc_stderr,none": 0.08780518530755134,
+ "acc_norm,none": 0.8095238095238095,
+ "acc_norm_stderr,none": 0.08780518530755134,
+ "alias": " - ceval-valid_middle_school_politics"
+ },
+ "ceval-valid_modern_chinese_history": {
+ "acc,none": 0.8260869565217391,
+ "acc_stderr,none": 0.08081046758996391,
+ "acc_norm,none": 0.8260869565217391,
+ "acc_norm_stderr,none": 0.08081046758996391,
+ "alias": " - ceval-valid_modern_chinese_history"
+ },
+ "ceval-valid_operating_system": {
+ "acc,none": 0.47368421052631576,
+ "acc_stderr,none": 0.11768778828946262,
+ "acc_norm,none": 0.47368421052631576,
+ "acc_norm_stderr,none": 0.11768778828946262,
+ "alias": " - ceval-valid_operating_system"
+ },
+ "ceval-valid_physician": {
+ "acc,none": 0.5102040816326531,
+ "acc_stderr,none": 0.07215375318230074,
+ "acc_norm,none": 0.5102040816326531,
+ "acc_norm_stderr,none": 0.07215375318230074,
+ "alias": " - ceval-valid_physician"
+ },
+ "ceval-valid_plant_protection": {
+ "acc,none": 0.45454545454545453,
+ "acc_stderr,none": 0.10865714630312667,
+ "acc_norm,none": 0.45454545454545453,
+ "acc_norm_stderr,none": 0.10865714630312667,
+ "alias": " - ceval-valid_plant_protection"
+ },
+ "ceval-valid_probability_and_statistics": {
+ "acc,none": 0.2222222222222222,
+ "acc_stderr,none": 0.10083169033033672,
+ "acc_norm,none": 0.2222222222222222,
+ "acc_norm_stderr,none": 0.10083169033033672,
+ "alias": " - ceval-valid_probability_and_statistics"
+ },
+ "ceval-valid_professional_tour_guide": {
+ "acc,none": 0.5172413793103449,
+ "acc_stderr,none": 0.09443492370778725,
+ "acc_norm,none": 0.5172413793103449,
+ "acc_norm_stderr,none": 0.09443492370778725,
+ "alias": " - ceval-valid_professional_tour_guide"
+ },
+ "ceval-valid_sports_science": {
+ "acc,none": 0.47368421052631576,
+ "acc_stderr,none": 0.11768778828946262,
+ "acc_norm,none": 0.47368421052631576,
+ "acc_norm_stderr,none": 0.11768778828946262,
+ "alias": " - ceval-valid_sports_science"
+ },
+ "ceval-valid_tax_accountant": {
+ "acc,none": 0.5102040816326531,
+ "acc_stderr,none": 0.07215375318230074,
+ "acc_norm,none": 0.5102040816326531,
+ "acc_norm_stderr,none": 0.07215375318230074,
+ "alias": " - ceval-valid_tax_accountant"
+ },
+ "ceval-valid_teacher_qualification": {
+ "acc,none": 0.7954545454545454,
+ "acc_stderr,none": 0.06151320742474889,
+ "acc_norm,none": 0.7954545454545454,
+ "acc_norm_stderr,none": 0.06151320742474889,
+ "alias": " - ceval-valid_teacher_qualification"
+ },
+ "ceval-valid_urban_and_rural_planner": {
+ "acc,none": 0.6086956521739131,
+ "acc_stderr,none": 0.07275304578557182,
+ "acc_norm,none": 0.6086956521739131,
+ "acc_norm_stderr,none": 0.07275304578557182,
+ "alias": " - ceval-valid_urban_and_rural_planner"
+ },
+ "ceval-valid_veterinary_medicine": {
+ "acc,none": 0.5652173913043478,
+ "acc_stderr,none": 0.10568965974008646,
+ "acc_norm,none": 0.5652173913043478,
+ "acc_norm_stderr,none": 0.10568965974008646,
+ "alias": " - ceval-valid_veterinary_medicine"
+ }
+ },
+ "groups": {
+ "ceval-valid": {
+ "acc_norm,none": 0.5505200594353641,
+ "acc_norm_stderr,none": 0.012961996609689193,
+ "acc,none": 0.5505200594353641,
+ "acc_stderr,none": 0.012961996609689193,
+ "alias": "ceval-valid"
+ }
+ },
+ "group_subtasks": {
+ "ceval-valid": [
+ "ceval-valid_teacher_qualification",
+ "ceval-valid_plant_protection",
+ "ceval-valid_high_school_geography",
+ "ceval-valid_high_school_chinese",
+ "ceval-valid_probability_and_statistics",
+ "ceval-valid_environmental_impact_assessment_engineer",
+ "ceval-valid_chinese_language_and_literature",
+ "ceval-valid_art_studies",
+ "ceval-valid_veterinary_medicine",
+ "ceval-valid_sports_science",
+ "ceval-valid_professional_tour_guide",
+ "ceval-valid_middle_school_politics",
+ "ceval-valid_middle_school_geography",
+ "ceval-valid_mao_zedong_thought",
+ "ceval-valid_law",
+ "ceval-valid_high_school_mathematics",
+ "ceval-valid_high_school_biology",
+ "ceval-valid_computer_network",
+ "ceval-valid_college_chemistry",
+ "ceval-valid_urban_and_rural_planner",
+ "ceval-valid_physician",
+ "ceval-valid_middle_school_mathematics",
+ "ceval-valid_middle_school_biology",
+ "ceval-valid_logic",
+ "ceval-valid_legal_professional",
+ "ceval-valid_college_physics",
+ "ceval-valid_college_economics",
+ "ceval-valid_civil_servant",
+ "ceval-valid_business_administration",
+ "ceval-valid_tax_accountant",
+ "ceval-valid_metrology_engineer",
+ "ceval-valid_high_school_physics",
+ "ceval-valid_high_school_chemistry",
+ "ceval-valid_computer_architecture",
+ "ceval-valid_accountant",
+ "ceval-valid_operating_system",
+ "ceval-valid_middle_school_physics",
+ "ceval-valid_marxism",
+ "ceval-valid_fire_engineer",
+ "ceval-valid_college_programming",
+ "ceval-valid_middle_school_chemistry",
+ "ceval-valid_high_school_politics",
+ "ceval-valid_high_school_history",
+ "ceval-valid_electrical_engineer",
+ "ceval-valid_discrete_mathematics",
+ "ceval-valid_advanced_mathematics",
+ "ceval-valid_modern_chinese_history",
+ "ceval-valid_middle_school_history",
+ "ceval-valid_ideological_and_moral_cultivation",
+ "ceval-valid_education_science",
+ "ceval-valid_clinical_medicine",
+ "ceval-valid_basic_medicine"
+ ]
+ },
+ "configs": {
+ "ceval-valid_accountant": {
+ "task": "ceval-valid_accountant",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "accountant",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于注册会计师的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_advanced_mathematics": {
+ "task": "ceval-valid_advanced_mathematics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "advanced_mathematics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高等数学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_art_studies": {
+ "task": "ceval-valid_art_studies",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "art_studies",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于艺术学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_basic_medicine": {
+ "task": "ceval-valid_basic_medicine",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "basic_medicine",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于基础医学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_business_administration": {
+ "task": "ceval-valid_business_administration",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "business_administration",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于工商管理的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_chinese_language_and_literature": {
+ "task": "ceval-valid_chinese_language_and_literature",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "chinese_language_and_literature",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于中国语言文学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_civil_servant": {
+ "task": "ceval-valid_civil_servant",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "civil_servant",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于公务员的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_clinical_medicine": {
+ "task": "ceval-valid_clinical_medicine",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "clinical_medicine",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于临床医学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_college_chemistry": {
+ "task": "ceval-valid_college_chemistry",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "college_chemistry",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于大学化学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_college_economics": {
+ "task": "ceval-valid_college_economics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "college_economics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于大学经济学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_college_physics": {
+ "task": "ceval-valid_college_physics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "college_physics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于大学物理的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_college_programming": {
+ "task": "ceval-valid_college_programming",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "college_programming",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于大学编程的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_computer_architecture": {
+ "task": "ceval-valid_computer_architecture",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "computer_architecture",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于计算机组成的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_computer_network": {
+ "task": "ceval-valid_computer_network",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "computer_network",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于计算机网络的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_discrete_mathematics": {
+ "task": "ceval-valid_discrete_mathematics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "discrete_mathematics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于离散数学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_education_science": {
+ "task": "ceval-valid_education_science",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "education_science",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于教育学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_electrical_engineer": {
+ "task": "ceval-valid_electrical_engineer",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "electrical_engineer",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于注册电气工程师的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_environmental_impact_assessment_engineer": {
+ "task": "ceval-valid_environmental_impact_assessment_engineer",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "environmental_impact_assessment_engineer",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于环境影响评价工程师的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_fire_engineer": {
+ "task": "ceval-valid_fire_engineer",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "fire_engineer",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于注册消防工程师的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_biology": {
+ "task": "ceval-valid_high_school_biology",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_biology",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中生物的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_chemistry": {
+ "task": "ceval-valid_high_school_chemistry",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_chemistry",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中化学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_chinese": {
+ "task": "ceval-valid_high_school_chinese",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_chinese",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中语文的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_geography": {
+ "task": "ceval-valid_high_school_geography",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_geography",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中地理的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_history": {
+ "task": "ceval-valid_high_school_history",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_history",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中历史的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_mathematics": {
+ "task": "ceval-valid_high_school_mathematics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_mathematics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中数学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_physics": {
+ "task": "ceval-valid_high_school_physics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_physics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中物理的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_high_school_politics": {
+ "task": "ceval-valid_high_school_politics",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "high_school_politics",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于高中政治的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_ideological_and_moral_cultivation": {
+ "task": "ceval-valid_ideological_and_moral_cultivation",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "ideological_and_moral_cultivation",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于思想道德修养与法律基础的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_law": {
+ "task": "ceval-valid_law",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "law",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于法学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_legal_professional": {
+ "task": "ceval-valid_legal_professional",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "legal_professional",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于法律职业资格的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_logic": {
+ "task": "ceval-valid_logic",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "logic",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于逻辑学的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_mao_zedong_thought": {
+ "task": "ceval-valid_mao_zedong_thought",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "mao_zedong_thought",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于毛泽东思想和中国特色社会主义理论体系概论的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_marxism": {
+ "task": "ceval-valid_marxism",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "marxism",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "以下是中国关于马克思主义基本原理的单项选择题,请选出其中的正确答案。\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ },
+ "ceval-valid_metrology_engineer": {
+ "task": "ceval-valid_metrology_engineer",
+ "group": "ceval-valid",
+ "dataset_path": "ceval/ceval-exam",
+ "dataset_name": "metrology_engineer",
+ "validation_split": "val",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
1803
+ "doc_to_choice": [
1804
+ "A",
1805
+ "B",
1806
+ "C",
1807
+ "D"
1808
+ ],
1809
+ "description": "以下是中国关于注册计量师的单项选择题,请选出其中的正确答案。\n\n",
1810
+ "target_delimiter": " ",
1811
+ "fewshot_delimiter": "\n\n",
1812
+ "fewshot_config": {
1813
+ "sampler": "first_n"
1814
+ },
1815
+ "num_fewshot": 0,
1816
+ "metric_list": [
1817
+ {
1818
+ "metric": "acc",
1819
+ "aggregation": "mean",
1820
+ "higher_is_better": true
1821
+ },
1822
+ {
1823
+ "metric": "acc_norm",
1824
+ "aggregation": "mean",
1825
+ "higher_is_better": true
1826
+ }
1827
+ ],
1828
+ "output_type": "multiple_choice",
1829
+ "repeats": 1,
1830
+ "should_decontaminate": false,
1831
+ "metadata": {
1832
+ "version": 1.0
1833
+ }
1834
+ },
1835
+ "ceval-valid_middle_school_biology": {
1836
+ "task": "ceval-valid_middle_school_biology",
1837
+ "group": "ceval-valid",
1838
+ "dataset_path": "ceval/ceval-exam",
1839
+ "dataset_name": "middle_school_biology",
1840
+ "validation_split": "val",
1841
+ "fewshot_split": "dev",
1842
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
1843
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
1844
+ "doc_to_choice": [
1845
+ "A",
1846
+ "B",
1847
+ "C",
1848
+ "D"
1849
+ ],
1850
+ "description": "以下是中国关于初中生物的单项选择题,请选出其中的正确答案。\n\n",
1851
+ "target_delimiter": " ",
1852
+ "fewshot_delimiter": "\n\n",
1853
+ "fewshot_config": {
1854
+ "sampler": "first_n"
1855
+ },
1856
+ "num_fewshot": 0,
1857
+ "metric_list": [
1858
+ {
1859
+ "metric": "acc",
1860
+ "aggregation": "mean",
1861
+ "higher_is_better": true
1862
+ },
1863
+ {
1864
+ "metric": "acc_norm",
1865
+ "aggregation": "mean",
1866
+ "higher_is_better": true
1867
+ }
1868
+ ],
1869
+ "output_type": "multiple_choice",
1870
+ "repeats": 1,
1871
+ "should_decontaminate": false,
1872
+ "metadata": {
1873
+ "version": 1.0
1874
+ }
1875
+ },
1876
+ "ceval-valid_middle_school_chemistry": {
1877
+ "task": "ceval-valid_middle_school_chemistry",
1878
+ "group": "ceval-valid",
1879
+ "dataset_path": "ceval/ceval-exam",
1880
+ "dataset_name": "middle_school_chemistry",
1881
+ "validation_split": "val",
1882
+ "fewshot_split": "dev",
1883
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
1884
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
1885
+ "doc_to_choice": [
1886
+ "A",
1887
+ "B",
1888
+ "C",
1889
+ "D"
1890
+ ],
1891
+ "description": "以下是中国关于初中化学的单项选择题,请选出其中的正确答案。\n\n",
1892
+ "target_delimiter": " ",
1893
+ "fewshot_delimiter": "\n\n",
1894
+ "fewshot_config": {
1895
+ "sampler": "first_n"
1896
+ },
1897
+ "num_fewshot": 0,
1898
+ "metric_list": [
1899
+ {
1900
+ "metric": "acc",
1901
+ "aggregation": "mean",
1902
+ "higher_is_better": true
1903
+ },
1904
+ {
1905
+ "metric": "acc_norm",
1906
+ "aggregation": "mean",
1907
+ "higher_is_better": true
1908
+ }
1909
+ ],
1910
+ "output_type": "multiple_choice",
1911
+ "repeats": 1,
1912
+ "should_decontaminate": false,
1913
+ "metadata": {
1914
+ "version": 1.0
1915
+ }
1916
+ },
1917
+ "ceval-valid_middle_school_geography": {
1918
+ "task": "ceval-valid_middle_school_geography",
1919
+ "group": "ceval-valid",
1920
+ "dataset_path": "ceval/ceval-exam",
1921
+ "dataset_name": "middle_school_geography",
1922
+ "validation_split": "val",
1923
+ "fewshot_split": "dev",
1924
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
1925
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
1926
+ "doc_to_choice": [
1927
+ "A",
1928
+ "B",
1929
+ "C",
1930
+ "D"
1931
+ ],
1932
+ "description": "以下是中国关于初中地理的单项选择题,请选出其中的正确答案。\n\n",
1933
+ "target_delimiter": " ",
1934
+ "fewshot_delimiter": "\n\n",
1935
+ "fewshot_config": {
1936
+ "sampler": "first_n"
1937
+ },
1938
+ "num_fewshot": 0,
1939
+ "metric_list": [
1940
+ {
1941
+ "metric": "acc",
1942
+ "aggregation": "mean",
1943
+ "higher_is_better": true
1944
+ },
1945
+ {
1946
+ "metric": "acc_norm",
1947
+ "aggregation": "mean",
1948
+ "higher_is_better": true
1949
+ }
1950
+ ],
1951
+ "output_type": "multiple_choice",
1952
+ "repeats": 1,
1953
+ "should_decontaminate": false,
1954
+ "metadata": {
1955
+ "version": 1.0
1956
+ }
1957
+ },
1958
+ "ceval-valid_middle_school_history": {
1959
+ "task": "ceval-valid_middle_school_history",
1960
+ "group": "ceval-valid",
1961
+ "dataset_path": "ceval/ceval-exam",
1962
+ "dataset_name": "middle_school_history",
1963
+ "validation_split": "val",
1964
+ "fewshot_split": "dev",
1965
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
1966
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
1967
+ "doc_to_choice": [
1968
+ "A",
1969
+ "B",
1970
+ "C",
1971
+ "D"
1972
+ ],
1973
+ "description": "以下是中国关于初中历史的单项选择题,请选出其中的正确答案。\n\n",
1974
+ "target_delimiter": " ",
1975
+ "fewshot_delimiter": "\n\n",
1976
+ "fewshot_config": {
1977
+ "sampler": "first_n"
1978
+ },
1979
+ "num_fewshot": 0,
1980
+ "metric_list": [
1981
+ {
1982
+ "metric": "acc",
1983
+ "aggregation": "mean",
1984
+ "higher_is_better": true
1985
+ },
1986
+ {
1987
+ "metric": "acc_norm",
1988
+ "aggregation": "mean",
1989
+ "higher_is_better": true
1990
+ }
1991
+ ],
1992
+ "output_type": "multiple_choice",
1993
+ "repeats": 1,
1994
+ "should_decontaminate": false,
1995
+ "metadata": {
1996
+ "version": 1.0
1997
+ }
1998
+ },
1999
+ "ceval-valid_middle_school_mathematics": {
2000
+ "task": "ceval-valid_middle_school_mathematics",
2001
+ "group": "ceval-valid",
2002
+ "dataset_path": "ceval/ceval-exam",
2003
+ "dataset_name": "middle_school_mathematics",
2004
+ "validation_split": "val",
2005
+ "fewshot_split": "dev",
2006
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2007
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2008
+ "doc_to_choice": [
2009
+ "A",
2010
+ "B",
2011
+ "C",
2012
+ "D"
2013
+ ],
2014
+ "description": "以下是中国关于初中数学的单项选择题,请选出其中的正确答案。\n\n",
2015
+ "target_delimiter": " ",
2016
+ "fewshot_delimiter": "\n\n",
2017
+ "fewshot_config": {
2018
+ "sampler": "first_n"
2019
+ },
2020
+ "num_fewshot": 0,
2021
+ "metric_list": [
2022
+ {
2023
+ "metric": "acc",
2024
+ "aggregation": "mean",
2025
+ "higher_is_better": true
2026
+ },
2027
+ {
2028
+ "metric": "acc_norm",
2029
+ "aggregation": "mean",
2030
+ "higher_is_better": true
2031
+ }
2032
+ ],
2033
+ "output_type": "multiple_choice",
2034
+ "repeats": 1,
2035
+ "should_decontaminate": false,
2036
+ "metadata": {
2037
+ "version": 1.0
2038
+ }
2039
+ },
2040
+ "ceval-valid_middle_school_physics": {
2041
+ "task": "ceval-valid_middle_school_physics",
2042
+ "group": "ceval-valid",
2043
+ "dataset_path": "ceval/ceval-exam",
2044
+ "dataset_name": "middle_school_physics",
2045
+ "validation_split": "val",
2046
+ "fewshot_split": "dev",
2047
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2048
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2049
+ "doc_to_choice": [
2050
+ "A",
2051
+ "B",
2052
+ "C",
2053
+ "D"
2054
+ ],
2055
+ "description": "以下是中国关于初中物理的单项选择题,请选出其中的正确答案。\n\n",
2056
+ "target_delimiter": " ",
2057
+ "fewshot_delimiter": "\n\n",
2058
+ "fewshot_config": {
2059
+ "sampler": "first_n"
2060
+ },
2061
+ "num_fewshot": 0,
2062
+ "metric_list": [
2063
+ {
2064
+ "metric": "acc",
2065
+ "aggregation": "mean",
2066
+ "higher_is_better": true
2067
+ },
2068
+ {
2069
+ "metric": "acc_norm",
2070
+ "aggregation": "mean",
2071
+ "higher_is_better": true
2072
+ }
2073
+ ],
2074
+ "output_type": "multiple_choice",
2075
+ "repeats": 1,
2076
+ "should_decontaminate": false,
2077
+ "metadata": {
2078
+ "version": 1.0
2079
+ }
2080
+ },
2081
+ "ceval-valid_middle_school_politics": {
2082
+ "task": "ceval-valid_middle_school_politics",
2083
+ "group": "ceval-valid",
2084
+ "dataset_path": "ceval/ceval-exam",
2085
+ "dataset_name": "middle_school_politics",
2086
+ "validation_split": "val",
2087
+ "fewshot_split": "dev",
2088
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2089
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2090
+ "doc_to_choice": [
2091
+ "A",
2092
+ "B",
2093
+ "C",
2094
+ "D"
2095
+ ],
2096
+ "description": "以下是中国关于初中政治的单项选择题,请选出其中的正确答案。\n\n",
2097
+ "target_delimiter": " ",
2098
+ "fewshot_delimiter": "\n\n",
2099
+ "fewshot_config": {
2100
+ "sampler": "first_n"
2101
+ },
2102
+ "num_fewshot": 0,
2103
+ "metric_list": [
2104
+ {
2105
+ "metric": "acc",
2106
+ "aggregation": "mean",
2107
+ "higher_is_better": true
2108
+ },
2109
+ {
2110
+ "metric": "acc_norm",
2111
+ "aggregation": "mean",
2112
+ "higher_is_better": true
2113
+ }
2114
+ ],
2115
+ "output_type": "multiple_choice",
2116
+ "repeats": 1,
2117
+ "should_decontaminate": false,
2118
+ "metadata": {
2119
+ "version": 1.0
2120
+ }
2121
+ },
2122
+ "ceval-valid_modern_chinese_history": {
2123
+ "task": "ceval-valid_modern_chinese_history",
2124
+ "group": "ceval-valid",
2125
+ "dataset_path": "ceval/ceval-exam",
2126
+ "dataset_name": "modern_chinese_history",
2127
+ "validation_split": "val",
2128
+ "fewshot_split": "dev",
2129
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2130
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2131
+ "doc_to_choice": [
2132
+ "A",
2133
+ "B",
2134
+ "C",
2135
+ "D"
2136
+ ],
2137
+ "description": "以下是中国关于近代史纲要的单项选择题,请选出其中的正确答案。\n\n",
2138
+ "target_delimiter": " ",
2139
+ "fewshot_delimiter": "\n\n",
2140
+ "fewshot_config": {
2141
+ "sampler": "first_n"
2142
+ },
2143
+ "num_fewshot": 0,
2144
+ "metric_list": [
2145
+ {
2146
+ "metric": "acc",
2147
+ "aggregation": "mean",
2148
+ "higher_is_better": true
2149
+ },
2150
+ {
2151
+ "metric": "acc_norm",
2152
+ "aggregation": "mean",
2153
+ "higher_is_better": true
2154
+ }
2155
+ ],
2156
+ "output_type": "multiple_choice",
2157
+ "repeats": 1,
2158
+ "should_decontaminate": false,
2159
+ "metadata": {
2160
+ "version": 1.0
2161
+ }
2162
+ },
2163
+ "ceval-valid_operating_system": {
2164
+ "task": "ceval-valid_operating_system",
2165
+ "group": "ceval-valid",
2166
+ "dataset_path": "ceval/ceval-exam",
2167
+ "dataset_name": "operating_system",
2168
+ "validation_split": "val",
2169
+ "fewshot_split": "dev",
2170
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2171
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2172
+ "doc_to_choice": [
2173
+ "A",
2174
+ "B",
2175
+ "C",
2176
+ "D"
2177
+ ],
2178
+ "description": "以下是中国关于操作系统的单项选择题,请选出其中的正确答案。\n\n",
2179
+ "target_delimiter": " ",
2180
+ "fewshot_delimiter": "\n\n",
2181
+ "fewshot_config": {
2182
+ "sampler": "first_n"
2183
+ },
2184
+ "num_fewshot": 0,
2185
+ "metric_list": [
2186
+ {
2187
+ "metric": "acc",
2188
+ "aggregation": "mean",
2189
+ "higher_is_better": true
2190
+ },
2191
+ {
2192
+ "metric": "acc_norm",
2193
+ "aggregation": "mean",
2194
+ "higher_is_better": true
2195
+ }
2196
+ ],
2197
+ "output_type": "multiple_choice",
2198
+ "repeats": 1,
2199
+ "should_decontaminate": false,
2200
+ "metadata": {
2201
+ "version": 1.0
2202
+ }
2203
+ },
2204
+ "ceval-valid_physician": {
2205
+ "task": "ceval-valid_physician",
2206
+ "group": "ceval-valid",
2207
+ "dataset_path": "ceval/ceval-exam",
2208
+ "dataset_name": "physician",
2209
+ "validation_split": "val",
2210
+ "fewshot_split": "dev",
2211
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2212
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2213
+ "doc_to_choice": [
2214
+ "A",
2215
+ "B",
2216
+ "C",
2217
+ "D"
2218
+ ],
2219
+ "description": "以下是中国关于医师资格的单项选择题,请选出其中的正确答案。\n\n",
2220
+ "target_delimiter": " ",
2221
+ "fewshot_delimiter": "\n\n",
2222
+ "fewshot_config": {
2223
+ "sampler": "first_n"
2224
+ },
2225
+ "num_fewshot": 0,
2226
+ "metric_list": [
2227
+ {
2228
+ "metric": "acc",
2229
+ "aggregation": "mean",
2230
+ "higher_is_better": true
2231
+ },
2232
+ {
2233
+ "metric": "acc_norm",
2234
+ "aggregation": "mean",
2235
+ "higher_is_better": true
2236
+ }
2237
+ ],
2238
+ "output_type": "multiple_choice",
2239
+ "repeats": 1,
2240
+ "should_decontaminate": false,
2241
+ "metadata": {
2242
+ "version": 1.0
2243
+ }
2244
+ },
2245
+ "ceval-valid_plant_protection": {
2246
+ "task": "ceval-valid_plant_protection",
2247
+ "group": "ceval-valid",
2248
+ "dataset_path": "ceval/ceval-exam",
2249
+ "dataset_name": "plant_protection",
2250
+ "validation_split": "val",
2251
+ "fewshot_split": "dev",
2252
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2253
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2254
+ "doc_to_choice": [
2255
+ "A",
2256
+ "B",
2257
+ "C",
2258
+ "D"
2259
+ ],
2260
+ "description": "以下是中国关于植物保护的单项选择题,请选出其中的正确答案。\n\n",
2261
+ "target_delimiter": " ",
2262
+ "fewshot_delimiter": "\n\n",
2263
+ "fewshot_config": {
2264
+ "sampler": "first_n"
2265
+ },
2266
+ "num_fewshot": 0,
2267
+ "metric_list": [
2268
+ {
2269
+ "metric": "acc",
2270
+ "aggregation": "mean",
2271
+ "higher_is_better": true
2272
+ },
2273
+ {
2274
+ "metric": "acc_norm",
2275
+ "aggregation": "mean",
2276
+ "higher_is_better": true
2277
+ }
2278
+ ],
2279
+ "output_type": "multiple_choice",
2280
+ "repeats": 1,
2281
+ "should_decontaminate": false,
2282
+ "metadata": {
2283
+ "version": 1.0
2284
+ }
2285
+ },
2286
+ "ceval-valid_probability_and_statistics": {
2287
+ "task": "ceval-valid_probability_and_statistics",
2288
+ "group": "ceval-valid",
2289
+ "dataset_path": "ceval/ceval-exam",
2290
+ "dataset_name": "probability_and_statistics",
2291
+ "validation_split": "val",
2292
+ "fewshot_split": "dev",
2293
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2294
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2295
+ "doc_to_choice": [
2296
+ "A",
2297
+ "B",
2298
+ "C",
2299
+ "D"
2300
+ ],
2301
+ "description": "以下是中国关于概率统计的单项选择题,请选出其中的正确答案。\n\n",
2302
+ "target_delimiter": " ",
2303
+ "fewshot_delimiter": "\n\n",
2304
+ "fewshot_config": {
2305
+ "sampler": "first_n"
2306
+ },
2307
+ "num_fewshot": 0,
2308
+ "metric_list": [
2309
+ {
2310
+ "metric": "acc",
2311
+ "aggregation": "mean",
2312
+ "higher_is_better": true
2313
+ },
2314
+ {
2315
+ "metric": "acc_norm",
2316
+ "aggregation": "mean",
2317
+ "higher_is_better": true
2318
+ }
2319
+ ],
2320
+ "output_type": "multiple_choice",
2321
+ "repeats": 1,
2322
+ "should_decontaminate": false,
2323
+ "metadata": {
2324
+ "version": 1.0
2325
+ }
2326
+ },
2327
+ "ceval-valid_professional_tour_guide": {
2328
+ "task": "ceval-valid_professional_tour_guide",
2329
+ "group": "ceval-valid",
2330
+ "dataset_path": "ceval/ceval-exam",
2331
+ "dataset_name": "professional_tour_guide",
2332
+ "validation_split": "val",
2333
+ "fewshot_split": "dev",
2334
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2335
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2336
+ "doc_to_choice": [
2337
+ "A",
2338
+ "B",
2339
+ "C",
2340
+ "D"
2341
+ ],
2342
+ "description": "以下是中国关于导游资格的单项选择题,请选出其中的正确答案。\n\n",
2343
+ "target_delimiter": " ",
2344
+ "fewshot_delimiter": "\n\n",
2345
+ "fewshot_config": {
2346
+ "sampler": "first_n"
2347
+ },
2348
+ "num_fewshot": 0,
2349
+ "metric_list": [
2350
+ {
2351
+ "metric": "acc",
2352
+ "aggregation": "mean",
2353
+ "higher_is_better": true
2354
+ },
2355
+ {
2356
+ "metric": "acc_norm",
2357
+ "aggregation": "mean",
2358
+ "higher_is_better": true
2359
+ }
2360
+ ],
2361
+ "output_type": "multiple_choice",
2362
+ "repeats": 1,
2363
+ "should_decontaminate": false,
2364
+ "metadata": {
2365
+ "version": 1.0
2366
+ }
2367
+ },
2368
+ "ceval-valid_sports_science": {
2369
+ "task": "ceval-valid_sports_science",
2370
+ "group": "ceval-valid",
2371
+ "dataset_path": "ceval/ceval-exam",
2372
+ "dataset_name": "sports_science",
2373
+ "validation_split": "val",
2374
+ "fewshot_split": "dev",
2375
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2376
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2377
+ "doc_to_choice": [
2378
+ "A",
2379
+ "B",
2380
+ "C",
2381
+ "D"
2382
+ ],
2383
+ "description": "以下是中国关于体育学的单项选择题,请选出其中的正确答案。\n\n",
2384
+ "target_delimiter": " ",
2385
+ "fewshot_delimiter": "\n\n",
2386
+ "fewshot_config": {
2387
+ "sampler": "first_n"
2388
+ },
2389
+ "num_fewshot": 0,
2390
+ "metric_list": [
2391
+ {
2392
+ "metric": "acc",
2393
+ "aggregation": "mean",
2394
+ "higher_is_better": true
2395
+ },
2396
+ {
2397
+ "metric": "acc_norm",
2398
+ "aggregation": "mean",
2399
+ "higher_is_better": true
2400
+ }
2401
+ ],
2402
+ "output_type": "multiple_choice",
2403
+ "repeats": 1,
2404
+ "should_decontaminate": false,
2405
+ "metadata": {
2406
+ "version": 1.0
2407
+ }
2408
+ },
2409
+ "ceval-valid_tax_accountant": {
2410
+ "task": "ceval-valid_tax_accountant",
2411
+ "group": "ceval-valid",
2412
+ "dataset_path": "ceval/ceval-exam",
2413
+ "dataset_name": "tax_accountant",
2414
+ "validation_split": "val",
2415
+ "fewshot_split": "dev",
2416
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2417
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2418
+ "doc_to_choice": [
2419
+ "A",
2420
+ "B",
2421
+ "C",
2422
+ "D"
2423
+ ],
2424
+ "description": "以下是中国关于税务师的单项选择题,请选出其中的正确答案。\n\n",
2425
+ "target_delimiter": " ",
2426
+ "fewshot_delimiter": "\n\n",
2427
+ "fewshot_config": {
2428
+ "sampler": "first_n"
2429
+ },
2430
+ "num_fewshot": 0,
2431
+ "metric_list": [
2432
+ {
2433
+ "metric": "acc",
2434
+ "aggregation": "mean",
2435
+ "higher_is_better": true
2436
+ },
2437
+ {
2438
+ "metric": "acc_norm",
2439
+ "aggregation": "mean",
2440
+ "higher_is_better": true
2441
+ }
2442
+ ],
2443
+ "output_type": "multiple_choice",
2444
+ "repeats": 1,
2445
+ "should_decontaminate": false,
2446
+ "metadata": {
2447
+ "version": 1.0
2448
+ }
2449
+ },
2450
+ "ceval-valid_teacher_qualification": {
2451
+ "task": "ceval-valid_teacher_qualification",
2452
+ "group": "ceval-valid",
2453
+ "dataset_path": "ceval/ceval-exam",
2454
+ "dataset_name": "teacher_qualification",
2455
+ "validation_split": "val",
2456
+ "fewshot_split": "dev",
2457
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2458
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2459
+ "doc_to_choice": [
2460
+ "A",
2461
+ "B",
2462
+ "C",
2463
+ "D"
2464
+ ],
2465
+ "description": "以下是中国关于教师资格的单项选择题,请选出其中的正确答案。\n\n",
2466
+ "target_delimiter": " ",
2467
+ "fewshot_delimiter": "\n\n",
2468
+ "fewshot_config": {
2469
+ "sampler": "first_n"
2470
+ },
2471
+ "num_fewshot": 0,
2472
+ "metric_list": [
2473
+ {
2474
+ "metric": "acc",
2475
+ "aggregation": "mean",
2476
+ "higher_is_better": true
2477
+ },
2478
+ {
2479
+ "metric": "acc_norm",
2480
+ "aggregation": "mean",
2481
+ "higher_is_better": true
2482
+ }
2483
+ ],
2484
+ "output_type": "multiple_choice",
2485
+ "repeats": 1,
2486
+ "should_decontaminate": false,
2487
+ "metadata": {
2488
+ "version": 1.0
2489
+ }
2490
+ },
2491
+ "ceval-valid_urban_and_rural_planner": {
2492
+ "task": "ceval-valid_urban_and_rural_planner",
2493
+ "group": "ceval-valid",
2494
+ "dataset_path": "ceval/ceval-exam",
2495
+ "dataset_name": "urban_and_rural_planner",
2496
+ "validation_split": "val",
2497
+ "fewshot_split": "dev",
2498
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2499
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2500
+ "doc_to_choice": [
2501
+ "A",
2502
+ "B",
2503
+ "C",
2504
+ "D"
2505
+ ],
2506
+ "description": "以下是中国关于注册城乡规划师的单项选择题,请选出其中的正确答案。\n\n",
2507
+ "target_delimiter": " ",
2508
+ "fewshot_delimiter": "\n\n",
2509
+ "fewshot_config": {
2510
+ "sampler": "first_n"
2511
+ },
2512
+ "num_fewshot": 0,
2513
+ "metric_list": [
2514
+ {
2515
+ "metric": "acc",
2516
+ "aggregation": "mean",
2517
+ "higher_is_better": true
2518
+ },
2519
+ {
2520
+ "metric": "acc_norm",
2521
+ "aggregation": "mean",
2522
+ "higher_is_better": true
2523
+ }
2524
+ ],
2525
+ "output_type": "multiple_choice",
2526
+ "repeats": 1,
2527
+ "should_decontaminate": false,
2528
+ "metadata": {
2529
+ "version": 1.0
2530
+ }
2531
+ },
2532
+ "ceval-valid_veterinary_medicine": {
2533
+ "task": "ceval-valid_veterinary_medicine",
2534
+ "group": "ceval-valid",
2535
+ "dataset_path": "ceval/ceval-exam",
2536
+ "dataset_name": "veterinary_medicine",
2537
+ "validation_split": "val",
2538
+ "fewshot_split": "dev",
2539
+ "doc_to_text": "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:",
2540
+ "doc_to_target": "{{['A', 'B', 'C', 'D'].index(answer)}}",
2541
+ "doc_to_choice": [
2542
+ "A",
2543
+ "B",
2544
+ "C",
2545
+ "D"
2546
+ ],
2547
+ "description": "以下是中国关于兽医学的单项选择题,请选出其中的正确答��。\n\n",
2548
+ "target_delimiter": " ",
2549
+ "fewshot_delimiter": "\n\n",
2550
+ "fewshot_config": {
2551
+ "sampler": "first_n"
2552
+ },
2553
+ "num_fewshot": 0,
2554
+ "metric_list": [
2555
+ {
2556
+ "metric": "acc",
2557
+ "aggregation": "mean",
2558
+ "higher_is_better": true
2559
+ },
2560
+ {
2561
+ "metric": "acc_norm",
2562
+ "aggregation": "mean",
2563
+ "higher_is_better": true
2564
+ }
2565
+ ],
2566
+ "output_type": "multiple_choice",
2567
+ "repeats": 1,
2568
+ "should_decontaminate": false,
2569
+ "metadata": {
2570
+ "version": 1.0
2571
+ }
2572
+ }
2573
+ },
2574
+ "versions": {
2575
+ "ceval-valid_accountant": 1.0,
2576
+ "ceval-valid_advanced_mathematics": 1.0,
2577
+ "ceval-valid_art_studies": 1.0,
2578
+ "ceval-valid_basic_medicine": 1.0,
2579
+ "ceval-valid_business_administration": 1.0,
2580
+ "ceval-valid_chinese_language_and_literature": 1.0,
2581
+ "ceval-valid_civil_servant": 1.0,
2582
+ "ceval-valid_clinical_medicine": 1.0,
2583
+ "ceval-valid_college_chemistry": 1.0,
2584
+ "ceval-valid_college_economics": 1.0,
2585
+ "ceval-valid_college_physics": 1.0,
2586
+ "ceval-valid_college_programming": 1.0,
2587
+ "ceval-valid_computer_architecture": 1.0,
2588
+ "ceval-valid_computer_network": 1.0,
2589
+ "ceval-valid_discrete_mathematics": 1.0,
2590
+ "ceval-valid_education_science": 1.0,
2591
+ "ceval-valid_electrical_engineer": 1.0,
2592
+ "ceval-valid_environmental_impact_assessment_engineer": 1.0,
2593
+ "ceval-valid_fire_engineer": 1.0,
2594
+ "ceval-valid_high_school_biology": 1.0,
2595
+ "ceval-valid_high_school_chemistry": 1.0,
2596
+ "ceval-valid_high_school_chinese": 1.0,
2597
+ "ceval-valid_high_school_geography": 1.0,
2598
+ "ceval-valid_high_school_history": 1.0,
2599
+ "ceval-valid_high_school_mathematics": 1.0,
2600
+ "ceval-valid_high_school_physics": 1.0,
2601
+ "ceval-valid_high_school_politics": 1.0,
2602
+ "ceval-valid_ideological_and_moral_cultivation": 1.0,
2603
+ "ceval-valid_law": 1.0,
2604
+ "ceval-valid_legal_professional": 1.0,
2605
+ "ceval-valid_logic": 1.0,
2606
+ "ceval-valid_mao_zedong_thought": 1.0,
2607
+ "ceval-valid_marxism": 1.0,
2608
+ "ceval-valid_metrology_engineer": 1.0,
2609
+ "ceval-valid_middle_school_biology": 1.0,
2610
+ "ceval-valid_middle_school_chemistry": 1.0,
2611
+ "ceval-valid_middle_school_geography": 1.0,
2612
+ "ceval-valid_middle_school_history": 1.0,
2613
+ "ceval-valid_middle_school_mathematics": 1.0,
2614
+ "ceval-valid_middle_school_physics": 1.0,
2615
+ "ceval-valid_middle_school_politics": 1.0,
2616
+ "ceval-valid_modern_chinese_history": 1.0,
2617
+ "ceval-valid_operating_system": 1.0,
2618
+ "ceval-valid_physician": 1.0,
2619
+ "ceval-valid_plant_protection": 1.0,
2620
+ "ceval-valid_probability_and_statistics": 1.0,
2621
+ "ceval-valid_professional_tour_guide": 1.0,
2622
+ "ceval-valid_sports_science": 1.0,
2623
+ "ceval-valid_tax_accountant": 1.0,
2624
+ "ceval-valid_teacher_qualification": 1.0,
2625
+ "ceval-valid_urban_and_rural_planner": 1.0,
2626
+ "ceval-valid_veterinary_medicine": 1.0
2627
+ },
2628
+ "n-shot": {
2629
+ "ceval-valid": 0,
2630
+ "ceval-valid_accountant": 0,
2631
+ "ceval-valid_advanced_mathematics": 0,
2632
+ "ceval-valid_art_studies": 0,
2633
+ "ceval-valid_basic_medicine": 0,
2634
+ "ceval-valid_business_administration": 0,
2635
+ "ceval-valid_chinese_language_and_literature": 0,
2636
+ "ceval-valid_civil_servant": 0,
2637
+ "ceval-valid_clinical_medicine": 0,
2638
+ "ceval-valid_college_chemistry": 0,
2639
+ "ceval-valid_college_economics": 0,
2640
+ "ceval-valid_college_physics": 0,
2641
+ "ceval-valid_college_programming": 0,
2642
+ "ceval-valid_computer_architecture": 0,
2643
+ "ceval-valid_computer_network": 0,
2644
+ "ceval-valid_discrete_mathematics": 0,
2645
+ "ceval-valid_education_science": 0,
2646
+ "ceval-valid_electrical_engineer": 0,
2647
+ "ceval-valid_environmental_impact_assessment_engineer": 0,
2648
+ "ceval-valid_fire_engineer": 0,
2649
+ "ceval-valid_high_school_biology": 0,
2650
+ "ceval-valid_high_school_chemistry": 0,
2651
+ "ceval-valid_high_school_chinese": 0,
2652
+ "ceval-valid_high_school_geography": 0,
2653
+ "ceval-valid_high_school_history": 0,
2654
+ "ceval-valid_high_school_mathematics": 0,
2655
+ "ceval-valid_high_school_physics": 0,
2656
+ "ceval-valid_high_school_politics": 0,
2657
+ "ceval-valid_ideological_and_moral_cultivation": 0,
2658
+ "ceval-valid_law": 0,
2659
+ "ceval-valid_legal_professional": 0,
2660
+ "ceval-valid_logic": 0,
2661
+ "ceval-valid_mao_zedong_thought": 0,
2662
+ "ceval-valid_marxism": 0,
2663
+ "ceval-valid_metrology_engineer": 0,
2664
+ "ceval-valid_middle_school_biology": 0,
2665
+ "ceval-valid_middle_school_chemistry": 0,
2666
+ "ceval-valid_middle_school_geography": 0,
2667
+ "ceval-valid_middle_school_history": 0,
2668
+ "ceval-valid_middle_school_mathematics": 0,
2669
+ "ceval-valid_middle_school_physics": 0,
2670
+ "ceval-valid_middle_school_politics": 0,
2671
+ "ceval-valid_modern_chinese_history": 0,
2672
+ "ceval-valid_operating_system": 0,
2673
+ "ceval-valid_physician": 0,
2674
+ "ceval-valid_plant_protection": 0,
2675
+ "ceval-valid_probability_and_statistics": 0,
2676
+ "ceval-valid_professional_tour_guide": 0,
2677
+ "ceval-valid_sports_science": 0,
2678
+ "ceval-valid_tax_accountant": 0,
2679
+ "ceval-valid_teacher_qualification": 0,
2680
+ "ceval-valid_urban_and_rural_planner": 0,
2681
+ "ceval-valid_veterinary_medicine": 0
2682
+ },
2683
+ "config": {
2684
+ "model": "hf",
2685
+ "model_args": "pretrained=/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
2686
+ "batch_size": "auto",
2687
+ "batch_sizes": [
2688
+ 64
2689
+ ],
2690
+ "device": null,
2691
+ "use_cache": null,
2692
+ "limit": null,
2693
+ "bootstrap_iters": 100000,
2694
+ "gen_kwargs": null
2695
+ },
2696
+ "git_hash": "3196e907",
2697
+ "date": 1719997012.1121945,
2698
+ "pretty_env_info": "PyTorch version: 2.4.0a0+07cecf4168.nv24.05\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.29.2\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.4.131\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H800\nGPU 1: NVIDIA H800\nGPU 2: NVIDIA H800\nGPU 3: NVIDIA H800\nGPU 4: NVIDIA H800\nGPU 5: NVIDIA H800\nGPU 6: NVIDIA H800\nGPU 7: NVIDIA H800\n\nNvidia driver version: 535.129.03\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8462Y+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 2\nStepping: 8\nFrequency boost: enabled\nCPU max MHz: 2801.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 3 MiB (64 instances)\nL1i cache: 2 MiB (64 instances)\nL2 cache: 128 MiB (64 instances)\nL3 cache: 120 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-31,64-95\nNUMA node1 CPU(s): 32-63,96-127\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer 
sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==1.24.4\n[pip3] onnx==1.16.0\n[pip3] optree==0.11.0\n[pip3] pytorch-quantization==2.1.2\n[pip3] pytorch-triton==3.0.0+989adb9a2\n[pip3] torch==2.4.0a0+07cecf4168.nv24.5\n[pip3] torch-tensorrt==2.4.0a0\n[pip3] torchvision==0.19.0a0\n[conda] Could not collect",
2699
+ "transformers_version": "4.41.2",
2700
+ "upper_git_hash": null
2701
+ }
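
Each of the 52 `ceval-valid_*` subject blocks above is identical except for its `dataset_name` and the Chinese `description` preamble. Below is a minimal sketch of how the `doc_to_text` / `doc_to_target` Jinja fields turn a dataset row into a scored prompt; the example row is hypothetical, while real rows come from the `val` split of `ceval/ceval-exam`.

```python
# Sketch: how one ceval-valid task builds its prompt and target.
# Assumes jinja2 is installed; the doc dict below is a made-up example row
# with the question/A-D/answer columns of ceval/ceval-exam.
from jinja2 import Template

DOC_TO_TEXT = "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:"
DOC_TO_TARGET = "{{['A', 'B', 'C', 'D'].index(answer)}}"

doc = {
    "question": "下列哪项属于单项选择题的特征?",
    "A": "只有一个正确选项",
    "B": "有多个正确选项",
    "C": "没有正确选项",
    "D": "选项数量不固定",
    "answer": "A",
}

prompt = Template(DOC_TO_TEXT).render(**doc)         # zero-shot prompt text
target = int(Template(DOC_TO_TARGET).render(**doc))  # index into doc_to_choice
print(prompt)  # ends with "答案:"; the model then scores choices "A".."D"
print(target)  # 0, i.e. choice "A"
```
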
Results/CMMLU/results.json ADDED
The diff for this file is too large to render. See raw diff
 
Results/Code/Humaneval/evaluation_results.json ADDED
@@ -0,0 +1,46 @@
1
+ {
2
+ "humaneval": {
3
+ "pass@1": 0.25
4
+ },
5
+ "config": {
6
+ "prefix": "",
7
+ "do_sample": false,
8
+ "temperature": 0,
9
+ "top_k": 0,
10
+ "top_p": 0.95,
11
+ "n_samples": 1,
12
+ "eos": "<|endoftext|>",
13
+ "seed": 0,
14
+ "model": "/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
15
+ "modeltype": "causal",
16
+ "peft_model": null,
17
+ "revision": null,
18
+ "use_auth_token": false,
19
+ "trust_remote_code": false,
20
+ "tasks": "humaneval",
21
+ "instruction_tokens": null,
22
+ "batch_size": 1,
23
+ "max_length_generation": 512,
24
+ "precision": "fp32",
25
+ "load_in_8bit": false,
26
+ "load_in_4bit": false,
27
+ "left_padding": false,
28
+ "limit": null,
29
+ "limit_start": 0,
30
+ "save_every_k_tasks": -1,
31
+ "postprocess": true,
32
+ "allow_code_execution": true,
33
+ "generation_only": false,
34
+ "load_generations_path": "/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750/Results/Code/generations_humaneval.json",
35
+ "load_data_path": null,
36
+ "metric_output_path": "/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750/Results/Code/Humaneval/evaluation_results.json",
37
+ "save_generations": false,
38
+ "load_generations_intermediate_paths": null,
39
+ "save_generations_path": "generations.json",
40
+ "save_references": false,
41
+ "save_references_path": "references.json",
42
+ "prompt": "prompt",
43
+ "max_memory_per_gpu": null,
44
+ "check_references": false
45
+ }
46
+ }
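
The `pass@1` of 0.25 above was produced with greedy decoding (`do_sample: false`, `n_samples: 1`), so each of HumanEval's 164 problems contributes a plain pass/fail. The sketch below shows the standard unbiased pass@k estimator (Chen et al., 2021) that this setup reduces to; `pass_at_k` is our name for illustration, not a harness API.

```python
# Unbiased pass@k estimator: pass@k = 1 - C(n-c, k) / C(n, k).
# With n_samples=1 it degenerates to the fraction of problems whose
# single greedy sample passes the unit tests.
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples per problem, c: samples that pass, k: attempt budget."""
    if n - c < k:
        return 1.0  # too few failing samples to fill a k-sample draw
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

assert pass_at_k(1, 1, 1) == 1.0
assert pass_at_k(1, 0, 1) == 0.0
# e.g. 41 of HumanEval's 164 problems passing gives 41/164 = 0.25
```
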
Results/Code/MBPP/evaluation_results.json ADDED
@@ -0,0 +1,46 @@
1
+ {
2
+ "mbpp": {
3
+ "pass@1": 0.224
4
+ },
5
+ "config": {
6
+ "prefix": "",
7
+ "do_sample": false,
8
+ "temperature": 0,
9
+ "top_k": 0,
10
+ "top_p": 0.95,
11
+ "n_samples": 1,
12
+ "eos": "<|endoftext|>",
13
+ "seed": 0,
14
+ "model": "/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
15
+ "modeltype": "causal",
16
+ "peft_model": null,
17
+ "revision": null,
18
+ "use_auth_token": false,
19
+ "trust_remote_code": false,
20
+ "tasks": "mbpp",
21
+ "instruction_tokens": null,
22
+ "batch_size": 1,
23
+ "max_length_generation": 512,
24
+ "precision": "fp32",
25
+ "load_in_8bit": false,
26
+ "load_in_4bit": false,
27
+ "left_padding": false,
28
+ "limit": null,
29
+ "limit_start": 0,
30
+ "save_every_k_tasks": -1,
31
+ "postprocess": true,
32
+ "allow_code_execution": true,
33
+ "generation_only": false,
34
+ "load_generations_path": "/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750/Results/Code/generations_mbpp.json",
35
+ "load_data_path": null,
36
+ "metric_output_path": "/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750/Results/Code/MBPP/evaluation_results.json",
37
+ "save_generations": false,
38
+ "load_generations_intermediate_paths": null,
39
+ "save_generations_path": "generations.json",
40
+ "save_references": false,
41
+ "save_references_path": "references.json",
42
+ "prompt": "prompt",
43
+ "max_memory_per_gpu": null,
44
+ "check_references": false
45
+ }
46
+ }
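
MBPP was run with the same greedy generation settings as HumanEval above, so the two `pass@1` values are directly comparable. A small sketch for collecting both scores from the result files in this upload; the paths follow the `metric_output_path` layout, taken relative to the checkpoint root.

```python
# Sketch: read both code-eval scores from the uploaded result files.
import json

FILES = {
    "humaneval": "Results/Code/Humaneval/evaluation_results.json",
    "mbpp": "Results/Code/MBPP/evaluation_results.json",
}

for task, path in FILES.items():
    with open(path, encoding="utf-8") as f:
        results = json.load(f)
    print(f"{task}: pass@1 = {results[task]['pass@1']}")
# humaneval: pass@1 = 0.25
# mbpp: pass@1 = 0.224
```
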
Results/Code/generations_humaneval.json ADDED
The diff for this file is too large to render. See raw diff
 
Results/Code/generations_mbpp.json ADDED
The diff for this file is too large to render. See raw diff
 
Results/GSM8K/results.json ADDED
@@ -0,0 +1,112 @@
1
+ {
2
+ "results": {
3
+ "gsm8k": {
4
+ "exact_match,strict-match": 0.2934040940106141,
5
+ "exact_match_stderr,strict-match": 0.012541830815461487,
6
+ "exact_match,flexible-extract": 0.2941622441243366,
7
+ "exact_match_stderr,flexible-extract": 0.012551285331470152,
8
+ "alias": "gsm8k"
9
+ }
10
+ },
11
+ "group_subtasks": {
12
+ "gsm8k": []
13
+ },
14
+ "configs": {
15
+ "gsm8k": {
16
+ "task": "gsm8k",
17
+ "group": [
18
+ "math_word_problems"
19
+ ],
20
+ "dataset_path": "gsm8k",
21
+ "dataset_name": "main",
22
+ "training_split": "train",
23
+ "test_split": "test",
24
+ "fewshot_split": "train",
25
+ "doc_to_text": "Question: {{question}}\nAnswer:",
26
+ "doc_to_target": "{{answer}}",
27
+ "description": "",
28
+ "target_delimiter": " ",
29
+ "fewshot_delimiter": "\n\n",
30
+ "num_fewshot": 5,
31
+ "metric_list": [
32
+ {
33
+ "metric": "exact_match",
34
+ "aggregation": "mean",
35
+ "higher_is_better": true,
36
+ "ignore_case": true,
37
+ "ignore_punctuation": false,
38
+ "regexes_to_ignore": [
39
+ ",",
40
+ "\\$",
41
+ "(?s).*#### ",
42
+ "\\.$"
43
+ ]
44
+ }
45
+ ],
46
+ "output_type": "generate_until",
47
+ "generation_kwargs": {
48
+ "until": [
49
+ "Question:",
50
+ "</s>",
51
+ "<|im_end|>"
52
+ ],
53
+ "do_sample": false,
54
+ "temperature": 0.0
55
+ },
56
+ "repeats": 1,
57
+ "filter_list": [
58
+ {
59
+ "name": "strict-match",
60
+ "filter": [
61
+ {
62
+ "function": "regex",
63
+ "regex_pattern": "#### (\\-?[0-9\\.\\,]+)"
64
+ },
65
+ {
66
+ "function": "take_first"
67
+ }
68
+ ]
69
+ },
70
+ {
71
+ "name": "flexible-extract",
72
+ "filter": [
73
+ {
74
+ "function": "regex",
75
+ "group_select": -1,
76
+ "regex_pattern": "(-?[$0-9.,]{2,})|(-?[0-9]+)"
77
+ },
78
+ {
79
+ "function": "take_first"
80
+ }
81
+ ]
82
+ }
83
+ ],
84
+ "should_decontaminate": false,
85
+ "metadata": {
86
+ "version": 3.0
87
+ }
88
+ }
89
+ },
90
+ "versions": {
91
+ "gsm8k": 3.0
92
+ },
93
+ "n-shot": {
94
+ "gsm8k": 5
95
+ },
96
+ "config": {
97
+ "model": "hf",
98
+ "model_args": "pretrained=/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
99
+ "batch_size": "auto",
100
+ "batch_sizes": [],
101
+ "device": null,
102
+ "use_cache": null,
103
+ "limit": null,
104
+ "bootstrap_iters": 100000,
105
+ "gen_kwargs": null
106
+ },
107
+ "git_hash": "3196e907",
108
+ "date": 1719996509.435399,
109
+ "pretty_env_info": "PyTorch version: 2.4.0a0+07cecf4168.nv24.05\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.29.2\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.4.131\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H800\nGPU 1: NVIDIA H800\nGPU 2: NVIDIA H800\nGPU 3: NVIDIA H800\nGPU 4: NVIDIA H800\nGPU 5: NVIDIA H800\nGPU 6: NVIDIA H800\nGPU 7: NVIDIA H800\n\nNvidia driver version: 535.129.03\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8462Y+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 2\nStepping: 8\nFrequency boost: enabled\nCPU max MHz: 2801.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 3 MiB (64 instances)\nL1i cache: 2 MiB (64 instances)\nL2 cache: 128 MiB (64 instances)\nL3 cache: 120 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-31,64-95\nNUMA node1 CPU(s): 32-63,96-127\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer 
sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==1.24.4\n[pip3] onnx==1.16.0\n[pip3] optree==0.11.0\n[pip3] pytorch-quantization==2.1.2\n[pip3] pytorch-triton==3.0.0+989adb9a2\n[pip3] torch==2.4.0a0+07cecf4168.nv24.5\n[pip3] torch-tensorrt==2.4.0a0\n[pip3] torchvision==0.19.0a0\n[conda] Could not collect",
110
+ "transformers_version": "4.41.2",
111
+ "upper_git_hash": null
112
+ }
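
The two `filter_list` entries in the gsm8k config above define how a numeric answer is pulled out of each generation: `strict-match` accepts only the canonical GSM8K `#### <number>` form, while `flexible-extract` takes the last number-like span anywhere in the text (`group_select: -1`). A runnable sketch of both filters on a made-up generation:

```python
# Sketch of the gsm8k answer filters, using the exact regexes from the
# config above. The generation string is invented for illustration.
import re

STRICT = re.compile(r"#### (\-?[0-9\.\,]+)")
FLEXIBLE = re.compile(r"(-?[$0-9.,]{2,})|(-?[0-9]+)")

gen = "Janet has 16 - 3 - 4 = 9 eggs left, so she makes 9 * 2 = 18.\n#### 18"

m = STRICT.search(gen)
print(m.group(1) if m else "[invalid]")         # 18

# group_select: -1 -> keep the last match found in the generation
matches = [m.group(0) for m in FLEXIBLE.finditer(gen)]
print(matches[-1] if matches else "[invalid]")  # 18
```
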
Results/MATHQA/results.json ADDED
@@ -0,0 +1,76 @@
1
+ {
2
+ "results": {
3
+ "mathqa": {
4
+ "acc,none": 0.2917922948073702,
5
+ "acc_stderr,none": 0.00832181053996124,
6
+ "acc_norm,none": 0.2931323283082077,
7
+ "acc_norm_stderr,none": 0.00833300244579757,
8
+ "alias": "mathqa"
9
+ }
10
+ },
11
+ "group_subtasks": {
12
+ "mathqa": []
13
+ },
14
+ "configs": {
15
+ "mathqa": {
16
+ "task": "mathqa",
17
+ "group": [
18
+ "math_word_problems"
19
+ ],
20
+ "dataset_path": "math_qa",
21
+ "training_split": "train",
22
+ "validation_split": "validation",
23
+ "test_split": "test",
24
+ "doc_to_text": "Question: {{Problem}}\nAnswer:",
25
+ "doc_to_target": "{{['a', 'b', 'c', 'd', 'e'].index(correct)}}",
26
+ "doc_to_choice": "def doc_to_choice(doc):\n choices = [\n c[4:].rstrip(\" ,\")\n for c in re.findall(r\"[abcd] \\) .*?, |e \\) .*?$\", doc[\"options\"])\n ]\n return choices\n",
27
+ "description": "",
28
+ "target_delimiter": " ",
29
+ "fewshot_delimiter": "\n\n",
30
+ "num_fewshot": 0,
31
+ "metric_list": [
32
+ {
33
+ "metric": "acc",
34
+ "aggregation": "mean",
35
+ "higher_is_better": true
36
+ },
37
+ {
38
+ "metric": "acc_norm",
39
+ "aggregation": "mean",
40
+ "higher_is_better": true
41
+ }
42
+ ],
43
+ "output_type": "multiple_choice",
44
+ "repeats": 1,
45
+ "should_decontaminate": true,
46
+ "doc_to_decontamination_query": "Question: {{Problem}}\nAnswer:",
47
+ "metadata": {
48
+ "version": 1.0
49
+ }
50
+ }
51
+ },
52
+ "versions": {
53
+ "mathqa": 1.0
54
+ },
55
+ "n-shot": {
56
+ "mathqa": 0
57
+ },
58
+ "config": {
59
+ "model": "hf",
60
+ "model_args": "pretrained=/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
61
+ "batch_size": "auto",
62
+ "batch_sizes": [
63
+ 64
64
+ ],
65
+ "device": null,
66
+ "use_cache": null,
67
+ "limit": null,
68
+ "bootstrap_iters": 100000,
69
+ "gen_kwargs": null
70
+ },
71
+ "git_hash": "3196e907",
72
+ "date": 1719991136.7374132,
73
+ "pretty_env_info": "PyTorch version: 2.4.0a0+07cecf4168.nv24.05\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.29.2\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.4.131\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H800\nGPU 1: NVIDIA H800\nGPU 2: NVIDIA H800\nGPU 3: NVIDIA H800\nGPU 4: NVIDIA H800\nGPU 5: NVIDIA H800\nGPU 6: NVIDIA H800\nGPU 7: NVIDIA H800\n\nNvidia driver version: 535.129.03\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8462Y+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 2\nStepping: 8\nFrequency boost: enabled\nCPU max MHz: 2801.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 3 MiB (64 instances)\nL1i cache: 2 MiB (64 instances)\nL2 cache: 128 MiB (64 instances)\nL3 cache: 120 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-31,64-95\nNUMA node1 CPU(s): 32-63,96-127\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer 
sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==1.24.4\n[pip3] onnx==1.16.0\n[pip3] optree==0.11.0\n[pip3] pytorch-quantization==2.1.2\n[pip3] pytorch-triton==3.0.0+989adb9a2\n[pip3] torch==2.4.0a0+07cecf4168.nv24.5\n[pip3] torch-tensorrt==2.4.0a0\n[pip3] torchvision==0.19.0a0\n[conda] Could not collect",
74
+ "transformers_version": "4.41.2",
75
+ "upper_git_hash": null
76
+ }
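
Unlike the C-Eval tasks, mathqa's `doc_to_choice` is an inline Python function that parses the choice texts out of the dataset's `options` string. Here it is as standalone code, applied to a made-up row in the dataset's `a ) ... , b ) ... , ... , e ) ...` format:

```python
# Sketch: the mathqa choice parser from the config above, plus the
# doc_to_target index lookup. The doc dict is invented for illustration.
import re

def doc_to_choice(doc):
    return [
        c[4:].rstrip(" ,")  # drop the "a ) " prefix and trailing " , "
        for c in re.findall(r"[abcd] \) .*?, |e \) .*?$", doc["options"])
    ]

doc = {"options": "a ) 12 , b ) 15 , c ) 18 , d ) 21 , e ) 24", "correct": "c"}
print(doc_to_choice(doc))                               # ['12', '15', '18', '21', '24']
print(["a", "b", "c", "d", "e"].index(doc["correct"]))  # doc_to_target -> 2
```
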
Results/MMLU/results.json ADDED
@@ -0,0 +1,2723 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
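The hunk below adds a results file in the standard lm-evaluation-harness JSON layout: per-subject scores under "results", aggregate category scores under "groups", the category-to-subject mapping under "group_subtasks", and the full per-task configuration under "configs". A minimal sketch of reading the aggregates back out; the path Results/MMLU/results.json is an assumption, inferred from the Results/&lt;benchmark&gt;/results.json layout used by the other files in this upload:

import json

# Assumed path, following the Results/<benchmark>/results.json layout of this upload.
with open("Results/MMLU/results.json") as f:
    data = json.load(f)

# "groups" holds the aggregate scores; per-subject scores live under "results".
for name, scores in data["groups"].items():
    print(f'{name:22s} acc = {scores["acc,none"]:.4f} ± {scores["acc_stderr,none"]:.4f}')

For this run, the sketch would report an overall mmlu accuracy of 0.4425, with stem the weakest category (0.3964) and social_sciences the strongest (0.5138).
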
+ {
+ "results": {
+ "mmlu": {
+ "acc,none": 0.44245833926791056,
+ "acc_stderr,none": 0.004097625263954979,
+ "alias": "mmlu"
+ },
+ "mmlu_humanities": {
+ "alias": " - humanities",
+ "acc,none": 0.40403825717322,
+ "acc_stderr,none": 0.006942627946513873
+ },
+ "mmlu_formal_logic": {
+ "alias": " - formal_logic",
+ "acc,none": 0.31746031746031744,
+ "acc_stderr,none": 0.041634530313028585
+ },
+ "mmlu_high_school_european_history": {
+ "alias": " - high_school_european_history",
+ "acc,none": 0.5393939393939394,
+ "acc_stderr,none": 0.03892207016552013
+ },
+ "mmlu_high_school_us_history": {
+ "alias": " - high_school_us_history",
+ "acc,none": 0.5343137254901961,
+ "acc_stderr,none": 0.03501038327635897
+ },
+ "mmlu_high_school_world_history": {
+ "alias": " - high_school_world_history",
+ "acc,none": 0.6033755274261603,
+ "acc_stderr,none": 0.03184399873811225
+ },
+ "mmlu_international_law": {
+ "alias": " - international_law",
+ "acc,none": 0.6776859504132231,
+ "acc_stderr,none": 0.042664163633521685
+ },
+ "mmlu_jurisprudence": {
+ "alias": " - jurisprudence",
+ "acc,none": 0.6018518518518519,
+ "acc_stderr,none": 0.04732332615978813
+ },
+ "mmlu_logical_fallacies": {
+ "alias": " - logical_fallacies",
+ "acc,none": 0.4539877300613497,
+ "acc_stderr,none": 0.0391170190467718
+ },
+ "mmlu_moral_disputes": {
+ "alias": " - moral_disputes",
+ "acc,none": 0.5144508670520231,
+ "acc_stderr,none": 0.026907849856282542
+ },
+ "mmlu_moral_scenarios": {
+ "alias": " - moral_scenarios",
+ "acc,none": 0.2424581005586592,
+ "acc_stderr,none": 0.014333522059217892
+ },
+ "mmlu_philosophy": {
+ "alias": " - philosophy",
+ "acc,none": 0.48231511254019294,
+ "acc_stderr,none": 0.02838032284907713
+ },
+ "mmlu_prehistory": {
+ "alias": " - prehistory",
+ "acc,none": 0.4537037037037037,
+ "acc_stderr,none": 0.0277012284685426
+ },
+ "mmlu_professional_law": {
+ "alias": " - professional_law",
+ "acc,none": 0.3396349413298566,
+ "acc_stderr,none": 0.012095592506931974
+ },
+ "mmlu_world_religions": {
+ "alias": " - world_religions",
+ "acc,none": 0.5029239766081871,
+ "acc_stderr,none": 0.03834759370936839
+ },
+ "mmlu_other": {
+ "alias": " - other",
+ "acc,none": 0.4766655938204055,
+ "acc_stderr,none": 0.008846010190081952
+ },
+ "mmlu_business_ethics": {
+ "alias": " - business_ethics",
+ "acc,none": 0.47,
+ "acc_stderr,none": 0.050161355804659205
+ },
+ "mmlu_clinical_knowledge": {
+ "alias": " - clinical_knowledge",
+ "acc,none": 0.4679245283018868,
+ "acc_stderr,none": 0.03070948699255654
+ },
+ "mmlu_college_medicine": {
+ "alias": " - college_medicine",
+ "acc,none": 0.45664739884393063,
+ "acc_stderr,none": 0.03798106566014499
+ },
+ "mmlu_global_facts": {
+ "alias": " - global_facts",
+ "acc,none": 0.32,
+ "acc_stderr,none": 0.04688261722621504
+ },
+ "mmlu_human_aging": {
+ "alias": " - human_aging",
+ "acc,none": 0.4977578475336323,
+ "acc_stderr,none": 0.03355746535223264
+ },
+ "mmlu_management": {
+ "alias": " - management",
+ "acc,none": 0.5825242718446602,
+ "acc_stderr,none": 0.048828405482122375
+ },
+ "mmlu_marketing": {
+ "alias": " - marketing",
+ "acc,none": 0.6623931623931624,
+ "acc_stderr,none": 0.030980296992618558
+ },
+ "mmlu_medical_genetics": {
+ "alias": " - medical_genetics",
+ "acc,none": 0.45,
+ "acc_stderr,none": 0.049999999999999996
+ },
+ "mmlu_miscellaneous": {
+ "alias": " - miscellaneous",
+ "acc,none": 0.49808429118773945,
+ "acc_stderr,none": 0.017879832259026677
+ },
+ "mmlu_nutrition": {
+ "alias": " - nutrition",
+ "acc,none": 0.5555555555555556,
+ "acc_stderr,none": 0.02845263998508801
+ },
+ "mmlu_professional_accounting": {
+ "alias": " - professional_accounting",
+ "acc,none": 0.40070921985815605,
+ "acc_stderr,none": 0.029233465745573086
+ },
+ "mmlu_professional_medicine": {
+ "alias": " - professional_medicine",
+ "acc,none": 0.33088235294117646,
+ "acc_stderr,none": 0.02858270975389843
+ },
+ "mmlu_virology": {
+ "alias": " - virology",
+ "acc,none": 0.39156626506024095,
+ "acc_stderr,none": 0.03799857454479636
+ },
+ "mmlu_social_sciences": {
+ "alias": " - social_sciences",
+ "acc,none": 0.5138121546961326,
+ "acc_stderr,none": 0.008898530360448643
+ },
+ "mmlu_econometrics": {
+ "alias": " - econometrics",
+ "acc,none": 0.32456140350877194,
+ "acc_stderr,none": 0.04404556157374768
+ },
+ "mmlu_high_school_geography": {
+ "alias": " - high_school_geography",
+ "acc,none": 0.5252525252525253,
+ "acc_stderr,none": 0.035578062450873145
+ },
+ "mmlu_high_school_government_and_politics": {
+ "alias": " - high_school_government_and_politics",
+ "acc,none": 0.5647668393782384,
+ "acc_stderr,none": 0.03578038165008586
+ },
+ "mmlu_high_school_macroeconomics": {
+ "alias": " - high_school_macroeconomics",
+ "acc,none": 0.4282051282051282,
+ "acc_stderr,none": 0.02508830145469483
+ },
+ "mmlu_high_school_microeconomics": {
+ "alias": " - high_school_microeconomics",
+ "acc,none": 0.4831932773109244,
+ "acc_stderr,none": 0.03246013680375308
+ },
+ "mmlu_high_school_psychology": {
+ "alias": " - high_school_psychology",
+ "acc,none": 0.5944954128440367,
+ "acc_stderr,none": 0.02105099799189684
+ },
+ "mmlu_human_sexuality": {
+ "alias": " - human_sexuality",
+ "acc,none": 0.549618320610687,
+ "acc_stderr,none": 0.04363643698524779
+ },
+ "mmlu_professional_psychology": {
+ "alias": " - professional_psychology",
+ "acc,none": 0.4297385620915033,
+ "acc_stderr,none": 0.020027122784928554
+ },
+ "mmlu_public_relations": {
+ "alias": " - public_relations",
+ "acc,none": 0.5363636363636364,
+ "acc_stderr,none": 0.047764491623961985
+ },
+ "mmlu_security_studies": {
+ "alias": " - security_studies",
+ "acc,none": 0.5551020408163265,
+ "acc_stderr,none": 0.031814251181977865
+ },
+ "mmlu_sociology": {
+ "alias": " - sociology",
+ "acc,none": 0.6268656716417911,
+ "acc_stderr,none": 0.03419832608176007
+ },
+ "mmlu_us_foreign_policy": {
+ "alias": " - us_foreign_policy",
+ "acc,none": 0.69,
+ "acc_stderr,none": 0.04648231987117316
+ },
+ "mmlu_stem": {
+ "alias": " - stem",
+ "acc,none": 0.3964478274659055,
+ "acc_stderr,none": 0.0086195325195511
+ },
+ "mmlu_abstract_algebra": {
+ "alias": " - abstract_algebra",
+ "acc,none": 0.35,
+ "acc_stderr,none": 0.047937248544110196
+ },
+ "mmlu_anatomy": {
+ "alias": " - anatomy",
+ "acc,none": 0.45925925925925926,
+ "acc_stderr,none": 0.04304979692464242
+ },
+ "mmlu_astronomy": {
+ "alias": " - astronomy",
+ "acc,none": 0.46710526315789475,
+ "acc_stderr,none": 0.04060127035236397
+ },
+ "mmlu_college_biology": {
+ "alias": " - college_biology",
+ "acc,none": 0.3680555555555556,
+ "acc_stderr,none": 0.040329990539607195
+ },
+ "mmlu_college_chemistry": {
+ "alias": " - college_chemistry",
+ "acc,none": 0.29,
+ "acc_stderr,none": 0.045604802157206845
+ },
+ "mmlu_college_computer_science": {
+ "alias": " - college_computer_science",
+ "acc,none": 0.37,
+ "acc_stderr,none": 0.048523658709391
+ },
+ "mmlu_college_mathematics": {
+ "alias": " - college_mathematics",
+ "acc,none": 0.26,
+ "acc_stderr,none": 0.044084400227680794
+ },
+ "mmlu_college_physics": {
+ "alias": " - college_physics",
+ "acc,none": 0.2549019607843137,
+ "acc_stderr,none": 0.04336432707993176
+ },
+ "mmlu_computer_security": {
+ "alias": " - computer_security",
+ "acc,none": 0.51,
+ "acc_stderr,none": 0.05024183937956913
+ },
+ "mmlu_conceptual_physics": {
+ "alias": " - conceptual_physics",
+ "acc,none": 0.3829787234042553,
+ "acc_stderr,none": 0.03177821250236922
+ },
+ "mmlu_electrical_engineering": {
+ "alias": " - electrical_engineering",
+ "acc,none": 0.5310344827586206,
+ "acc_stderr,none": 0.04158632762097828
+ },
+ "mmlu_elementary_mathematics": {
+ "alias": " - elementary_mathematics",
+ "acc,none": 0.4021164021164021,
+ "acc_stderr,none": 0.025253032554997695
+ },
+ "mmlu_high_school_biology": {
+ "alias": " - high_school_biology",
+ "acc,none": 0.47419354838709676,
+ "acc_stderr,none": 0.02840609505765332
+ },
+ "mmlu_high_school_chemistry": {
+ "alias": " - high_school_chemistry",
+ "acc,none": 0.43842364532019706,
+ "acc_stderr,none": 0.03491207857486519
+ },
+ "mmlu_high_school_computer_science": {
+ "alias": " - high_school_computer_science",
+ "acc,none": 0.58,
+ "acc_stderr,none": 0.049604496374885836
+ },
+ "mmlu_high_school_mathematics": {
+ "alias": " - high_school_mathematics",
+ "acc,none": 0.34444444444444444,
+ "acc_stderr,none": 0.028972648884844267
+ },
+ "mmlu_high_school_physics": {
+ "alias": " - high_school_physics",
+ "acc,none": 0.2781456953642384,
+ "acc_stderr,none": 0.03658603262763743
+ },
+ "mmlu_high_school_statistics": {
+ "alias": " - high_school_statistics",
+ "acc,none": 0.35648148148148145,
+ "acc_stderr,none": 0.03266478331527272
+ },
+ "mmlu_machine_learning": {
+ "alias": " - machine_learning",
+ "acc,none": 0.3125,
+ "acc_stderr,none": 0.043994650575715215
+ }
+ },
+ "groups": {
+ "mmlu": {
+ "acc,none": 0.44245833926791056,
+ "acc_stderr,none": 0.004097625263954979,
+ "alias": "mmlu"
+ },
+ "mmlu_humanities": {
+ "alias": " - humanities",
+ "acc,none": 0.40403825717322,
+ "acc_stderr,none": 0.006942627946513873
+ },
+ "mmlu_other": {
+ "alias": " - other",
+ "acc,none": 0.4766655938204055,
+ "acc_stderr,none": 0.008846010190081952
+ },
+ "mmlu_social_sciences": {
+ "alias": " - social_sciences",
+ "acc,none": 0.5138121546961326,
+ "acc_stderr,none": 0.008898530360448643
+ },
+ "mmlu_stem": {
+ "alias": " - stem",
+ "acc,none": 0.3964478274659055,
+ "acc_stderr,none": 0.0086195325195511
+ }
+ },
+ "group_subtasks": {
+ "mmlu_stem": [
+ "mmlu_high_school_mathematics",
+ "mmlu_high_school_computer_science",
+ "mmlu_high_school_chemistry",
+ "mmlu_electrical_engineering",
+ "mmlu_college_mathematics",
+ "mmlu_college_chemistry",
+ "mmlu_college_biology",
+ "mmlu_astronomy",
+ "mmlu_high_school_physics",
+ "mmlu_computer_security",
+ "mmlu_college_computer_science",
+ "mmlu_anatomy",
+ "mmlu_abstract_algebra",
+ "mmlu_high_school_biology",
+ "mmlu_machine_learning",
+ "mmlu_high_school_statistics",
+ "mmlu_elementary_mathematics",
+ "mmlu_conceptual_physics",
+ "mmlu_college_physics"
+ ],
+ "mmlu_other": [
+ "mmlu_marketing",
+ "mmlu_management",
+ "mmlu_human_aging",
+ "mmlu_global_facts",
+ "mmlu_business_ethics",
+ "mmlu_miscellaneous",
+ "mmlu_clinical_knowledge",
+ "mmlu_professional_accounting",
+ "mmlu_nutrition",
+ "mmlu_college_medicine",
+ "mmlu_virology",
+ "mmlu_professional_medicine",
+ "mmlu_medical_genetics"
+ ],
+ "mmlu_social_sciences": [
+ "mmlu_sociology",
+ "mmlu_econometrics",
+ "mmlu_us_foreign_policy",
+ "mmlu_security_studies",
+ "mmlu_high_school_macroeconomics",
+ "mmlu_public_relations",
+ "mmlu_professional_psychology",
+ "mmlu_human_sexuality",
+ "mmlu_high_school_psychology",
+ "mmlu_high_school_microeconomics",
+ "mmlu_high_school_government_and_politics",
+ "mmlu_high_school_geography"
+ ],
+ "mmlu_humanities": [
+ "mmlu_professional_law",
+ "mmlu_prehistory",
+ "mmlu_philosophy",
+ "mmlu_formal_logic",
+ "mmlu_moral_scenarios",
+ "mmlu_high_school_world_history",
+ "mmlu_high_school_european_history",
+ "mmlu_world_religions",
+ "mmlu_moral_disputes",
+ "mmlu_international_law",
+ "mmlu_high_school_us_history",
+ "mmlu_jurisprudence",
+ "mmlu_logical_fallacies"
+ ],
+ "mmlu": [
+ "mmlu_humanities",
+ "mmlu_social_sciences",
+ "mmlu_other",
+ "mmlu_stem"
+ ]
+ },
+ "configs": {
+ "mmlu_abstract_algebra": {
+ "task": "mmlu_abstract_algebra",
+ "task_alias": "abstract_algebra",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "abstract_algebra",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about abstract algebra.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_anatomy": {
+ "task": "mmlu_anatomy",
+ "task_alias": "anatomy",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "anatomy",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about anatomy.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_astronomy": {
+ "task": "mmlu_astronomy",
+ "task_alias": "astronomy",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "astronomy",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about astronomy.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_business_ethics": {
+ "task": "mmlu_business_ethics",
+ "task_alias": "business_ethics",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "business_ethics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about business ethics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_clinical_knowledge": {
+ "task": "mmlu_clinical_knowledge",
+ "task_alias": "clinical_knowledge",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "clinical_knowledge",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about clinical knowledge.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_biology": {
+ "task": "mmlu_college_biology",
+ "task_alias": "college_biology",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "college_biology",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college biology.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_chemistry": {
+ "task": "mmlu_college_chemistry",
+ "task_alias": "college_chemistry",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "college_chemistry",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college chemistry.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_computer_science": {
+ "task": "mmlu_college_computer_science",
+ "task_alias": "college_computer_science",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "college_computer_science",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college computer science.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_mathematics": {
+ "task": "mmlu_college_mathematics",
+ "task_alias": "college_mathematics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "college_mathematics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college mathematics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_medicine": {
+ "task": "mmlu_college_medicine",
+ "task_alias": "college_medicine",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "college_medicine",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college medicine.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_physics": {
+ "task": "mmlu_college_physics",
+ "task_alias": "college_physics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "college_physics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college physics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_computer_security": {
+ "task": "mmlu_computer_security",
+ "task_alias": "computer_security",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "computer_security",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about computer security.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_conceptual_physics": {
+ "task": "mmlu_conceptual_physics",
+ "task_alias": "conceptual_physics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "conceptual_physics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about conceptual physics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_econometrics": {
+ "task": "mmlu_econometrics",
+ "task_alias": "econometrics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "econometrics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about econometrics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_electrical_engineering": {
+ "task": "mmlu_electrical_engineering",
+ "task_alias": "electrical_engineering",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "electrical_engineering",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about electrical engineering.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_elementary_mathematics": {
+ "task": "mmlu_elementary_mathematics",
+ "task_alias": "elementary_mathematics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "elementary_mathematics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about elementary mathematics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_formal_logic": {
+ "task": "mmlu_formal_logic",
+ "task_alias": "formal_logic",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "formal_logic",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about formal logic.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_global_facts": {
+ "task": "mmlu_global_facts",
+ "task_alias": "global_facts",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "global_facts",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about global facts.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_biology": {
+ "task": "mmlu_high_school_biology",
+ "task_alias": "high_school_biology",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_biology",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school biology.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_chemistry": {
+ "task": "mmlu_high_school_chemistry",
+ "task_alias": "high_school_chemistry",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_chemistry",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school chemistry.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_computer_science": {
+ "task": "mmlu_high_school_computer_science",
+ "task_alias": "high_school_computer_science",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_computer_science",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school computer science.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_european_history": {
+ "task": "mmlu_high_school_european_history",
+ "task_alias": "high_school_european_history",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_european_history",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school european history.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_geography": {
+ "task": "mmlu_high_school_geography",
+ "task_alias": "high_school_geography",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_geography",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school geography.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_government_and_politics": {
+ "task": "mmlu_high_school_government_and_politics",
+ "task_alias": "high_school_government_and_politics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_government_and_politics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school government and politics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_macroeconomics": {
+ "task": "mmlu_high_school_macroeconomics",
+ "task_alias": "high_school_macroeconomics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_macroeconomics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school macroeconomics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_mathematics": {
+ "task": "mmlu_high_school_mathematics",
+ "task_alias": "high_school_mathematics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_mathematics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school mathematics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_microeconomics": {
+ "task": "mmlu_high_school_microeconomics",
+ "task_alias": "high_school_microeconomics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_microeconomics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school microeconomics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_physics": {
+ "task": "mmlu_high_school_physics",
+ "task_alias": "high_school_physics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_physics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school physics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_psychology": {
+ "task": "mmlu_high_school_psychology",
+ "task_alias": "high_school_psychology",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_psychology",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school psychology.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_statistics": {
+ "task": "mmlu_high_school_statistics",
+ "task_alias": "high_school_statistics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_statistics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school statistics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_us_history": {
+ "task": "mmlu_high_school_us_history",
+ "task_alias": "high_school_us_history",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_us_history",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school us history.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_world_history": {
+ "task": "mmlu_high_school_world_history",
+ "task_alias": "high_school_world_history",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "high_school_world_history",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school world history.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_human_aging": {
+ "task": "mmlu_human_aging",
+ "task_alias": "human_aging",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "human_aging",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about human aging.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_human_sexuality": {
+ "task": "mmlu_human_sexuality",
+ "task_alias": "human_sexuality",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "human_sexuality",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about human sexuality.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_international_law": {
+ "task": "mmlu_international_law",
+ "task_alias": "international_law",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "international_law",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about international law.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_jurisprudence": {
+ "task": "mmlu_jurisprudence",
+ "task_alias": "jurisprudence",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "jurisprudence",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about jurisprudence.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_logical_fallacies": {
+ "task": "mmlu_logical_fallacies",
+ "task_alias": "logical_fallacies",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "logical_fallacies",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about logical fallacies.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_machine_learning": {
+ "task": "mmlu_machine_learning",
+ "task_alias": "machine_learning",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "machine_learning",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about machine learning.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_management": {
+ "task": "mmlu_management",
+ "task_alias": "management",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "management",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about management.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_marketing": {
+ "task": "mmlu_marketing",
+ "task_alias": "marketing",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "marketing",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about marketing.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_medical_genetics": {
+ "task": "mmlu_medical_genetics",
+ "task_alias": "medical_genetics",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "medical_genetics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about medical genetics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 0,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_miscellaneous": {
+ "task": "mmlu_miscellaneous",
+ "task_alias": "miscellaneous",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "hails/mmlu_no_train",
+ "dataset_name": "miscellaneous",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about miscellaneous.\n\n",
1991
+ "target_delimiter": " ",
1992
+ "fewshot_delimiter": "\n\n",
1993
+ "fewshot_config": {
1994
+ "sampler": "first_n"
1995
+ },
1996
+ "num_fewshot": 0,
1997
+ "metric_list": [
1998
+ {
1999
+ "metric": "acc",
2000
+ "aggregation": "mean",
2001
+ "higher_is_better": true
2002
+ }
2003
+ ],
2004
+ "output_type": "multiple_choice",
2005
+ "repeats": 1,
2006
+ "should_decontaminate": false,
2007
+ "metadata": {
2008
+ "version": 0.0
2009
+ }
2010
+ },
2011
+ "mmlu_moral_disputes": {
2012
+ "task": "mmlu_moral_disputes",
2013
+ "task_alias": "moral_disputes",
2014
+ "group": "mmlu_humanities",
2015
+ "group_alias": "humanities",
2016
+ "dataset_path": "hails/mmlu_no_train",
2017
+ "dataset_name": "moral_disputes",
2018
+ "test_split": "test",
2019
+ "fewshot_split": "dev",
2020
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2021
+ "doc_to_target": "answer",
2022
+ "doc_to_choice": [
2023
+ "A",
2024
+ "B",
2025
+ "C",
2026
+ "D"
2027
+ ],
2028
+ "description": "The following are multiple choice questions (with answers) about moral disputes.\n\n",
2029
+ "target_delimiter": " ",
2030
+ "fewshot_delimiter": "\n\n",
2031
+ "fewshot_config": {
2032
+ "sampler": "first_n"
2033
+ },
2034
+ "num_fewshot": 0,
2035
+ "metric_list": [
2036
+ {
2037
+ "metric": "acc",
2038
+ "aggregation": "mean",
2039
+ "higher_is_better": true
2040
+ }
2041
+ ],
2042
+ "output_type": "multiple_choice",
2043
+ "repeats": 1,
2044
+ "should_decontaminate": false,
2045
+ "metadata": {
2046
+ "version": 0.0
2047
+ }
2048
+ },
2049
+ "mmlu_moral_scenarios": {
2050
+ "task": "mmlu_moral_scenarios",
2051
+ "task_alias": "moral_scenarios",
2052
+ "group": "mmlu_humanities",
2053
+ "group_alias": "humanities",
2054
+ "dataset_path": "hails/mmlu_no_train",
2055
+ "dataset_name": "moral_scenarios",
2056
+ "test_split": "test",
2057
+ "fewshot_split": "dev",
2058
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2059
+ "doc_to_target": "answer",
2060
+ "doc_to_choice": [
2061
+ "A",
2062
+ "B",
2063
+ "C",
2064
+ "D"
2065
+ ],
2066
+ "description": "The following are multiple choice questions (with answers) about moral scenarios.\n\n",
2067
+ "target_delimiter": " ",
2068
+ "fewshot_delimiter": "\n\n",
2069
+ "fewshot_config": {
2070
+ "sampler": "first_n"
2071
+ },
2072
+ "num_fewshot": 0,
2073
+ "metric_list": [
2074
+ {
2075
+ "metric": "acc",
2076
+ "aggregation": "mean",
2077
+ "higher_is_better": true
2078
+ }
2079
+ ],
2080
+ "output_type": "multiple_choice",
2081
+ "repeats": 1,
2082
+ "should_decontaminate": false,
2083
+ "metadata": {
2084
+ "version": 0.0
2085
+ }
2086
+ },
2087
+ "mmlu_nutrition": {
2088
+ "task": "mmlu_nutrition",
2089
+ "task_alias": "nutrition",
2090
+ "group": "mmlu_other",
2091
+ "group_alias": "other",
2092
+ "dataset_path": "hails/mmlu_no_train",
2093
+ "dataset_name": "nutrition",
2094
+ "test_split": "test",
2095
+ "fewshot_split": "dev",
2096
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2097
+ "doc_to_target": "answer",
2098
+ "doc_to_choice": [
2099
+ "A",
2100
+ "B",
2101
+ "C",
2102
+ "D"
2103
+ ],
2104
+ "description": "The following are multiple choice questions (with answers) about nutrition.\n\n",
2105
+ "target_delimiter": " ",
2106
+ "fewshot_delimiter": "\n\n",
2107
+ "fewshot_config": {
2108
+ "sampler": "first_n"
2109
+ },
2110
+ "num_fewshot": 0,
2111
+ "metric_list": [
2112
+ {
2113
+ "metric": "acc",
2114
+ "aggregation": "mean",
2115
+ "higher_is_better": true
2116
+ }
2117
+ ],
2118
+ "output_type": "multiple_choice",
2119
+ "repeats": 1,
2120
+ "should_decontaminate": false,
2121
+ "metadata": {
2122
+ "version": 0.0
2123
+ }
2124
+ },
2125
+ "mmlu_philosophy": {
2126
+ "task": "mmlu_philosophy",
2127
+ "task_alias": "philosophy",
2128
+ "group": "mmlu_humanities",
2129
+ "group_alias": "humanities",
2130
+ "dataset_path": "hails/mmlu_no_train",
2131
+ "dataset_name": "philosophy",
2132
+ "test_split": "test",
2133
+ "fewshot_split": "dev",
2134
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2135
+ "doc_to_target": "answer",
2136
+ "doc_to_choice": [
2137
+ "A",
2138
+ "B",
2139
+ "C",
2140
+ "D"
2141
+ ],
2142
+ "description": "The following are multiple choice questions (with answers) about philosophy.\n\n",
2143
+ "target_delimiter": " ",
2144
+ "fewshot_delimiter": "\n\n",
2145
+ "fewshot_config": {
2146
+ "sampler": "first_n"
2147
+ },
2148
+ "num_fewshot": 0,
2149
+ "metric_list": [
2150
+ {
2151
+ "metric": "acc",
2152
+ "aggregation": "mean",
2153
+ "higher_is_better": true
2154
+ }
2155
+ ],
2156
+ "output_type": "multiple_choice",
2157
+ "repeats": 1,
2158
+ "should_decontaminate": false,
2159
+ "metadata": {
2160
+ "version": 0.0
2161
+ }
2162
+ },
2163
+ "mmlu_prehistory": {
2164
+ "task": "mmlu_prehistory",
2165
+ "task_alias": "prehistory",
2166
+ "group": "mmlu_humanities",
2167
+ "group_alias": "humanities",
2168
+ "dataset_path": "hails/mmlu_no_train",
2169
+ "dataset_name": "prehistory",
2170
+ "test_split": "test",
2171
+ "fewshot_split": "dev",
2172
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2173
+ "doc_to_target": "answer",
2174
+ "doc_to_choice": [
2175
+ "A",
2176
+ "B",
2177
+ "C",
2178
+ "D"
2179
+ ],
2180
+ "description": "The following are multiple choice questions (with answers) about prehistory.\n\n",
2181
+ "target_delimiter": " ",
2182
+ "fewshot_delimiter": "\n\n",
2183
+ "fewshot_config": {
2184
+ "sampler": "first_n"
2185
+ },
2186
+ "num_fewshot": 0,
2187
+ "metric_list": [
2188
+ {
2189
+ "metric": "acc",
2190
+ "aggregation": "mean",
2191
+ "higher_is_better": true
2192
+ }
2193
+ ],
2194
+ "output_type": "multiple_choice",
2195
+ "repeats": 1,
2196
+ "should_decontaminate": false,
2197
+ "metadata": {
2198
+ "version": 0.0
2199
+ }
2200
+ },
2201
+ "mmlu_professional_accounting": {
2202
+ "task": "mmlu_professional_accounting",
2203
+ "task_alias": "professional_accounting",
2204
+ "group": "mmlu_other",
2205
+ "group_alias": "other",
2206
+ "dataset_path": "hails/mmlu_no_train",
2207
+ "dataset_name": "professional_accounting",
2208
+ "test_split": "test",
2209
+ "fewshot_split": "dev",
2210
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2211
+ "doc_to_target": "answer",
2212
+ "doc_to_choice": [
2213
+ "A",
2214
+ "B",
2215
+ "C",
2216
+ "D"
2217
+ ],
2218
+ "description": "The following are multiple choice questions (with answers) about professional accounting.\n\n",
2219
+ "target_delimiter": " ",
2220
+ "fewshot_delimiter": "\n\n",
2221
+ "fewshot_config": {
2222
+ "sampler": "first_n"
2223
+ },
2224
+ "num_fewshot": 0,
2225
+ "metric_list": [
2226
+ {
2227
+ "metric": "acc",
2228
+ "aggregation": "mean",
2229
+ "higher_is_better": true
2230
+ }
2231
+ ],
2232
+ "output_type": "multiple_choice",
2233
+ "repeats": 1,
2234
+ "should_decontaminate": false,
2235
+ "metadata": {
2236
+ "version": 0.0
2237
+ }
2238
+ },
2239
+ "mmlu_professional_law": {
2240
+ "task": "mmlu_professional_law",
2241
+ "task_alias": "professional_law",
2242
+ "group": "mmlu_humanities",
2243
+ "group_alias": "humanities",
2244
+ "dataset_path": "hails/mmlu_no_train",
2245
+ "dataset_name": "professional_law",
2246
+ "test_split": "test",
2247
+ "fewshot_split": "dev",
2248
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2249
+ "doc_to_target": "answer",
2250
+ "doc_to_choice": [
2251
+ "A",
2252
+ "B",
2253
+ "C",
2254
+ "D"
2255
+ ],
2256
+ "description": "The following are multiple choice questions (with answers) about professional law.\n\n",
2257
+ "target_delimiter": " ",
2258
+ "fewshot_delimiter": "\n\n",
2259
+ "fewshot_config": {
2260
+ "sampler": "first_n"
2261
+ },
2262
+ "num_fewshot": 0,
2263
+ "metric_list": [
2264
+ {
2265
+ "metric": "acc",
2266
+ "aggregation": "mean",
2267
+ "higher_is_better": true
2268
+ }
2269
+ ],
2270
+ "output_type": "multiple_choice",
2271
+ "repeats": 1,
2272
+ "should_decontaminate": false,
2273
+ "metadata": {
2274
+ "version": 0.0
2275
+ }
2276
+ },
2277
+ "mmlu_professional_medicine": {
2278
+ "task": "mmlu_professional_medicine",
2279
+ "task_alias": "professional_medicine",
2280
+ "group": "mmlu_other",
2281
+ "group_alias": "other",
2282
+ "dataset_path": "hails/mmlu_no_train",
2283
+ "dataset_name": "professional_medicine",
2284
+ "test_split": "test",
2285
+ "fewshot_split": "dev",
2286
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2287
+ "doc_to_target": "answer",
2288
+ "doc_to_choice": [
2289
+ "A",
2290
+ "B",
2291
+ "C",
2292
+ "D"
2293
+ ],
2294
+ "description": "The following are multiple choice questions (with answers) about professional medicine.\n\n",
2295
+ "target_delimiter": " ",
2296
+ "fewshot_delimiter": "\n\n",
2297
+ "fewshot_config": {
2298
+ "sampler": "first_n"
2299
+ },
2300
+ "num_fewshot": 0,
2301
+ "metric_list": [
2302
+ {
2303
+ "metric": "acc",
2304
+ "aggregation": "mean",
2305
+ "higher_is_better": true
2306
+ }
2307
+ ],
2308
+ "output_type": "multiple_choice",
2309
+ "repeats": 1,
2310
+ "should_decontaminate": false,
2311
+ "metadata": {
2312
+ "version": 0.0
2313
+ }
2314
+ },
2315
+ "mmlu_professional_psychology": {
2316
+ "task": "mmlu_professional_psychology",
2317
+ "task_alias": "professional_psychology",
2318
+ "group": "mmlu_social_sciences",
2319
+ "group_alias": "social_sciences",
2320
+ "dataset_path": "hails/mmlu_no_train",
2321
+ "dataset_name": "professional_psychology",
2322
+ "test_split": "test",
2323
+ "fewshot_split": "dev",
2324
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2325
+ "doc_to_target": "answer",
2326
+ "doc_to_choice": [
2327
+ "A",
2328
+ "B",
2329
+ "C",
2330
+ "D"
2331
+ ],
2332
+ "description": "The following are multiple choice questions (with answers) about professional psychology.\n\n",
2333
+ "target_delimiter": " ",
2334
+ "fewshot_delimiter": "\n\n",
2335
+ "fewshot_config": {
2336
+ "sampler": "first_n"
2337
+ },
2338
+ "num_fewshot": 0,
2339
+ "metric_list": [
2340
+ {
2341
+ "metric": "acc",
2342
+ "aggregation": "mean",
2343
+ "higher_is_better": true
2344
+ }
2345
+ ],
2346
+ "output_type": "multiple_choice",
2347
+ "repeats": 1,
2348
+ "should_decontaminate": false,
2349
+ "metadata": {
2350
+ "version": 0.0
2351
+ }
2352
+ },
2353
+ "mmlu_public_relations": {
2354
+ "task": "mmlu_public_relations",
2355
+ "task_alias": "public_relations",
2356
+ "group": "mmlu_social_sciences",
2357
+ "group_alias": "social_sciences",
2358
+ "dataset_path": "hails/mmlu_no_train",
2359
+ "dataset_name": "public_relations",
2360
+ "test_split": "test",
2361
+ "fewshot_split": "dev",
2362
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2363
+ "doc_to_target": "answer",
2364
+ "doc_to_choice": [
2365
+ "A",
2366
+ "B",
2367
+ "C",
2368
+ "D"
2369
+ ],
2370
+ "description": "The following are multiple choice questions (with answers) about public relations.\n\n",
2371
+ "target_delimiter": " ",
2372
+ "fewshot_delimiter": "\n\n",
2373
+ "fewshot_config": {
2374
+ "sampler": "first_n"
2375
+ },
2376
+ "num_fewshot": 0,
2377
+ "metric_list": [
2378
+ {
2379
+ "metric": "acc",
2380
+ "aggregation": "mean",
2381
+ "higher_is_better": true
2382
+ }
2383
+ ],
2384
+ "output_type": "multiple_choice",
2385
+ "repeats": 1,
2386
+ "should_decontaminate": false,
2387
+ "metadata": {
2388
+ "version": 0.0
2389
+ }
2390
+ },
2391
+ "mmlu_security_studies": {
2392
+ "task": "mmlu_security_studies",
2393
+ "task_alias": "security_studies",
2394
+ "group": "mmlu_social_sciences",
2395
+ "group_alias": "social_sciences",
2396
+ "dataset_path": "hails/mmlu_no_train",
2397
+ "dataset_name": "security_studies",
2398
+ "test_split": "test",
2399
+ "fewshot_split": "dev",
2400
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2401
+ "doc_to_target": "answer",
2402
+ "doc_to_choice": [
2403
+ "A",
2404
+ "B",
2405
+ "C",
2406
+ "D"
2407
+ ],
2408
+ "description": "The following are multiple choice questions (with answers) about security studies.\n\n",
2409
+ "target_delimiter": " ",
2410
+ "fewshot_delimiter": "\n\n",
2411
+ "fewshot_config": {
2412
+ "sampler": "first_n"
2413
+ },
2414
+ "num_fewshot": 0,
2415
+ "metric_list": [
2416
+ {
2417
+ "metric": "acc",
2418
+ "aggregation": "mean",
2419
+ "higher_is_better": true
2420
+ }
2421
+ ],
2422
+ "output_type": "multiple_choice",
2423
+ "repeats": 1,
2424
+ "should_decontaminate": false,
2425
+ "metadata": {
2426
+ "version": 0.0
2427
+ }
2428
+ },
2429
+ "mmlu_sociology": {
2430
+ "task": "mmlu_sociology",
2431
+ "task_alias": "sociology",
2432
+ "group": "mmlu_social_sciences",
2433
+ "group_alias": "social_sciences",
2434
+ "dataset_path": "hails/mmlu_no_train",
2435
+ "dataset_name": "sociology",
2436
+ "test_split": "test",
2437
+ "fewshot_split": "dev",
2438
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2439
+ "doc_to_target": "answer",
2440
+ "doc_to_choice": [
2441
+ "A",
2442
+ "B",
2443
+ "C",
2444
+ "D"
2445
+ ],
2446
+ "description": "The following are multiple choice questions (with answers) about sociology.\n\n",
2447
+ "target_delimiter": " ",
2448
+ "fewshot_delimiter": "\n\n",
2449
+ "fewshot_config": {
2450
+ "sampler": "first_n"
2451
+ },
2452
+ "num_fewshot": 0,
2453
+ "metric_list": [
2454
+ {
2455
+ "metric": "acc",
2456
+ "aggregation": "mean",
2457
+ "higher_is_better": true
2458
+ }
2459
+ ],
2460
+ "output_type": "multiple_choice",
2461
+ "repeats": 1,
2462
+ "should_decontaminate": false,
2463
+ "metadata": {
2464
+ "version": 0.0
2465
+ }
2466
+ },
2467
+ "mmlu_us_foreign_policy": {
2468
+ "task": "mmlu_us_foreign_policy",
2469
+ "task_alias": "us_foreign_policy",
2470
+ "group": "mmlu_social_sciences",
2471
+ "group_alias": "social_sciences",
2472
+ "dataset_path": "hails/mmlu_no_train",
2473
+ "dataset_name": "us_foreign_policy",
2474
+ "test_split": "test",
2475
+ "fewshot_split": "dev",
2476
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2477
+ "doc_to_target": "answer",
2478
+ "doc_to_choice": [
2479
+ "A",
2480
+ "B",
2481
+ "C",
2482
+ "D"
2483
+ ],
2484
+ "description": "The following are multiple choice questions (with answers) about us foreign policy.\n\n",
2485
+ "target_delimiter": " ",
2486
+ "fewshot_delimiter": "\n\n",
2487
+ "fewshot_config": {
2488
+ "sampler": "first_n"
2489
+ },
2490
+ "num_fewshot": 0,
2491
+ "metric_list": [
2492
+ {
2493
+ "metric": "acc",
2494
+ "aggregation": "mean",
2495
+ "higher_is_better": true
2496
+ }
2497
+ ],
2498
+ "output_type": "multiple_choice",
2499
+ "repeats": 1,
2500
+ "should_decontaminate": false,
2501
+ "metadata": {
2502
+ "version": 0.0
2503
+ }
2504
+ },
2505
+ "mmlu_virology": {
2506
+ "task": "mmlu_virology",
2507
+ "task_alias": "virology",
2508
+ "group": "mmlu_other",
2509
+ "group_alias": "other",
2510
+ "dataset_path": "hails/mmlu_no_train",
2511
+ "dataset_name": "virology",
2512
+ "test_split": "test",
2513
+ "fewshot_split": "dev",
2514
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2515
+ "doc_to_target": "answer",
2516
+ "doc_to_choice": [
2517
+ "A",
2518
+ "B",
2519
+ "C",
2520
+ "D"
2521
+ ],
2522
+ "description": "The following are multiple choice questions (with answers) about virology.\n\n",
2523
+ "target_delimiter": " ",
2524
+ "fewshot_delimiter": "\n\n",
2525
+ "fewshot_config": {
2526
+ "sampler": "first_n"
2527
+ },
2528
+ "num_fewshot": 0,
2529
+ "metric_list": [
2530
+ {
2531
+ "metric": "acc",
2532
+ "aggregation": "mean",
2533
+ "higher_is_better": true
2534
+ }
2535
+ ],
2536
+ "output_type": "multiple_choice",
2537
+ "repeats": 1,
2538
+ "should_decontaminate": false,
2539
+ "metadata": {
2540
+ "version": 0.0
2541
+ }
2542
+ },
2543
+ "mmlu_world_religions": {
2544
+ "task": "mmlu_world_religions",
2545
+ "task_alias": "world_religions",
2546
+ "group": "mmlu_humanities",
2547
+ "group_alias": "humanities",
2548
+ "dataset_path": "hails/mmlu_no_train",
2549
+ "dataset_name": "world_religions",
2550
+ "test_split": "test",
2551
+ "fewshot_split": "dev",
2552
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2553
+ "doc_to_target": "answer",
2554
+ "doc_to_choice": [
2555
+ "A",
2556
+ "B",
2557
+ "C",
2558
+ "D"
2559
+ ],
2560
+ "description": "The following are multiple choice questions (with answers) about world religions.\n\n",
2561
+ "target_delimiter": " ",
2562
+ "fewshot_delimiter": "\n\n",
2563
+ "fewshot_config": {
2564
+ "sampler": "first_n"
2565
+ },
2566
+ "num_fewshot": 0,
2567
+ "metric_list": [
2568
+ {
2569
+ "metric": "acc",
2570
+ "aggregation": "mean",
2571
+ "higher_is_better": true
2572
+ }
2573
+ ],
2574
+ "output_type": "multiple_choice",
2575
+ "repeats": 1,
2576
+ "should_decontaminate": false,
2577
+ "metadata": {
2578
+ "version": 0.0
2579
+ }
2580
+ }
2581
+ },
2582
+ "versions": {
2583
+ "mmlu_abstract_algebra": 0.0,
2584
+ "mmlu_anatomy": 0.0,
2585
+ "mmlu_astronomy": 0.0,
2586
+ "mmlu_business_ethics": 0.0,
2587
+ "mmlu_clinical_knowledge": 0.0,
2588
+ "mmlu_college_biology": 0.0,
2589
+ "mmlu_college_chemistry": 0.0,
2590
+ "mmlu_college_computer_science": 0.0,
2591
+ "mmlu_college_mathematics": 0.0,
2592
+ "mmlu_college_medicine": 0.0,
2593
+ "mmlu_college_physics": 0.0,
2594
+ "mmlu_computer_security": 0.0,
2595
+ "mmlu_conceptual_physics": 0.0,
2596
+ "mmlu_econometrics": 0.0,
2597
+ "mmlu_electrical_engineering": 0.0,
2598
+ "mmlu_elementary_mathematics": 0.0,
2599
+ "mmlu_formal_logic": 0.0,
2600
+ "mmlu_global_facts": 0.0,
2601
+ "mmlu_high_school_biology": 0.0,
2602
+ "mmlu_high_school_chemistry": 0.0,
2603
+ "mmlu_high_school_computer_science": 0.0,
2604
+ "mmlu_high_school_european_history": 0.0,
2605
+ "mmlu_high_school_geography": 0.0,
2606
+ "mmlu_high_school_government_and_politics": 0.0,
2607
+ "mmlu_high_school_macroeconomics": 0.0,
2608
+ "mmlu_high_school_mathematics": 0.0,
2609
+ "mmlu_high_school_microeconomics": 0.0,
2610
+ "mmlu_high_school_physics": 0.0,
2611
+ "mmlu_high_school_psychology": 0.0,
2612
+ "mmlu_high_school_statistics": 0.0,
2613
+ "mmlu_high_school_us_history": 0.0,
2614
+ "mmlu_high_school_world_history": 0.0,
2615
+ "mmlu_human_aging": 0.0,
2616
+ "mmlu_human_sexuality": 0.0,
2617
+ "mmlu_international_law": 0.0,
2618
+ "mmlu_jurisprudence": 0.0,
2619
+ "mmlu_logical_fallacies": 0.0,
2620
+ "mmlu_machine_learning": 0.0,
2621
+ "mmlu_management": 0.0,
2622
+ "mmlu_marketing": 0.0,
2623
+ "mmlu_medical_genetics": 0.0,
2624
+ "mmlu_miscellaneous": 0.0,
2625
+ "mmlu_moral_disputes": 0.0,
2626
+ "mmlu_moral_scenarios": 0.0,
2627
+ "mmlu_nutrition": 0.0,
2628
+ "mmlu_philosophy": 0.0,
2629
+ "mmlu_prehistory": 0.0,
2630
+ "mmlu_professional_accounting": 0.0,
2631
+ "mmlu_professional_law": 0.0,
2632
+ "mmlu_professional_medicine": 0.0,
2633
+ "mmlu_professional_psychology": 0.0,
2634
+ "mmlu_public_relations": 0.0,
2635
+ "mmlu_security_studies": 0.0,
2636
+ "mmlu_sociology": 0.0,
2637
+ "mmlu_us_foreign_policy": 0.0,
2638
+ "mmlu_virology": 0.0,
2639
+ "mmlu_world_religions": 0.0
2640
+ },
2641
+ "n-shot": {
2642
+ "mmlu": 0,
2643
+ "mmlu_abstract_algebra": 0,
2644
+ "mmlu_anatomy": 0,
2645
+ "mmlu_astronomy": 0,
2646
+ "mmlu_business_ethics": 0,
2647
+ "mmlu_clinical_knowledge": 0,
2648
+ "mmlu_college_biology": 0,
2649
+ "mmlu_college_chemistry": 0,
2650
+ "mmlu_college_computer_science": 0,
2651
+ "mmlu_college_mathematics": 0,
2652
+ "mmlu_college_medicine": 0,
2653
+ "mmlu_college_physics": 0,
2654
+ "mmlu_computer_security": 0,
2655
+ "mmlu_conceptual_physics": 0,
2656
+ "mmlu_econometrics": 0,
2657
+ "mmlu_electrical_engineering": 0,
2658
+ "mmlu_elementary_mathematics": 0,
2659
+ "mmlu_formal_logic": 0,
2660
+ "mmlu_global_facts": 0,
2661
+ "mmlu_high_school_biology": 0,
2662
+ "mmlu_high_school_chemistry": 0,
2663
+ "mmlu_high_school_computer_science": 0,
2664
+ "mmlu_high_school_european_history": 0,
2665
+ "mmlu_high_school_geography": 0,
2666
+ "mmlu_high_school_government_and_politics": 0,
2667
+ "mmlu_high_school_macroeconomics": 0,
2668
+ "mmlu_high_school_mathematics": 0,
2669
+ "mmlu_high_school_microeconomics": 0,
2670
+ "mmlu_high_school_physics": 0,
2671
+ "mmlu_high_school_psychology": 0,
2672
+ "mmlu_high_school_statistics": 0,
2673
+ "mmlu_high_school_us_history": 0,
2674
+ "mmlu_high_school_world_history": 0,
2675
+ "mmlu_human_aging": 0,
2676
+ "mmlu_human_sexuality": 0,
2677
+ "mmlu_humanities": 0,
2678
+ "mmlu_international_law": 0,
2679
+ "mmlu_jurisprudence": 0,
2680
+ "mmlu_logical_fallacies": 0,
2681
+ "mmlu_machine_learning": 0,
2682
+ "mmlu_management": 0,
2683
+ "mmlu_marketing": 0,
2684
+ "mmlu_medical_genetics": 0,
2685
+ "mmlu_miscellaneous": 0,
2686
+ "mmlu_moral_disputes": 0,
2687
+ "mmlu_moral_scenarios": 0,
2688
+ "mmlu_nutrition": 0,
2689
+ "mmlu_other": 0,
2690
+ "mmlu_philosophy": 0,
2691
+ "mmlu_prehistory": 0,
2692
+ "mmlu_professional_accounting": 0,
2693
+ "mmlu_professional_law": 0,
2694
+ "mmlu_professional_medicine": 0,
2695
+ "mmlu_professional_psychology": 0,
2696
+ "mmlu_public_relations": 0,
2697
+ "mmlu_security_studies": 0,
2698
+ "mmlu_social_sciences": 0,
2699
+ "mmlu_sociology": 0,
2700
+ "mmlu_stem": 0,
2701
+ "mmlu_us_foreign_policy": 0,
2702
+ "mmlu_virology": 0,
2703
+ "mmlu_world_religions": 0
2704
+ },
2705
+ "config": {
2706
+ "model": "hf",
2707
+ "model_args": "pretrained=/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
2708
+ "batch_size": "auto",
2709
+ "batch_sizes": [
2710
+ 32
2711
+ ],
2712
+ "device": null,
2713
+ "use_cache": null,
2714
+ "limit": null,
2715
+ "bootstrap_iters": 100000,
2716
+ "gen_kwargs": null
2717
+ },
2718
+ "git_hash": "3196e907",
2719
+ "date": 1719990878.587922,
2720
+ "pretty_env_info": "PyTorch version: 2.4.0a0+07cecf4168.nv24.05\nIs debug build: False\nCUDA used to build PyTorch: 12.4\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.4 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.29.2\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.4.131\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA H800\nGPU 1: NVIDIA H800\nGPU 2: NVIDIA H800\nGPU 3: NVIDIA H800\nGPU 4: NVIDIA H800\nGPU 5: NVIDIA H800\nGPU 6: NVIDIA H800\nGPU 7: NVIDIA H800\n\nNvidia driver version: 535.129.03\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8462Y+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 2\nStepping: 8\nFrequency boost: enabled\nCPU max MHz: 2801.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 3 MiB (64 instances)\nL1i cache: 2 MiB (64 instances)\nL2 cache: 128 MiB (64 instances)\nL3 cache: 120 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-31,64-95\nNUMA node1 CPU(s): 32-63,96-127\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer 
sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==1.24.4\n[pip3] onnx==1.16.0\n[pip3] optree==0.11.0\n[pip3] pytorch-quantization==2.1.2\n[pip3] pytorch-triton==3.0.0+989adb9a2\n[pip3] torch==2.4.0a0+07cecf4168.nv24.5\n[pip3] torch-tensorrt==2.4.0a0\n[pip3] torchvision==0.19.0a0\n[conda] Could not collect",
2721
+ "transformers_version": "4.41.2",
2722
+ "upper_git_hash": null
2723
+ }
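The config block above records how this results file was produced: lm-evaluation-harness (git hash 3196e907) scoring the SFT checkpoint-1750 with the hf backend, zero-shot, auto batch size. A minimal reproduction sketch, assuming a v0.4.x harness whose Python API matches that revision; the path is taken verbatim from the config:

# Reproduction sketch; lm_eval.simple_evaluate is assumed available as in
# lm-evaluation-harness v0.4.x.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/gemini/platform/public/trained_models/sft/Qwen2_0.5B-2024.07.02-label.lingzhichat+finance+account+law-exp.00/checkpoint-1750",
    tasks=["mmlu"],
    num_fewshot=0,
    batch_size="auto",
)
print(results["results"])  # per-task and group accuracies, as serialized above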
added_tokens.json ADDED
@@ -0,0 +1,5 @@
1
+ {
2
+ "<|endoftext|>": 151643,
3
+ "<|im_end|>": 151645,
4
+ "<|im_start|>": 151644
5
+ }
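added_tokens.json pins the three ChatML control tokens to fixed vocabulary IDs. A quick sanity check with transformers; "./Lingzhi-0.5B-chat" is a placeholder for a local clone of this repository:

# Verify the token-to-ID mapping above (placeholder local path).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./Lingzhi-0.5B-chat")
for token, expected_id in [("<|endoftext|>", 151643),
                           ("<|im_start|>", 151644),
                           ("<|im_end|>", 151645)]:
    assert tok.convert_tokens_to_ids(token) == expected_id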
config.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "_name_or_path": "/gemini/platform/public/llm/huggingface/Qwen/Qwen2-0.5B",
3
+ "architectures": [
4
+ "Qwen2ForCausalLM"
5
+ ],
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 151643,
8
+ "eos_token_id": 151643,
9
+ "hidden_act": "silu",
10
+ "hidden_size": 896,
11
+ "initializer_range": 0.02,
12
+ "intermediate_size": 4864,
13
+ "max_position_embeddings": 131072,
14
+ "max_window_layers": 24,
15
+ "model_type": "qwen2",
16
+ "num_attention_heads": 14,
17
+ "num_hidden_layers": 24,
18
+ "num_key_value_heads": 2,
19
+ "rms_norm_eps": 1e-06,
20
+ "rope_theta": 1000000.0,
21
+ "sliding_window": 131072,
22
+ "tie_word_embeddings": true,
23
+ "torch_dtype": "float16",
24
+ "transformers_version": "4.42.2",
25
+ "use_cache": false,
26
+ "use_sliding_window": false,
27
+ "vocab_size": 151936
28
+ }
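config.json describes a 24-layer Qwen2 decoder with hidden size 896, grouped-query attention (14 query heads sharing 2 key/value heads) and tied input/output embeddings; the tying is what keeps the fp16 checkpoint under 1 GB. A loading sketch (placeholder local path):

# Load the architecture defined above and count parameters.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "./Lingzhi-0.5B-chat", torch_dtype=torch.float16
)
# lm_head reuses the 151936 x 896 embedding matrix (tie_word_embeddings=true),
# so the total comes out to roughly 0.49B parameters.
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")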
generation_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "bos_token_id": 151643,
3
+ "eos_token_id": 151643,
4
+ "max_new_tokens": 2048,
5
+ "transformers_version": "4.42.2"
6
+ }
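These generation defaults travel with the model and are applied by model.generate unless overridden; they can also be inspected directly:

# Inspect the defaults above (placeholder local path).
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("./Lingzhi-0.5B-chat")
print(gen_cfg.max_new_tokens, gen_cfg.eos_token_id)  # 2048 151643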
latest ADDED
@@ -0,0 +1 @@
1
+ global_step1750
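"latest" is DeepSpeed's pointer file: it names the global_step1750 subdirectory holding the sharded ZeRO states. Following the usage example documented inside the bundled zero_to_fp32.py (added later in this commit), the shards can be consolidated into a single fp32 state dict:

# Consolidation sketch; run from the checkpoint directory containing "latest".
import subprocess

subprocess.run(["python", "zero_to_fp32.py", ".", "pytorch_model.bin"], check=True)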
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7279f763f24ae41dd763759456de5b051519d36a75a191755aab0d571ea4fdff
3
+ size 988097536
rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:08282b46825aa78d10fe10e3fea89555c5b5a691b261a3ddfd58fcb58370edff
3
+ size 15984
rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbab71d98a3a9a92df82a6bba463947327c3a1bcf35cd9f4f46114641fc42dd9
3
+ size 15984
rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:caac82d57d878d30219a4f9ec289a97ff90c53afc160b968f251b3fd3454b8d8
3
+ size 15984
rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:19762d2d370222b01817da11bbaa6665d542293373186d66f754e7246bb861ed
3
+ size 15984
rng_state_4.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00c7508b346a7d3c5c23392845f1d013331114ade778794b76e919cb3ed5d33e
3
+ size 15984
rng_state_5.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b89de7d14dd20a191f56b74c816ef8b7fe5c171e31efbeadbf321c4539ed68c3
3
+ size 15984
rng_state_6.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c71152053553e6e22d670fbc4fd7550bf8a046b54cad7b71869787986a6a42c
3
+ size 15984
rng_state_7.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b67db12a26a26ffe03d9afc84a43857eb2e5b2fec2dd189653b415f74208190
3
+ size 15984
scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbd46ab5db65c10396897f14810422d3298269ee23ffadb0063f5707da7f5dcc
3
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>"
5
+ ],
6
+ "eos_token": {
7
+ "content": "<|im_end|>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false
12
+ },
13
+ "pad_token": {
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false
19
+ }
20
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,44 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "151643": {
5
+ "content": "<|endoftext|>",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "151644": {
13
+ "content": "<|im_start|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "151645": {
21
+ "content": "<|im_end|>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ }
28
+ },
29
+ "additional_special_tokens": [
30
+ "<|im_start|>",
31
+ "<|im_end|>"
32
+ ],
33
+ "bos_token": null,
34
+ "chat_template": "{% set system_message = 'You are a helpful assistant.' %}{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ '<|im_start|>system\n' + system_message + '<|im_end|>\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\n' + content + '<|im_end|>\n<|im_start|>assistant\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|im_end|>' + '\n' }}{% endif %}{% endfor %}",
35
+ "clean_up_tokenization_spaces": false,
36
+ "eos_token": "<|im_end|>",
37
+ "errors": "replace",
38
+ "model_max_length": 32768,
39
+ "pad_token": "<|endoftext|>",
40
+ "padding_side": "right",
41
+ "split_special_tokens": false,
42
+ "tokenizer_class": "Qwen2Tokenizer",
43
+ "unk_token": null
44
+ }
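The chat_template above implements the ChatML layout: a system block (defaulting to "You are a helpful assistant."), then alternating user/assistant turns, with the template itself appending the <|im_start|>assistant header after every user message. A rendering-and-generation sketch (placeholder path, illustrative settings):

# Render the ChatML prompt defined above and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./Lingzhi-0.5B-chat"  # placeholder for a local clone
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path)

messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(messages, tokenize=False)
# prompt == "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
#           "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))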
trainer_state.json ADDED
@@ -0,0 +1,278 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.1432945499081444,
5
+ "eval_steps": 500,
6
+ "global_step": 1750,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0612369871402327,
13
+ "grad_norm": 0.8098917283898412,
14
+ "learning_rate": 9.996294850025658e-06,
15
+ "loss": 1.0009,
16
+ "step": 50
17
+ },
18
+ {
19
+ "epoch": 0.1224739742804654,
20
+ "grad_norm": 0.8351009158740302,
21
+ "learning_rate": 9.985184891357165e-06,
22
+ "loss": 0.9902,
23
+ "step": 100
24
+ },
25
+ {
26
+ "epoch": 0.1837109614206981,
27
+ "grad_norm": 0.7857771081781793,
28
+ "learning_rate": 9.96668658961975e-06,
29
+ "loss": 0.972,
30
+ "step": 150
31
+ },
32
+ {
33
+ "epoch": 0.2449479485609308,
34
+ "grad_norm": 0.7820326486942377,
35
+ "learning_rate": 9.940827360406297e-06,
36
+ "loss": 0.9727,
37
+ "step": 200
38
+ },
39
+ {
40
+ "epoch": 0.3061849357011635,
41
+ "grad_norm": 0.7529490157497454,
42
+ "learning_rate": 9.907645528645791e-06,
43
+ "loss": 0.9652,
44
+ "step": 250
45
+ },
46
+ {
47
+ "epoch": 0.3674219228413962,
48
+ "grad_norm": 0.8000880788190894,
49
+ "learning_rate": 9.867190271803466e-06,
50
+ "loss": 0.9661,
51
+ "step": 300
52
+ },
53
+ {
54
+ "epoch": 0.4286589099816289,
55
+ "grad_norm": 0.7925477606605517,
56
+ "learning_rate": 9.819521546996864e-06,
57
+ "loss": 0.9599,
58
+ "step": 350
59
+ },
60
+ {
61
+ "epoch": 0.4898958971218616,
62
+ "grad_norm": 0.7624902603510945,
63
+ "learning_rate": 9.764710002135784e-06,
64
+ "loss": 0.9547,
65
+ "step": 400
66
+ },
67
+ {
68
+ "epoch": 0.5511328842620943,
69
+ "grad_norm": 0.7431039272381413,
70
+ "learning_rate": 9.702836871217838e-06,
71
+ "loss": 0.9466,
72
+ "step": 450
73
+ },
74
+ {
75
+ "epoch": 0.612369871402327,
76
+ "grad_norm": 0.7574806145615771,
77
+ "learning_rate": 9.633993853934803e-06,
78
+ "loss": 0.9439,
79
+ "step": 500
80
+ },
81
+ {
82
+ "epoch": 0.6736068585425597,
83
+ "grad_norm": 0.748375839743952,
84
+ "learning_rate": 9.558282979768164e-06,
85
+ "loss": 0.9429,
86
+ "step": 550
87
+ },
88
+ {
89
+ "epoch": 0.7348438456827924,
90
+ "grad_norm": 0.7952459759871126,
91
+ "learning_rate": 9.475816456775313e-06,
92
+ "loss": 0.9366,
93
+ "step": 600
94
+ },
95
+ {
96
+ "epoch": 0.7960808328230251,
97
+ "grad_norm": 0.7782924796375454,
98
+ "learning_rate": 9.386716505290467e-06,
99
+ "loss": 0.9356,
100
+ "step": 650
101
+ },
102
+ {
103
+ "epoch": 0.8573178199632578,
104
+ "grad_norm": 0.7443095452962357,
105
+ "learning_rate": 9.291115176786814e-06,
106
+ "loss": 0.9304,
107
+ "step": 700
108
+ },
109
+ {
110
+ "epoch": 0.9185548071034905,
111
+ "grad_norm": 0.7822224402996899,
112
+ "learning_rate": 9.189154158168293e-06,
113
+ "loss": 0.9339,
114
+ "step": 750
115
+ },
116
+ {
117
+ "epoch": 0.9797917942437232,
118
+ "grad_norm": 0.7817236760492132,
119
+ "learning_rate": 9.08098456178111e-06,
120
+ "loss": 0.9295,
121
+ "step": 800
122
+ },
123
+ {
124
+ "epoch": 1.0410287813839558,
125
+ "grad_norm": 0.7824573951031337,
126
+ "learning_rate": 8.966766701456177e-06,
127
+ "loss": 0.9029,
128
+ "step": 850
129
+ },
130
+ {
131
+ "epoch": 1.1022657685241886,
132
+ "grad_norm": 0.7621342939491088,
133
+ "learning_rate": 8.846669854914395e-06,
134
+ "loss": 0.8871,
135
+ "step": 900
136
+ },
137
+ {
138
+ "epoch": 1.1635027556644213,
139
+ "grad_norm": 0.7431557953761879,
140
+ "learning_rate": 8.720872012886918e-06,
141
+ "loss": 0.882,
142
+ "step": 950
143
+ },
144
+ {
145
+ "epoch": 1.224739742804654,
146
+ "grad_norm": 0.3886245437993066,
147
+ "learning_rate": 8.58955961532221e-06,
148
+ "loss": 0.8864,
149
+ "step": 1000
150
+ },
151
+ {
152
+ "epoch": 1.2859767299448868,
153
+ "grad_norm": 0.7536794211653792,
154
+ "learning_rate": 8.452927275070858e-06,
155
+ "loss": 0.8805,
156
+ "step": 1050
157
+ },
158
+ {
159
+ "epoch": 1.3472137170851193,
160
+ "grad_norm": 0.7577244481174435,
161
+ "learning_rate": 8.311177489457653e-06,
162
+ "loss": 0.8855,
163
+ "step": 1100
164
+ },
165
+ {
166
+ "epoch": 1.408450704225352,
167
+ "grad_norm": 0.777343889968524,
168
+ "learning_rate": 8.164520340168404e-06,
169
+ "loss": 0.8862,
170
+ "step": 1150
171
+ },
172
+ {
173
+ "epoch": 1.4696876913655847,
174
+ "grad_norm": 0.736623306272446,
175
+ "learning_rate": 8.013173181896283e-06,
176
+ "loss": 0.8839,
177
+ "step": 1200
178
+ },
179
+ {
180
+ "epoch": 1.5309246785058175,
181
+ "grad_norm": 0.7451843323741638,
182
+ "learning_rate": 7.857360320209126e-06,
183
+ "loss": 0.8738,
184
+ "step": 1250
185
+ },
186
+ {
187
+ "epoch": 1.5921616656460502,
188
+ "grad_norm": 0.8023889052201314,
189
+ "learning_rate": 7.700553608573257e-06,
190
+ "loss": 0.8846,
191
+ "step": 1300
192
+ },
193
+ {
194
+ "epoch": 1.653398652786283,
195
+ "grad_norm": 0.7595177276389167,
196
+ "learning_rate": 7.536585976390474e-06,
197
+ "loss": 0.8818,
198
+ "step": 1350
199
+ },
200
+ {
201
+ "epoch": 1.7146356399265157,
202
+ "grad_norm": 0.7673121148739883,
203
+ "learning_rate": 7.3688589716215555e-06,
204
+ "loss": 0.8775,
205
+ "step": 1400
206
+ },
207
+ {
208
+ "epoch": 1.7758726270667484,
209
+ "grad_norm": 0.7577794985624852,
210
+ "learning_rate": 7.197621175749467e-06,
211
+ "loss": 0.8738,
212
+ "step": 1450
213
+ },
214
+ {
215
+ "epoch": 1.8371096142069812,
216
+ "grad_norm": 0.7927915981856463,
217
+ "learning_rate": 7.023126373460202e-06,
218
+ "loss": 0.8771,
219
+ "step": 1500
220
+ },
221
+ {
222
+ "epoch": 1.8983466013472137,
223
+ "grad_norm": 0.7595080397715767,
224
+ "learning_rate": 6.849210730229846e-06,
225
+ "loss": 0.8775,
226
+ "step": 1550
227
+ },
228
+ {
229
+ "epoch": 1.9595835884874464,
230
+ "grad_norm": 0.7976605652572579,
231
+ "learning_rate": 6.669034296168855e-06,
232
+ "loss": 0.8751,
233
+ "step": 1600
234
+ },
235
+ {
236
+ "epoch": 2.020820575627679,
237
+ "grad_norm": 0.7958737424052013,
238
+ "learning_rate": 6.486384253156014e-06,
239
+ "loss": 0.8579,
240
+ "step": 1650
241
+ },
242
+ {
243
+ "epoch": 2.0820575627679117,
244
+ "grad_norm": 0.8331069734198607,
245
+ "learning_rate": 6.301531299512195e-06,
246
+ "loss": 0.837,
247
+ "step": 1700
248
+ },
249
+ {
250
+ "epoch": 2.1432945499081444,
251
+ "grad_norm": 0.7511629331953945,
252
+ "learning_rate": 6.11474939840398e-06,
253
+ "loss": 0.8327,
254
+ "step": 1750
255
+ }
256
+ ],
257
+ "logging_steps": 50,
258
+ "max_steps": 4080,
259
+ "num_input_tokens_seen": 0,
260
+ "num_train_epochs": 5,
261
+ "save_steps": 250,
262
+ "stateful_callbacks": {
263
+ "TrainerControl": {
264
+ "args": {
265
+ "should_epoch_stop": false,
266
+ "should_evaluate": false,
267
+ "should_log": false,
268
+ "should_save": true,
269
+ "should_training_stop": false
270
+ },
271
+ "attributes": {}
272
+ }
273
+ },
274
+ "total_flos": 534961795891200.0,
275
+ "train_batch_size": 48,
276
+ "trial_name": null,
277
+ "trial_params": null
278
+ }
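trainer_state.json logs the run every 50 steps up to checkpoint 1750 of 4080 planned steps (epoch ~2.14 of 5): training loss falls from about 1.00 to 0.83 while the learning rate decays from ~1.0e-5 to ~6.1e-6. A small parsing sketch over a local copy of the file:

# Summarize the log_history recorded above.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    print(f"step {entry['step']:>4}  epoch {entry['epoch']:.2f}  "
          f"loss {entry['loss']:.4f}  lr {entry['learning_rate']:.2e}")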
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:177f7284d8988303972d8deb70f69edb70868d890240ad46b306dcc6855a071e
3
+ size 7224
vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
zero_to_fp32.py ADDED
@@ -0,0 +1,604 @@
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright (c) Microsoft Corporation.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+
6
+ # DeepSpeed Team
7
+
8
+ # This script extracts fp32 consolidated weights from ZeRO stage 1, 2 and 3 DeepSpeed checkpoints. It gets
9
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
10
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
11
+ # application.
12
+ #
13
+ # example: python zero_to_fp32.py . pytorch_model.bin
14
+
15
+ import argparse
16
+ import torch
17
+ import glob
18
+ import math
19
+ import os
20
+ import re
21
+ from collections import OrderedDict
22
+ from dataclasses import dataclass
23
+
24
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
25
+ # DeepSpeed data structures it has to be available in the current python environment.
26
+ from deepspeed.utils import logger
27
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
28
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
29
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
30
+
31
+
32
+ @dataclass
33
+ class zero_model_state:
34
+ buffers: dict
35
+ param_shapes: dict
36
+ shared_params: list
37
+ ds_version: int
38
+ frozen_param_shapes: dict
39
+ frozen_param_fragments: dict
40
+
41
+
42
+ debug = 0
43
+
44
+ # load to cpu
45
+ device = torch.device('cpu')
46
+
47
+
48
+ def atoi(text):
49
+ return int(text) if text.isdigit() else text
50
+
51
+
52
+ def natural_keys(text):
53
+ '''
54
+ alist.sort(key=natural_keys) sorts in human order
55
+ http://nedbatchelder.com/blog/200712/human_sorting.html
56
+ (See Toothy's implementation in the comments)
57
+ '''
58
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
59
+
60
+
61
+ def get_model_state_file(checkpoint_dir, zero_stage):
62
+ if not os.path.isdir(checkpoint_dir):
63
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
64
+
65
+ # there should be only one file
66
+ if zero_stage <= 2:
67
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
68
+ elif zero_stage == 3:
69
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
70
+
71
+ if not os.path.exists(file):
72
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
73
+
74
+ return file
75
+
76
+
77
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
78
+ # XXX: need to test that this simple glob rule works for multi-node setup too
79
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
80
+
81
+ if len(ckpt_files) == 0:
82
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
83
+
84
+ return ckpt_files
85
+
86
+
87
+ def get_optim_files(checkpoint_dir):
88
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
89
+
90
+
91
+ def get_model_state_files(checkpoint_dir):
92
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
93
+
94
+
95
+ def parse_model_states(files):
96
+ zero_model_states = []
97
+ for file in files:
98
+ state_dict = torch.load(file, map_location=device)
99
+
100
+ if BUFFER_NAMES not in state_dict:
101
+ raise ValueError(f"{file} is not a model state checkpoint")
102
+ buffer_names = state_dict[BUFFER_NAMES]
103
+ if debug:
104
+ print("Found buffers:", buffer_names)
105
+
106
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
107
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
108
+ param_shapes = state_dict[PARAM_SHAPES]
109
+
110
+ # collect parameters that are included in param_shapes
111
+ param_names = []
112
+ for s in param_shapes:
113
+ for name in s.keys():
114
+ param_names.append(name)
115
+
116
+ # update with frozen parameters
117
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
118
+ if frozen_param_shapes is not None:
119
+ if debug:
120
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
121
+ param_names += list(frozen_param_shapes.keys())
122
+
123
+ # handle shared params
124
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
125
+
126
+ ds_version = state_dict.get(DS_VERSION, None)
127
+
128
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
129
+
130
+ z_model_state = zero_model_state(buffers=buffers,
131
+ param_shapes=param_shapes,
132
+ shared_params=shared_params,
133
+ ds_version=ds_version,
134
+ frozen_param_shapes=frozen_param_shapes,
135
+ frozen_param_fragments=frozen_param_fragments)
136
+ zero_model_states.append(z_model_state)
137
+
138
+ return zero_model_states
139
+
140
+
141
+ def parse_optim_states(files, ds_checkpoint_dir):
142
+
143
+ total_files = len(files)
144
+ state_dicts = []
145
+ for f in files:
146
+ state_dict = torch.load(f, map_location=device)
147
+ # immediately discard the two potentially huge optimizer states, as we only care about the fp32 master weights
148
+ # and also handle the case where it was already removed by another helper script
149
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
150
+ state_dicts.append(state_dict)
151
+
152
+ if not ZERO_STAGE in state_dicts[0][OPTIMIZER_STATE_DICT]:
153
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
154
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
155
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
156
+
157
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
158
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
159
+ # use the max of the partition_count to get the dp world_size.
160
+
161
+ if type(world_size) is list:
162
+ world_size = max(world_size)
163
+
164
+ if world_size != total_files:
165
+ raise ValueError(
166
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
167
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
168
+ )
169
+
170
+ # the groups are named differently in each stage
171
+ if zero_stage <= 2:
172
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
173
+ elif zero_stage == 3:
174
+ fp32_groups_key = FP32_FLAT_GROUPS
175
+ else:
176
+ raise ValueError(f"unknown zero stage {zero_stage}")
177
+
178
+ if zero_stage <= 2:
179
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
180
+ elif zero_stage == 3:
181
+ # if there is more than one param group, there will be multiple flattened tensors - one
182
+ # flattened tensor per group - for simplicity merge them into a single tensor
183
+ #
184
+ # XXX: could make the script more memory efficient for when there are multiple groups - it
185
+ # will require matching the sub-lists of param_shapes for each param group flattened tensor
186
+
187
+ fp32_flat_groups = [
188
+ torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
189
+ ]
190
+
191
+ return zero_stage, world_size, fp32_flat_groups
192
+
193
+
194
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
195
+ """
196
+ Returns fp32 state_dict reconstructed from ds checkpoint
197
+
198
+ Args:
199
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
200
+
201
+ """
202
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
203
+
204
+ optim_files = get_optim_files(ds_checkpoint_dir)
205
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
206
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
207
+
208
+ model_files = get_model_state_files(ds_checkpoint_dir)
209
+
210
+ zero_model_states = parse_model_states(model_files)
211
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
212
+
213
+ if zero_stage <= 2:
214
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
215
+ exclude_frozen_parameters)
216
+ elif zero_stage == 3:
217
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
218
+ exclude_frozen_parameters)
219
+
220
+
221
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
222
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
223
+ return
224
+
225
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
226
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
227
+
228
+ if debug:
229
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
230
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
231
+
232
+ wanted_params = len(frozen_param_shapes)
233
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
234
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
235
+ print(f'Frozen params: Have {avail_numel} numels to process.')
236
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
237
+
238
+ total_params = 0
239
+ total_numel = 0
240
+ for name, shape in frozen_param_shapes.items():
241
+ total_params += 1
242
+ unpartitioned_numel = shape.numel()
243
+ total_numel += unpartitioned_numel
244
+
245
+ state_dict[name] = frozen_param_fragments[name]
246
+
247
+ if debug:
248
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
249
+
250
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
251
+
252
+
253
+ def _has_callable(obj, fn):
254
+ attr = getattr(obj, fn, None)
255
+ return callable(attr)
256
+
257
+
258
+def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+    param_shapes = zero_model_states[0].param_shapes
+
+    # Reconstruction protocol:
+    #
+    # XXX: document this
+
+    if debug:
+        for i in range(world_size):
+            for j in range(len(fp32_flat_groups[0])):
+                print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
+
+    # XXX: memory usage doubles here (zero2)
+    num_param_groups = len(fp32_flat_groups[0])
+    merged_single_partition_of_fp32_groups = []
+    for i in range(num_param_groups):
+        merged_partitions = [sd[i] for sd in fp32_flat_groups]
+        full_single_fp32_vector = torch.cat(merged_partitions, 0)
+        merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
+    avail_numel = sum(
+        [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
+
+    if debug:
+        wanted_params = sum([len(shapes) for shapes in param_shapes])
+        wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
+        # not asserting if there is a mismatch due to possible padding
+        print(f"Have {avail_numel} numels to process.")
+        print(f"Need {wanted_numel} numels in {wanted_params} params.")
+
+    # params
+    # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+    # out-of-core computing solution
+    total_numel = 0
+    total_params = 0
+    for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
+        offset = 0
+        avail_numel = full_single_fp32_vector.numel()
+        for name, shape in shapes.items():
+
+            unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
+            total_numel += unpartitioned_numel
+            total_params += 1
+
+            if debug:
+                print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+            state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
+            offset += unpartitioned_numel
+
+        # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
+        # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
+        # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
+        # live optimizer object, so we are checking that the numbers are within the right range
+        align_to = 2 * world_size
+
+        def zero2_align(x):
+            return align_to * math.ceil(x / align_to)
+
+        if debug:
+            print(f"original offset={offset}, avail_numel={avail_numel}")
+
+        offset = zero2_align(offset)
+        avail_numel = zero2_align(avail_numel)
+
+        if debug:
+            print(f"aligned offset={offset}, avail_numel={avail_numel}")
+
+        # Sanity check
+        if offset != avail_numel:
+            raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+    print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
+
+
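Each merged flat group above is carved back into named parameters with
``narrow(0, offset, numel).view(shape)``, advancing ``offset`` past each param. A toy
sketch of that slicing, on a hypothetical 12-element flat group ::

    import torch

    flat = torch.arange(12, dtype=torch.float32)  # stand-in for one merged fp32 flat group
    shapes = {'w': torch.Size([2, 4]), 'b': torch.Size([4])}

    offset, restored = 0, {}
    for name, shape in shapes.items():
        n = shape.numel()
        restored[name] = flat.narrow(0, offset, n).view(shape)  # zero-copy slice + reshape
        offset += n

    assert restored['w'].shape == (2, 4) and restored['b'].shape == (4,)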
+def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                               exclude_frozen_parameters):
+    state_dict = OrderedDict()
+
+    # buffers
+    buffers = zero_model_states[0].buffers
+    state_dict.update(buffers)
+    if debug:
+        print(f"added {len(buffers)} buffers")
+
+    if not exclude_frozen_parameters:
+        _zero2_merge_frozen_params(state_dict, zero_model_states)
+
+    _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+    # recover shared parameters
+    for pair in zero_model_states[0].shared_params:
+        if pair[1] in state_dict:
+            state_dict[pair[0]] = state_dict[pair[1]]
+
+    return state_dict
+
+
+def zero3_partitioned_param_info(unpartitioned_numel, world_size):
+    remainder = unpartitioned_numel % world_size
+    padding_numel = (world_size - remainder) if remainder else 0
+    partitioned_numel = math.ceil(unpartitioned_numel / world_size)
+    return partitioned_numel, padding_numel
+
+
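Worked example: with ``unpartitioned_numel=10`` and ``world_size=4``, each rank stores
``ceil(10 / 4) = 3`` elements, and the final partition carries ``4 - (10 % 4) = 2``
elements of padding ::

    assert zero3_partitioned_param_info(10, 4) == (3, 2)
    assert zero3_partitioned_param_info(12, 4) == (3, 0)  # divides evenly, no padding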
+def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
+    if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+        return
+
+    if debug:
+        for i in range(world_size):
+            num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
+            print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+    frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+    wanted_params = len(frozen_param_shapes)
+    wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+    avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
+    print(f'Frozen params: Have {avail_numel} numels to process.')
+    print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+    total_params = 0
+    total_numel = 0
+    for name, shape in zero_model_states[0].frozen_param_shapes.items():
+        total_params += 1
+        unpartitioned_numel = shape.numel()
+        total_numel += unpartitioned_numel
+
+        param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
+        state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
+
+        partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+        if debug:
+            print(
+                f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+            )
+
+    print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+    param_shapes = zero_model_states[0].param_shapes
+    avail_numel = fp32_flat_groups[0].numel() * world_size
+    # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
+    # param, re-consolidating each param, while dealing with padding if any
+
+    # merge list of dicts, preserving order
+    param_shapes = {k: v for d in param_shapes for k, v in d.items()}
+
+    if debug:
+        for i in range(world_size):
+            print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
+
+        wanted_params = len(param_shapes)
+        wanted_numel = sum(shape.numel() for shape in param_shapes.values())
+        # not asserting if there is a mismatch due to possible padding
+        avail_numel = fp32_flat_groups[0].numel() * world_size
+        print(f"Trainable params: Have {avail_numel} numels to process.")
+        print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
+
+    # params
+    # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+    # out-of-core computing solution
+    offset = 0
+    total_numel = 0
+    total_params = 0
+    for name, shape in param_shapes.items():
+
+        unpartitioned_numel = shape.numel()
+        total_numel += unpartitioned_numel
+        total_params += 1
+
+        partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+        if debug:
+            print(
+                f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+            )
+
+        # XXX: memory usage doubles here
+        state_dict[name] = torch.cat(
+            tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
+            0).narrow(0, 0, unpartitioned_numel).view(shape)
+        offset += partitioned_numel
+
+    offset *= world_size
+
+    # Sanity check
+    if offset != avail_numel:
+        raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+    print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
+
+
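Under zero3 each rank holds one flat shard covering *all* params, so a param is rebuilt
by concatenating the per-rank slices taken at the same ``offset`` and trimming the tail
padding; the real code then views the result into the param's shape. A toy sketch with
two hypothetical rank shards ::

    import torch

    # one 5-element parameter, world_size=2 -> 3 elements per rank, 1 element of padding
    rank_shards = [torch.tensor([0., 1., 2.]), torch.tensor([3., 4., 0.])]
    unpartitioned_numel, partitioned_numel = 5, 3

    merged = torch.cat(tuple(shard.narrow(0, 0, partitioned_numel) for shard in rank_shards),
                       0).narrow(0, 0, unpartitioned_numel)
    assert merged.tolist() == [0.0, 1.0, 2.0, 3.0, 4.0]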
+def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
+                                               exclude_frozen_parameters):
+    state_dict = OrderedDict()
+
+    # buffers
+    buffers = zero_model_states[0].buffers
+    state_dict.update(buffers)
+    if debug:
+        print(f"added {len(buffers)} buffers")
+
+    if not exclude_frozen_parameters:
+        _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
+
+    _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+    # recover shared parameters
+    for pair in zero_model_states[0].shared_params:
+        if pair[1] in state_dict:
+            state_dict[pair[0]] = state_dict[pair[1]]
+
+    return state_dict
+
+
+def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None, exclude_frozen_parameters=False):
+    """
+    Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
+    ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
+    via a model hub.
+
+    Args:
+        - ``checkpoint_dir``: path to the desired checkpoint folder
+        - ``tag``: checkpoint tag used as a unique identifier for the checkpoint. If not provided, will attempt to load the tag from the ``latest`` file, e.g., ``global_step14``
+        - ``exclude_frozen_parameters``: exclude frozen parameters
+
+    Returns:
+        - pytorch ``state_dict``
+
+    Note: this approach may not work if your application doesn't have sufficient free CPU memory, and
+    you may need to use the offline approach with the ``zero_to_fp32.py`` script that is saved with
+    the checkpoint.
+
+    A typical usage might be ::
+
+        from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+        # do the training and checkpoint saving
+        state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
+        model = model.cpu() # move to cpu
+        model.load_state_dict(state_dict)
+        # submit to model hub or save the model to share with others
+
+    In this example the ``model`` will no longer be usable in the deepspeed context of the same
+    application, i.e. you will need to re-initialize the deepspeed engine, since
+    ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+    If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
+
+    """
+    if tag is None:
+        latest_path = os.path.join(checkpoint_dir, 'latest')
+        if os.path.isfile(latest_path):
+            with open(latest_path, 'r') as fd:
+                tag = fd.read().strip()
+        else:
+            raise ValueError(f"Unable to find 'latest' file at {latest_path}")
+
+    ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
+
+    if not os.path.isdir(ds_checkpoint_dir):
+        raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
+
+    return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
+
+
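The ``latest`` file is a one-line text file naming the tag sub-folder, so the lookup
above expects a layout roughly like this (the file names inside the tag folder vary by
ZeRO stage and world size; this sketch is illustrative only) ::

    checkpoint-12/
        latest           # contains e.g. "global_step14"
        global_step14/
            ...          # per-rank zero checkpoint shards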
+def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None, exclude_frozen_parameters=False):
+    """
+    Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
+    loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
+
+    Args:
+        - ``checkpoint_dir``: path to the desired checkpoint folder (one that contains the tag folder, like ``global_step14``)
+        - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
+        - ``tag``: checkpoint tag used as a unique identifier for the checkpoint. If not provided, will attempt to load the tag from the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+        - ``exclude_frozen_parameters``: exclude frozen parameters
+    """
+
+    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag, exclude_frozen_parameters)
+    print(f"Saving fp32 state dict to {output_file}")
+    torch.save(state_dict, output_file)
+
+
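A typical programmatic use, equivalent to the CLI entry point at the bottom of this
script (the paths are placeholders) ::

    from deepspeed.utils.zero_to_fp32 import convert_zero_checkpoint_to_fp32_state_dict

    convert_zero_checkpoint_to_fp32_state_dict('path/checkpoint-12',
                                               'path/checkpoint-12/pytorch_model.bin')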
+def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
+    """
+    1. Put the provided model on cpu
+    2. Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
+    3. Load it into the provided model
+
+    Args:
+        - ``model``: the model object to update
+        - ``checkpoint_dir``: path to the desired checkpoint folder (one that contains the tag folder, like ``global_step14``)
+        - ``tag``: checkpoint tag used as a unique identifier for the checkpoint. If not provided, will attempt to load the tag from the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+
+    Returns:
+        - ``model``: the modified model
+
+    Make sure you have plenty of CPU memory available before you call this function. If you don't
+    have enough, use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
+    conveniently placed for you in the checkpoint folder.
+
+    A typical usage might be ::
+
+        from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
+        model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
+        # submit to model hub or save the model to share with others
+
+    Note that once this has run, the ``model`` will no longer be usable in the deepspeed context
+    of the same application, i.e. you will need to re-initialize the deepspeed engine, since
+    ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+    """
+    logger.info(f"Extracting fp32 weights")
+    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+
+    logger.info(f"Overwriting model with fp32 weights")
+    model = model.cpu()
+    model.load_state_dict(state_dict, strict=False)
+
+    return model
+
+
+if __name__ == "__main__":
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument("checkpoint_dir",
+                        type=str,
+                        help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
+    parser.add_argument(
+        "output_file",
+        type=str,
+        help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
+    parser.add_argument("-t",
+                        "--tag",
+                        type=str,
+                        default=None,
+                        help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
+    parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
+    parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
+    args = parser.parse_args()
+
+    debug = args.debug
+
+    convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
+                                               args.output_file,
+                                               tag=args.tag,
+                                               exclude_frozen_parameters=args.exclude_frozen_parameters)
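Invoked from the shell against a saved checkpoint folder; the paths below are the same
placeholders used in the argparse help strings above ::

    python zero_to_fp32.py path/checkpoint-12 path/checkpoint-12/pytorch_model.bin
    # pin a specific tag and skip frozen params:
    python zero_to_fp32.py path/checkpoint-12 pytorch_model.bin -t global_step1 --exclude_frozen_parameters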