gagan3012 committed
Commit 64ae580
1 Parent(s): 29d9af0

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - mistral
+ - Oasis
+ pipeline_tag: text-generation
+ ---
+ # Model Card for Oasis
+
+ Mistral-7B-v0.1 model fine-tuned on the UltraFeedback dataset using techniques from the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
+
+ ## Results
+
+ | model_name | Average  | arc_challenge | gsm8k    | hellaswag | mmlu     | truthfulqa_mc2 | winogrande |
+ |:-----------|---------:|--------------:|---------:|----------:|---------:|---------------:|-----------:|
+ | Oasis      | 0.701904 | 0.613481      | 0.741471 | 0.848337  | 0.639652 | 0.602897       | 0.765588   |
+
+ ## Instruction format
+
+ To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token id; subsequent instructions should not. The assistant generation is terminated by the end-of-sentence token id.
+
+
+ E.g.
+ ```
+ text = "<s>[INST] What is your favourite condiment? [/INST]"
+ "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
+ "[INST] Do you have mayonnaise recipes? [/INST]"
+ ```
+
+ This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ device = "cuda"  # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained("Xenon1/Oasis")
+ tokenizer = AutoTokenizer.from_pretrained("Xenon1/Oasis")
+
+ messages = [
+     {"role": "user", "content": "What is your favourite condiment?"},
+     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
+     {"role": "user", "content": "Do you have mayonnaise recipes?"}
+ ]
+
+ encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
+
+ model_inputs = encodeds.to(device)
+ model.to(device)
+
+ generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
+ decoded = tokenizer.batch_decode(generated_ids)
+ print(decoded[0])
+ ```
+
+ ## Model Architecture
+ This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
+ - Grouped-Query Attention
+ - Sliding-Window Attention
+ - Byte-fallback BPE tokenizer
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "/lustre07/scratch/gagan30/arocr/meta-llama/models/FusionNet_7Bx2_MoE_14B",
+   "architectures": [
+     "MixtralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mixtral",
+   "num_attention_heads": 32,
+   "num_experts_per_tok": 2,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "num_local_experts": 2,
+   "output_router_logits": false,
+   "pad_token_id": 2,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 10000.0,
+   "router_aux_loss_coef": 0.001,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.37.1",
+   "unsloth_version": "2024.1",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
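Note that this config declares a two-expert Mixtral-architecture model (`MixtralForCausalLM`, `num_local_experts: 2`, `num_experts_per_tok: 2`) rather than a dense Mistral one. A minimal sketch of inspecting these fields with `transformers` — the `Xenon1/Oasis` repo id is taken from the README above and is an assumption; the recorded `_name_or_path` is a private cluster path:

```python
# Sketch: load and inspect the config above. "Xenon1/Oasis" is assumed;
# substitute a local path to this folder if the repo id differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Xenon1/Oasis")
print(config.model_type)           # "mixtral"
print(config.num_local_experts)    # 2 experts per MoE layer
print(config.num_experts_per_tok)  # 2 experts routed per token
```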
final_checkpoint/README.md ADDED
@@ -0,0 +1,204 @@
+ ---
+ library_name: peft
+ base_model: /lustre07/scratch/gagan30/arocr/meta-llama/models/FusionNet_7Bx2_MoE_14B
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+
+ ### Framework versions
+
+ - PEFT 0.7.2.dev0
final_checkpoint/adapter_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/lustre07/scratch/gagan30/arocr/meta-llama/models/FusionNet_7Bx2_MoE_14B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "k_proj",
+     "o_proj",
+     "w2",
+     "w3",
+     "w1",
+     "gate",
+     "v_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_rslora": false
+ }
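For reference, this adapter config corresponds to a LoRA setup (rank 16, alpha 16, dropout 0.05) applied to the attention projections plus the MoE expert and router weights. A hedged sketch of the equivalent `peft` config object — illustrative only, not the original training script:

```python
# Sketch: a peft LoraConfig matching the adapter_config.json above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,           # LoRA rank
    lora_alpha=16,  # scaling factor
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # attention projections plus MoE expert (w1/w2/w3) and router (gate) weights
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "w1", "w2", "w3", "gate"],
)
```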
final_checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e144f6bac406877bf2b8c4f70ea7497d9332a902fd515f70973ec7711126792
+ size 144806848
final_checkpoint/special_tokens_map.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
final_checkpoint/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
final_checkpoint/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
final_checkpoint/tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "max_length": null,
+   "model_max_length": 255,
+   "pad_to_multiple_of": null,
+   "pad_token": "</s>",
+   "pad_token_type_id": 0,
+   "padding_side": "left",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": true
+ }
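These settings make the tokenizer prepend a BOS token (`add_bos_token: true`), pad on the left, and reuse `</s>` as the pad token. A quick illustrative check, again assuming the `Xenon1/Oasis` repo id from the README:

```python
# Sketch: verify the tokenizer settings declared above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Xenon1/Oasis")  # assumed repo id
print(tokenizer.padding_side)            # "left"
print(tokenizer.pad_token)               # "</s>" (EOS reused for padding)
ids = tokenizer("Hello")["input_ids"]
print(ids[0] == tokenizer.bos_token_id)  # True, since add_bos_token is set
```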
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 2,
+   "transformers_version": "4.37.1"
+ }
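The generation config only pins the special-token ids (BOS 1, EOS/pad 2); all sampling settings are left to the caller. A minimal sketch of the equivalent `transformers` object:

```python
# Sketch: the GenerationConfig equivalent of generation_config.json above.
from transformers import GenerationConfig

gen_config = GenerationConfig(bos_token_id=1, eos_token_id=2, pad_token_id=2)
# Passed as model.generate(..., generation_config=gen_config), together with
# whatever sampling parameters the caller chooses.
```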
model-00001-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d39bd775c8c4cd31b796e15441ab3aadaf34a7ff96349d52f5e3bf450f2c562b
+ size 4993525184
model-00002-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff5892d62b79220b26943795b7e00e8399d1941dc4446bb1ac8c3617fbbaf14f
+ size 4932724792
model-00003-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:784a76f90f5233dd16e1c21605425f43717e6b7e821aff333fae2b51040a8eb1
+ size 4966262448
model-00004-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9ab92f10c3882e57088bb020b6b40fdf7bfe8f78a8648886a467496baf75303
+ size 4966262448
model-00005-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa65a19a07c990e7c1477f7cd7bb3d3c397513f42348e9e72ed7b7327c147e1c
+ size 4932741456
model-00006-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:839ddf0d959179cd83c1d2c8550f333a3e3a61c6df052e1dfcd789d2acc291a5
+ size 966812864
model.safetensors.index.json ADDED
@@ -0,0 +1,426 @@
+ {
+   "metadata": {
+     "total_size": 25758277632
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00006-of-00006.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.experts.0.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.experts.0.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.experts.1.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.experts.1.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.block_sparse_moe.gate.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.experts.0.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.experts.0.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.experts.1.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.experts.1.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.block_sparse_moe.gate.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.experts.0.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.experts.1.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.experts.1.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.experts.0.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.experts.1.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.experts.1.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.experts.1.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.experts.1.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.experts.0.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.experts.1.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.experts.1.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.block_sparse_moe.gate.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.experts.0.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.experts.1.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.experts.1.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.block_sparse_moe.gate.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.experts.0.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.experts.1.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.experts.1.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.block_sparse_moe.gate.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.experts.0.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.experts.1.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.experts.1.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.block_sparse_moe.gate.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.experts.0.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.experts.1.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.experts.1.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.block_sparse_moe.gate.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.experts.0.w1.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.experts.0.w2.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.18.block_sparse_moe.gate.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.experts.0.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.experts.0.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.block_sparse_moe.gate.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.experts.0.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.experts.0.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.experts.1.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.experts.1.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.block_sparse_moe.gate.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.experts.0.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.experts.0.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.block_sparse_moe.gate.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.experts.0.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.experts.0.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.block_sparse_moe.gate.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.experts.0.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.experts.0.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.block_sparse_moe.gate.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.experts.0.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.experts.0.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.block_sparse_moe.gate.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.experts.0.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.experts.0.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.experts.1.w1.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.experts.1.w2.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.24.block_sparse_moe.gate.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.experts.0.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.experts.1.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.experts.0.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.experts.1.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.experts.0.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.experts.1.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.experts.0.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.experts.1.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.experts.0.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.experts.1.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.experts.0.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.experts.0.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.experts.1.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.experts.1.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.block_sparse_moe.gate.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.experts.0.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.experts.1.w1.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.experts.0.w1.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.experts.0.w3.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.experts.1.w1.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.experts.1.w3.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.block_sparse_moe.gate.weight": "model-00005-of-00006.safetensors",
+     "model.layers.31.input_layernorm.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.experts.0.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.experts.0.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.experts.1.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.experts.1.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.block_sparse_moe.gate.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.experts.0.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.experts.0.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.experts.1.w1.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.experts.1.w2.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.block_sparse_moe.gate.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.input_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.experts.0.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.experts.1.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.experts.1.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.input_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.experts.0.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.experts.1.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.experts.1.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.experts.0.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.experts.1.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.experts.1.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.experts.0.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.experts.0.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.experts.1.w1.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.experts.1.w2.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.block_sparse_moe.gate.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
+     "model.norm.weight": "model-00006-of-00006.safetensors"
+   }
+ }
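The index maps each parameter name to the shard that stores it (about 25.8 GB across the six shards above); `from_pretrained` consumes it automatically, but individual tensors can also be resolved directly. A minimal sketch using `safetensors`:

```python
# Sketch: resolve one tensor through the weight map above without loading
# the whole model. Assumes the shard files sit in the current directory.
import json

from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

name = "model.norm.weight"
shard = weight_map[name]  # -> "model-00006-of-00006.safetensors"
with safe_open(shard, framework="pt") as shard_file:
    tensor = shard_file.get_tensor(name)
print(tensor.shape)
```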
results_arc.json ADDED
@@ -0,0 +1,68 @@
+ {
+   "results": {
+     "arc_challenge": {
+       "acc,none": 0.5614334470989761,
+       "acc_stderr,none": 0.014500682618212865,
+       "acc_norm,none": 0.613481228668942,
+       "acc_norm_stderr,none": 0.014230084761910473,
+       "alias": "arc_challenge"
+     }
+   },
+   "configs": {
+     "arc_challenge": {
+       "task": "arc_challenge",
+       "group": [
+         "ai2_arc"
+       ],
+       "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/ai2_arc",
+       "dataset_name": "ARC-Challenge",
+       "training_split": "train",
+       "validation_split": "validation",
+       "test_split": "test",
+       "doc_to_text": "Question: {{question}}\nAnswer:",
+       "doc_to_target": "{{choices.label.index(answerKey)}}",
+       "doc_to_choice": "{{choices.text}}",
+       "description": "",
+       "target_delimiter": " ",
+       "fewshot_delimiter": "\n\n",
+       "num_fewshot": 25,
+       "metric_list": [
+         {
+           "metric": "acc",
+           "aggregation": "mean",
+           "higher_is_better": true
+         },
+         {
+           "metric": "acc_norm",
+           "aggregation": "mean",
+           "higher_is_better": true
+         }
+       ],
+       "output_type": "multiple_choice",
+       "repeats": 1,
+       "should_decontaminate": true,
+       "doc_to_decontamination_query": "Question: {{question}}\nAnswer:",
+       "metadata": {
+         "version": 1.0
+       }
+     }
+   },
+   "versions": {
+     "arc_challenge": 1.0
+   },
+   "n-shot": {
+     "arc_challenge": 25
+   },
+   "config": {
+     "model": "vllm",
+     "model_args": "pretrained=/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/Oasis,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=4096",
+     "batch_size": "auto:128",
+     "batch_sizes": [],
+     "device": "cuda",
+     "use_cache": "/lustre07/scratch/gagan30/arocr/cache/",
+     "limit": null,
+     "bootstrap_iters": 100000,
+     "gen_kwargs": null
+   },
+   "git_hash": null
+ }
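The `config` block records how this result was produced: lm-evaluation-harness with a vLLM backend, 25-shot ARC-Challenge. A hedged sketch of reproducing the run via the harness's v0.4 Python API — the `pretrained` value below is an assumption, since the recorded one is a private cluster path:

```python
# Sketch: re-run the 25-shot arc_challenge evaluation recorded above with
# lm-evaluation-harness (v0.4 API). Replace the pretrained value with a
# local copy of this checkpoint or its hub repo id.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=Xenon1/Oasis,dtype=auto,gpu_memory_utilization=0.9,max_model_len=4096",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```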
results_gsm8k.json ADDED
@@ -0,0 +1,88 @@
+ {
+ "results": {
+ "gsm8k": {
+ "exact_match,get-answer": 0.7414708112206216,
+ "exact_match_stderr,get-answer": 0.012059911372516116,
+ "alias": "gsm8k"
+ }
+ },
+ "configs": {
+ "gsm8k": {
+ "task": "gsm8k",
+ "group": [
+ "math_word_problems"
+ ],
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/gsm8k",
+ "dataset_name": "main",
+ "training_split": "train",
+ "test_split": "test",
+ "fewshot_split": "train",
+ "doc_to_text": "Question: {{question}}\nAnswer:",
+ "doc_to_target": "{{answer}}",
+ "description": "",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": false,
+ "regexes_to_ignore": [
+ ",",
+ "\\$",
+ "(?s).*#### "
+ ]
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "\n\n",
+ "Question:"
+ ],
+ "do_sample": false,
+ "temperature": 0.0
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "get-answer",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "#### (\\-?[0-9\\.\\,]+)"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.0
+ }
+ }
+ },
+ "versions": {
+ "gsm8k": 2.0
+ },
+ "n-shot": {
+ "gsm8k": 5
+ },
+ "config": {
+ "model": "vllm",
+ "model_args": "pretrained=/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/Oasis,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=4096",
+ "batch_size": "auto:128",
+ "batch_sizes": [],
+ "device": "cuda",
+ "use_cache": "/lustre07/scratch/gagan30/arocr/cache/",
+ "limit": null,
+ "bootstrap_iters": 100000,
+ "gen_kwargs": null
+ },
+ "git_hash": null
+ }
results_hellaswag.json ADDED
@@ -0,0 +1,66 @@
+ {
+ "results": {
+ "hellaswag": {
+ "acc,none": 0.6504680342561243,
+ "acc_stderr,none": 0.004758476684324035,
+ "acc_norm,none": 0.8483369846644094,
+ "acc_norm_stderr,none": 0.0035796087435066605,
+ "alias": "hellaswag"
+ }
+ },
+ "configs": {
+ "hellaswag": {
+ "task": "hellaswag",
+ "group": [
+ "multiple_choice"
+ ],
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/hellaswag",
+ "training_split": "train",
+ "validation_split": "validation",
+ "process_docs": "def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:\n    def _process_doc(doc):\n        ctx = doc[\"ctx_a\"] + \" \" + doc[\"ctx_b\"].capitalize()\n        out_doc = {\n            \"query\": preprocess(doc[\"activity_label\"] + \": \" + ctx),\n            \"choices\": [preprocess(ending) for ending in doc[\"endings\"]],\n            \"gold\": int(doc[\"label\"]),\n        }\n        return out_doc\n\n    return dataset.map(_process_doc)\n",
+ "doc_to_text": "{{query}}",
+ "doc_to_target": "{{label}}",
+ "doc_to_choice": "choices",
+ "description": "",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "num_fewshot": 10,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ },
+ {
+ "metric": "acc_norm",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 1.0
+ }
+ }
+ },
+ "versions": {
+ "hellaswag": 1.0
+ },
+ "n-shot": {
+ "hellaswag": 10
+ },
+ "config": {
+ "model": "vllm",
+ "model_args": "pretrained=/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/Oasis,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=4096",
+ "batch_size": "auto:128",
+ "batch_sizes": [],
+ "device": "cuda",
+ "use_cache": "/lustre07/scratch/gagan30/arocr/cache/",
+ "limit": null,
+ "bootstrap_iters": 100000,
+ "gen_kwargs": null
+ },
+ "git_hash": null
+ }
results_mmlu.json ADDED
@@ -0,0 +1,2649 @@
+ {
+ "results": {
+ "mmlu": {
+ "acc,none": 0.6396524711579548,
+ "acc_stderr,none": 0.0038282718412418733,
+ "alias": "mmlu"
+ },
+ "mmlu_humanities": {
+ "alias": " - humanities",
+ "acc,none": 0.6,
+ "acc_stderr,none": 0.006778341124606213
+ },
+ "mmlu_formal_logic": {
+ "alias": " - formal_logic",
+ "acc,none": 0.4444444444444444,
+ "acc_stderr,none": 0.04444444444444449
+ },
+ "mmlu_high_school_european_history": {
+ "alias": " - high_school_european_history",
+ "acc,none": 0.7696969696969697,
+ "acc_stderr,none": 0.0328766675860349
+ },
+ "mmlu_high_school_us_history": {
+ "alias": " - high_school_us_history",
+ "acc,none": 0.8480392156862745,
+ "acc_stderr,none": 0.025195658428931792
+ },
+ "mmlu_high_school_world_history": {
+ "alias": " - high_school_world_history",
+ "acc,none": 0.8016877637130801,
+ "acc_stderr,none": 0.02595502084162111
+ },
+ "mmlu_international_law": {
+ "alias": " - international_law",
+ "acc,none": 0.7768595041322314,
+ "acc_stderr,none": 0.03800754475228733
+ },
+ "mmlu_jurisprudence": {
+ "alias": " - jurisprudence",
+ "acc,none": 0.7685185185185185,
+ "acc_stderr,none": 0.04077494709252627
+ },
+ "mmlu_logical_fallacies": {
+ "alias": " - logical_fallacies",
+ "acc,none": 0.7668711656441718,
+ "acc_stderr,none": 0.033220157957767414
+ },
+ "mmlu_moral_disputes": {
+ "alias": " - moral_disputes",
+ "acc,none": 0.7283236994219653,
+ "acc_stderr,none": 0.023948512905468348
+ },
+ "mmlu_moral_scenarios": {
+ "alias": " - moral_scenarios",
+ "acc,none": 0.4681564245810056,
+ "acc_stderr,none": 0.016688553415612213
+ },
+ "mmlu_philosophy": {
+ "alias": " - philosophy",
+ "acc,none": 0.707395498392283,
+ "acc_stderr,none": 0.02583989833487798
+ },
+ "mmlu_prehistory": {
+ "alias": " - prehistory",
+ "acc,none": 0.7438271604938271,
+ "acc_stderr,none": 0.0242885336377261
+ },
+ "mmlu_professional_law": {
+ "alias": " - professional_law",
+ "acc,none": 0.4556714471968709,
+ "acc_stderr,none": 0.012719949543032204
+ },
+ "mmlu_world_religions": {
+ "alias": " - world_religions",
+ "acc,none": 0.8421052631578947,
+ "acc_stderr,none": 0.027966785859160872
+ },
+ "mmlu_other": {
+ "alias": " - other",
+ "acc,none": 0.7051818474412617,
+ "acc_stderr,none": 0.00782572992790983
+ },
+ "mmlu_business_ethics": {
+ "alias": " - business_ethics",
+ "acc,none": 0.65,
+ "acc_stderr,none": 0.047937248544110196
+ },
+ "mmlu_clinical_knowledge": {
+ "alias": " - clinical_knowledge",
+ "acc,none": 0.7169811320754716,
+ "acc_stderr,none": 0.027724236492700918
+ },
+ "mmlu_college_medicine": {
+ "alias": " - college_medicine",
+ "acc,none": 0.6589595375722543,
+ "acc_stderr,none": 0.036146654241808254
+ },
+ "mmlu_global_facts": {
+ "alias": " - global_facts",
+ "acc,none": 0.32,
+ "acc_stderr,none": 0.046882617226215034
+ },
+ "mmlu_human_aging": {
+ "alias": " - human_aging",
+ "acc,none": 0.695067264573991,
+ "acc_stderr,none": 0.03089861088247752
+ },
+ "mmlu_management": {
+ "alias": " - management",
+ "acc,none": 0.7864077669902912,
+ "acc_stderr,none": 0.04058042015646034
+ },
+ "mmlu_marketing": {
+ "alias": " - marketing",
+ "acc,none": 0.8846153846153846,
+ "acc_stderr,none": 0.020930193185179333
+ },
+ "mmlu_medical_genetics": {
+ "alias": " - medical_genetics",
+ "acc,none": 0.72,
+ "acc_stderr,none": 0.04512608598542127
+ },
+ "mmlu_miscellaneous": {
+ "alias": " - miscellaneous",
+ "acc,none": 0.8314176245210728,
+ "acc_stderr,none": 0.013387895731543602
+ },
+ "mmlu_nutrition": {
+ "alias": " - nutrition",
+ "acc,none": 0.7222222222222222,
+ "acc_stderr,none": 0.02564686309713791
+ },
+ "mmlu_professional_accounting": {
+ "alias": " - professional_accounting",
+ "acc,none": 0.4645390070921986,
+ "acc_stderr,none": 0.029752389657427054
+ },
+ "mmlu_professional_medicine": {
+ "alias": " - professional_medicine",
+ "acc,none": 0.6654411764705882,
+ "acc_stderr,none": 0.02866199620233531
+ },
+ "mmlu_virology": {
+ "alias": " - virology",
+ "acc,none": 0.5481927710843374,
+ "acc_stderr,none": 0.03874371556587953
+ },
+ "mmlu_social_sciences": {
+ "alias": " - social_sciences",
+ "acc,none": 0.745206369840754,
+ "acc_stderr,none": 0.0076970085276856625
+ },
+ "mmlu_econometrics": {
+ "alias": " - econometrics",
+ "acc,none": 0.5087719298245614,
+ "acc_stderr,none": 0.04702880432049615
+ },
+ "mmlu_high_school_geography": {
+ "alias": " - high_school_geography",
+ "acc,none": 0.7929292929292929,
+ "acc_stderr,none": 0.02886977846026707
+ },
+ "mmlu_high_school_government_and_politics": {
+ "alias": " - high_school_government_and_politics",
+ "acc,none": 0.8808290155440415,
+ "acc_stderr,none": 0.023381935348121437
+ },
+ "mmlu_high_school_macroeconomics": {
+ "alias": " - high_school_macroeconomics",
+ "acc,none": 0.6743589743589744,
+ "acc_stderr,none": 0.02375966576741229
+ },
+ "mmlu_high_school_microeconomics": {
+ "alias": " - high_school_microeconomics",
+ "acc,none": 0.6932773109243697,
+ "acc_stderr,none": 0.029953823891887037
+ },
+ "mmlu_high_school_psychology": {
+ "alias": " - high_school_psychology",
+ "acc,none": 0.8422018348623853,
+ "acc_stderr,none": 0.01563002297009244
+ },
+ "mmlu_human_sexuality": {
+ "alias": " - human_sexuality",
+ "acc,none": 0.7862595419847328,
+ "acc_stderr,none": 0.0359546161177469
+ },
+ "mmlu_professional_psychology": {
+ "alias": " - professional_psychology",
+ "acc,none": 0.6715686274509803,
+ "acc_stderr,none": 0.018999707383162662
+ },
+ "mmlu_public_relations": {
+ "alias": " - public_relations",
+ "acc,none": 0.6545454545454545,
+ "acc_stderr,none": 0.04554619617541054
+ },
+ "mmlu_security_studies": {
+ "alias": " - security_studies",
+ "acc,none": 0.7387755102040816,
+ "acc_stderr,none": 0.028123429335142787
+ },
+ "mmlu_sociology": {
+ "alias": " - sociology",
+ "acc,none": 0.8507462686567164,
+ "acc_stderr,none": 0.025196929874827093
+ },
+ "mmlu_us_foreign_policy": {
+ "alias": " - us_foreign_policy",
+ "acc,none": 0.83,
+ "acc_stderr,none": 0.0377525168068637
+ },
+ "mmlu_stem": {
+ "alias": " - stem",
+ "acc,none": 0.5312400888043134,
+ "acc_stderr,none": 0.00851347107140647
+ },
+ "mmlu_abstract_algebra": {
+ "alias": " - abstract_algebra",
+ "acc,none": 0.3,
+ "acc_stderr,none": 0.046056618647183814
+ },
+ "mmlu_anatomy": {
+ "alias": " - anatomy",
+ "acc,none": 0.6444444444444445,
+ "acc_stderr,none": 0.04135176749720386
+ },
+ "mmlu_astronomy": {
+ "alias": " - astronomy",
+ "acc,none": 0.6907894736842105,
+ "acc_stderr,none": 0.03761070869867479
+ },
+ "mmlu_college_biology": {
+ "alias": " - college_biology",
+ "acc,none": 0.7291666666666666,
+ "acc_stderr,none": 0.03716177437566017
+ },
+ "mmlu_college_chemistry": {
+ "alias": " - college_chemistry",
+ "acc,none": 0.45,
+ "acc_stderr,none": 0.049999999999999996
+ },
+ "mmlu_college_computer_science": {
+ "alias": " - college_computer_science",
+ "acc,none": 0.55,
+ "acc_stderr,none": 0.04999999999999999
+ },
+ "mmlu_college_mathematics": {
+ "alias": " - college_mathematics",
+ "acc,none": 0.32,
+ "acc_stderr,none": 0.04688261722621505
+ },
+ "mmlu_college_physics": {
+ "alias": " - college_physics",
+ "acc,none": 0.4411764705882353,
+ "acc_stderr,none": 0.049406356306056595
+ },
+ "mmlu_computer_security": {
+ "alias": " - computer_security",
+ "acc,none": 0.76,
+ "acc_stderr,none": 0.04292346959909282
+ },
+ "mmlu_conceptual_physics": {
+ "alias": " - conceptual_physics",
+ "acc,none": 0.6042553191489362,
+ "acc_stderr,none": 0.03196758697835362
+ },
+ "mmlu_electrical_engineering": {
+ "alias": " - electrical_engineering",
+ "acc,none": 0.5586206896551724,
+ "acc_stderr,none": 0.04137931034482757
+ },
+ "mmlu_elementary_mathematics": {
+ "alias": " - elementary_mathematics",
+ "acc,none": 0.41798941798941797,
+ "acc_stderr,none": 0.02540255550326091
+ },
+ "mmlu_high_school_biology": {
+ "alias": " - high_school_biology",
+ "acc,none": 0.7741935483870968,
+ "acc_stderr,none": 0.023785577884181012
+ },
+ "mmlu_high_school_chemistry": {
+ "alias": " - high_school_chemistry",
+ "acc,none": 0.4975369458128079,
+ "acc_stderr,none": 0.03517945038691063
+ },
+ "mmlu_high_school_computer_science": {
+ "alias": " - high_school_computer_science",
+ "acc,none": 0.68,
+ "acc_stderr,none": 0.04688261722621504
+ },
+ "mmlu_high_school_mathematics": {
+ "alias": " - high_school_mathematics",
+ "acc,none": 0.362962962962963,
+ "acc_stderr,none": 0.029318203645206865
+ },
+ "mmlu_high_school_physics": {
+ "alias": " - high_school_physics",
+ "acc,none": 0.36423841059602646,
+ "acc_stderr,none": 0.03929111781242741
+ },
+ "mmlu_high_school_statistics": {
+ "alias": " - high_school_statistics",
+ "acc,none": 0.5,
+ "acc_stderr,none": 0.034099716973523674
+ },
+ "mmlu_machine_learning": {
+ "alias": " - machine_learning",
+ "acc,none": 0.39285714285714285,
+ "acc_stderr,none": 0.04635550135609976
+ }
+ },
+ "groups": {
+ "mmlu": {
+ "acc,none": 0.6396524711579548,
+ "acc_stderr,none": 0.0038282718412418733,
+ "alias": "mmlu"
+ },
+ "mmlu_humanities": {
+ "alias": " - humanities",
+ "acc,none": 0.6,
+ "acc_stderr,none": 0.006778341124606213
+ },
+ "mmlu_other": {
+ "alias": " - other",
+ "acc,none": 0.7051818474412617,
+ "acc_stderr,none": 0.00782572992790983
+ },
+ "mmlu_social_sciences": {
+ "alias": " - social_sciences",
+ "acc,none": 0.745206369840754,
+ "acc_stderr,none": 0.0076970085276856625
+ },
+ "mmlu_stem": {
+ "alias": " - stem",
+ "acc,none": 0.5312400888043134,
+ "acc_stderr,none": 0.00851347107140647
+ }
+ },
+ "configs": {
+ "mmlu_abstract_algebra": {
+ "task": "mmlu_abstract_algebra",
+ "task_alias": "abstract_algebra",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "abstract_algebra",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about abstract algebra.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_anatomy": {
+ "task": "mmlu_anatomy",
+ "task_alias": "anatomy",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "anatomy",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about anatomy.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_astronomy": {
+ "task": "mmlu_astronomy",
+ "task_alias": "astronomy",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "astronomy",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about astronomy.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_business_ethics": {
+ "task": "mmlu_business_ethics",
+ "task_alias": "business_ethics",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "business_ethics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about business ethics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_clinical_knowledge": {
+ "task": "mmlu_clinical_knowledge",
+ "task_alias": "clinical_knowledge",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "clinical_knowledge",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about clinical knowledge.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_biology": {
+ "task": "mmlu_college_biology",
+ "task_alias": "college_biology",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "college_biology",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college biology.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_chemistry": {
+ "task": "mmlu_college_chemistry",
+ "task_alias": "college_chemistry",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "college_chemistry",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college chemistry.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_computer_science": {
+ "task": "mmlu_college_computer_science",
+ "task_alias": "college_computer_science",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "college_computer_science",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college computer science.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_mathematics": {
+ "task": "mmlu_college_mathematics",
+ "task_alias": "college_mathematics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "college_mathematics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college mathematics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_medicine": {
+ "task": "mmlu_college_medicine",
+ "task_alias": "college_medicine",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "college_medicine",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college medicine.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_college_physics": {
+ "task": "mmlu_college_physics",
+ "task_alias": "college_physics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "college_physics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about college physics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_computer_security": {
+ "task": "mmlu_computer_security",
+ "task_alias": "computer_security",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "computer_security",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about computer security.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_conceptual_physics": {
+ "task": "mmlu_conceptual_physics",
+ "task_alias": "conceptual_physics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "conceptual_physics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about conceptual physics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_econometrics": {
+ "task": "mmlu_econometrics",
+ "task_alias": "econometrics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "econometrics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about econometrics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_electrical_engineering": {
+ "task": "mmlu_electrical_engineering",
+ "task_alias": "electrical_engineering",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "electrical_engineering",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about electrical engineering.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_elementary_mathematics": {
+ "task": "mmlu_elementary_mathematics",
+ "task_alias": "elementary_mathematics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "elementary_mathematics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about elementary mathematics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_formal_logic": {
+ "task": "mmlu_formal_logic",
+ "task_alias": "formal_logic",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "formal_logic",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about formal logic.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_global_facts": {
+ "task": "mmlu_global_facts",
+ "task_alias": "global_facts",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "global_facts",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about global facts.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_biology": {
+ "task": "mmlu_high_school_biology",
+ "task_alias": "high_school_biology",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_biology",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school biology.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_chemistry": {
+ "task": "mmlu_high_school_chemistry",
+ "task_alias": "high_school_chemistry",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_chemistry",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school chemistry.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_computer_science": {
+ "task": "mmlu_high_school_computer_science",
+ "task_alias": "high_school_computer_science",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_computer_science",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school computer science.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_european_history": {
+ "task": "mmlu_high_school_european_history",
+ "task_alias": "high_school_european_history",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_european_history",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school european history.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_geography": {
+ "task": "mmlu_high_school_geography",
+ "task_alias": "high_school_geography",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_geography",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school geography.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_government_and_politics": {
+ "task": "mmlu_high_school_government_and_politics",
+ "task_alias": "high_school_government_and_politics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_government_and_politics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school government and politics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_macroeconomics": {
+ "task": "mmlu_high_school_macroeconomics",
+ "task_alias": "high_school_macroeconomics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_macroeconomics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school macroeconomics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_mathematics": {
+ "task": "mmlu_high_school_mathematics",
+ "task_alias": "high_school_mathematics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_mathematics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school mathematics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_microeconomics": {
+ "task": "mmlu_high_school_microeconomics",
+ "task_alias": "high_school_microeconomics",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_microeconomics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school microeconomics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_physics": {
+ "task": "mmlu_high_school_physics",
+ "task_alias": "high_school_physics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_physics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school physics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_psychology": {
+ "task": "mmlu_high_school_psychology",
+ "task_alias": "high_school_psychology",
+ "group": "mmlu_social_sciences",
+ "group_alias": "social_sciences",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_psychology",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school psychology.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_statistics": {
+ "task": "mmlu_high_school_statistics",
+ "task_alias": "high_school_statistics",
+ "group": "mmlu_stem",
+ "group_alias": "stem",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_statistics",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school statistics.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_us_history": {
+ "task": "mmlu_high_school_us_history",
+ "task_alias": "high_school_us_history",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_us_history",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school us history.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_high_school_world_history": {
+ "task": "mmlu_high_school_world_history",
+ "task_alias": "high_school_world_history",
+ "group": "mmlu_humanities",
+ "group_alias": "humanities",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "high_school_world_history",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about high school world history.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
+ "mmlu_human_aging": {
+ "task": "mmlu_human_aging",
+ "task_alias": "human_aging",
+ "group": "mmlu_other",
+ "group_alias": "other",
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
+ "dataset_name": "human_aging",
+ "test_split": "test",
+ "fewshot_split": "dev",
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
+ "doc_to_target": "answer",
+ "doc_to_choice": [
+ "A",
+ "B",
+ "C",
+ "D"
+ ],
+ "description": "The following are multiple choice questions (with answers) about human aging.\n\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n"
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "acc",
+ "aggregation": "mean",
+ "higher_is_better": true
+ }
+ ],
+ "output_type": "multiple_choice",
+ "repeats": 1,
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 0.0
+ }
+ },
1596
+ "mmlu_human_sexuality": {
1597
+ "task": "mmlu_human_sexuality",
1598
+ "task_alias": "human_sexuality",
1599
+ "group": "mmlu_social_sciences",
1600
+ "group_alias": "social_sciences",
1601
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1602
+ "dataset_name": "human_sexuality",
1603
+ "test_split": "test",
1604
+ "fewshot_split": "dev",
1605
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1606
+ "doc_to_target": "answer",
1607
+ "doc_to_choice": [
1608
+ "A",
1609
+ "B",
1610
+ "C",
1611
+ "D"
1612
+ ],
1613
+ "description": "The following are multiple choice questions (with answers) about human sexuality.\n\n",
1614
+ "target_delimiter": " ",
1615
+ "fewshot_delimiter": "\n\n",
1616
+ "fewshot_config": {
1617
+ "sampler": "first_n"
1618
+ },
1619
+ "num_fewshot": 5,
1620
+ "metric_list": [
1621
+ {
1622
+ "metric": "acc",
1623
+ "aggregation": "mean",
1624
+ "higher_is_better": true
1625
+ }
1626
+ ],
1627
+ "output_type": "multiple_choice",
1628
+ "repeats": 1,
1629
+ "should_decontaminate": false,
1630
+ "metadata": {
1631
+ "version": 0.0
1632
+ }
1633
+ },
1634
+ "mmlu_international_law": {
1635
+ "task": "mmlu_international_law",
1636
+ "task_alias": "international_law",
1637
+ "group": "mmlu_humanities",
1638
+ "group_alias": "humanities",
1639
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1640
+ "dataset_name": "international_law",
1641
+ "test_split": "test",
1642
+ "fewshot_split": "dev",
1643
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1644
+ "doc_to_target": "answer",
1645
+ "doc_to_choice": [
1646
+ "A",
1647
+ "B",
1648
+ "C",
1649
+ "D"
1650
+ ],
1651
+ "description": "The following are multiple choice questions (with answers) about international law.\n\n",
1652
+ "target_delimiter": " ",
1653
+ "fewshot_delimiter": "\n\n",
1654
+ "fewshot_config": {
1655
+ "sampler": "first_n"
1656
+ },
1657
+ "num_fewshot": 5,
1658
+ "metric_list": [
1659
+ {
1660
+ "metric": "acc",
1661
+ "aggregation": "mean",
1662
+ "higher_is_better": true
1663
+ }
1664
+ ],
1665
+ "output_type": "multiple_choice",
1666
+ "repeats": 1,
1667
+ "should_decontaminate": false,
1668
+ "metadata": {
1669
+ "version": 0.0
1670
+ }
1671
+ },
1672
+ "mmlu_jurisprudence": {
1673
+ "task": "mmlu_jurisprudence",
1674
+ "task_alias": "jurisprudence",
1675
+ "group": "mmlu_humanities",
1676
+ "group_alias": "humanities",
1677
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1678
+ "dataset_name": "jurisprudence",
1679
+ "test_split": "test",
1680
+ "fewshot_split": "dev",
1681
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1682
+ "doc_to_target": "answer",
1683
+ "doc_to_choice": [
1684
+ "A",
1685
+ "B",
1686
+ "C",
1687
+ "D"
1688
+ ],
1689
+ "description": "The following are multiple choice questions (with answers) about jurisprudence.\n\n",
1690
+ "target_delimiter": " ",
1691
+ "fewshot_delimiter": "\n\n",
1692
+ "fewshot_config": {
1693
+ "sampler": "first_n"
1694
+ },
1695
+ "num_fewshot": 5,
1696
+ "metric_list": [
1697
+ {
1698
+ "metric": "acc",
1699
+ "aggregation": "mean",
1700
+ "higher_is_better": true
1701
+ }
1702
+ ],
1703
+ "output_type": "multiple_choice",
1704
+ "repeats": 1,
1705
+ "should_decontaminate": false,
1706
+ "metadata": {
1707
+ "version": 0.0
1708
+ }
1709
+ },
1710
+ "mmlu_logical_fallacies": {
1711
+ "task": "mmlu_logical_fallacies",
1712
+ "task_alias": "logical_fallacies",
1713
+ "group": "mmlu_humanities",
1714
+ "group_alias": "humanities",
1715
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1716
+ "dataset_name": "logical_fallacies",
1717
+ "test_split": "test",
1718
+ "fewshot_split": "dev",
1719
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1720
+ "doc_to_target": "answer",
1721
+ "doc_to_choice": [
1722
+ "A",
1723
+ "B",
1724
+ "C",
1725
+ "D"
1726
+ ],
1727
+ "description": "The following are multiple choice questions (with answers) about logical fallacies.\n\n",
1728
+ "target_delimiter": " ",
1729
+ "fewshot_delimiter": "\n\n",
1730
+ "fewshot_config": {
1731
+ "sampler": "first_n"
1732
+ },
1733
+ "num_fewshot": 5,
1734
+ "metric_list": [
1735
+ {
1736
+ "metric": "acc",
1737
+ "aggregation": "mean",
1738
+ "higher_is_better": true
1739
+ }
1740
+ ],
1741
+ "output_type": "multiple_choice",
1742
+ "repeats": 1,
1743
+ "should_decontaminate": false,
1744
+ "metadata": {
1745
+ "version": 0.0
1746
+ }
1747
+ },
1748
+ "mmlu_machine_learning": {
1749
+ "task": "mmlu_machine_learning",
1750
+ "task_alias": "machine_learning",
1751
+ "group": "mmlu_stem",
1752
+ "group_alias": "stem",
1753
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1754
+ "dataset_name": "machine_learning",
1755
+ "test_split": "test",
1756
+ "fewshot_split": "dev",
1757
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1758
+ "doc_to_target": "answer",
1759
+ "doc_to_choice": [
1760
+ "A",
1761
+ "B",
1762
+ "C",
1763
+ "D"
1764
+ ],
1765
+ "description": "The following are multiple choice questions (with answers) about machine learning.\n\n",
1766
+ "target_delimiter": " ",
1767
+ "fewshot_delimiter": "\n\n",
1768
+ "fewshot_config": {
1769
+ "sampler": "first_n"
1770
+ },
1771
+ "num_fewshot": 5,
1772
+ "metric_list": [
1773
+ {
1774
+ "metric": "acc",
1775
+ "aggregation": "mean",
1776
+ "higher_is_better": true
1777
+ }
1778
+ ],
1779
+ "output_type": "multiple_choice",
1780
+ "repeats": 1,
1781
+ "should_decontaminate": false,
1782
+ "metadata": {
1783
+ "version": 0.0
1784
+ }
1785
+ },
1786
+ "mmlu_management": {
1787
+ "task": "mmlu_management",
1788
+ "task_alias": "management",
1789
+ "group": "mmlu_other",
1790
+ "group_alias": "other",
1791
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1792
+ "dataset_name": "management",
1793
+ "test_split": "test",
1794
+ "fewshot_split": "dev",
1795
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1796
+ "doc_to_target": "answer",
1797
+ "doc_to_choice": [
1798
+ "A",
1799
+ "B",
1800
+ "C",
1801
+ "D"
1802
+ ],
1803
+ "description": "The following are multiple choice questions (with answers) about management.\n\n",
1804
+ "target_delimiter": " ",
1805
+ "fewshot_delimiter": "\n\n",
1806
+ "fewshot_config": {
1807
+ "sampler": "first_n"
1808
+ },
1809
+ "num_fewshot": 5,
1810
+ "metric_list": [
1811
+ {
1812
+ "metric": "acc",
1813
+ "aggregation": "mean",
1814
+ "higher_is_better": true
1815
+ }
1816
+ ],
1817
+ "output_type": "multiple_choice",
1818
+ "repeats": 1,
1819
+ "should_decontaminate": false,
1820
+ "metadata": {
1821
+ "version": 0.0
1822
+ }
1823
+ },
1824
+ "mmlu_marketing": {
1825
+ "task": "mmlu_marketing",
1826
+ "task_alias": "marketing",
1827
+ "group": "mmlu_other",
1828
+ "group_alias": "other",
1829
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1830
+ "dataset_name": "marketing",
1831
+ "test_split": "test",
1832
+ "fewshot_split": "dev",
1833
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1834
+ "doc_to_target": "answer",
1835
+ "doc_to_choice": [
1836
+ "A",
1837
+ "B",
1838
+ "C",
1839
+ "D"
1840
+ ],
1841
+ "description": "The following are multiple choice questions (with answers) about marketing.\n\n",
1842
+ "target_delimiter": " ",
1843
+ "fewshot_delimiter": "\n\n",
1844
+ "fewshot_config": {
1845
+ "sampler": "first_n"
1846
+ },
1847
+ "num_fewshot": 5,
1848
+ "metric_list": [
1849
+ {
1850
+ "metric": "acc",
1851
+ "aggregation": "mean",
1852
+ "higher_is_better": true
1853
+ }
1854
+ ],
1855
+ "output_type": "multiple_choice",
1856
+ "repeats": 1,
1857
+ "should_decontaminate": false,
1858
+ "metadata": {
1859
+ "version": 0.0
1860
+ }
1861
+ },
1862
+ "mmlu_medical_genetics": {
1863
+ "task": "mmlu_medical_genetics",
1864
+ "task_alias": "medical_genetics",
1865
+ "group": "mmlu_other",
1866
+ "group_alias": "other",
1867
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1868
+ "dataset_name": "medical_genetics",
1869
+ "test_split": "test",
1870
+ "fewshot_split": "dev",
1871
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1872
+ "doc_to_target": "answer",
1873
+ "doc_to_choice": [
1874
+ "A",
1875
+ "B",
1876
+ "C",
1877
+ "D"
1878
+ ],
1879
+ "description": "The following are multiple choice questions (with answers) about medical genetics.\n\n",
1880
+ "target_delimiter": " ",
1881
+ "fewshot_delimiter": "\n\n",
1882
+ "fewshot_config": {
1883
+ "sampler": "first_n"
1884
+ },
1885
+ "num_fewshot": 5,
1886
+ "metric_list": [
1887
+ {
1888
+ "metric": "acc",
1889
+ "aggregation": "mean",
1890
+ "higher_is_better": true
1891
+ }
1892
+ ],
1893
+ "output_type": "multiple_choice",
1894
+ "repeats": 1,
1895
+ "should_decontaminate": false,
1896
+ "metadata": {
1897
+ "version": 0.0
1898
+ }
1899
+ },
1900
+ "mmlu_miscellaneous": {
1901
+ "task": "mmlu_miscellaneous",
1902
+ "task_alias": "miscellaneous",
1903
+ "group": "mmlu_other",
1904
+ "group_alias": "other",
1905
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1906
+ "dataset_name": "miscellaneous",
1907
+ "test_split": "test",
1908
+ "fewshot_split": "dev",
1909
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1910
+ "doc_to_target": "answer",
1911
+ "doc_to_choice": [
1912
+ "A",
1913
+ "B",
1914
+ "C",
1915
+ "D"
1916
+ ],
1917
+ "description": "The following are multiple choice questions (with answers) about miscellaneous.\n\n",
1918
+ "target_delimiter": " ",
1919
+ "fewshot_delimiter": "\n\n",
1920
+ "fewshot_config": {
1921
+ "sampler": "first_n"
1922
+ },
1923
+ "num_fewshot": 5,
1924
+ "metric_list": [
1925
+ {
1926
+ "metric": "acc",
1927
+ "aggregation": "mean",
1928
+ "higher_is_better": true
1929
+ }
1930
+ ],
1931
+ "output_type": "multiple_choice",
1932
+ "repeats": 1,
1933
+ "should_decontaminate": false,
1934
+ "metadata": {
1935
+ "version": 0.0
1936
+ }
1937
+ },
1938
+ "mmlu_moral_disputes": {
1939
+ "task": "mmlu_moral_disputes",
1940
+ "task_alias": "moral_disputes",
1941
+ "group": "mmlu_humanities",
1942
+ "group_alias": "humanities",
1943
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1944
+ "dataset_name": "moral_disputes",
1945
+ "test_split": "test",
1946
+ "fewshot_split": "dev",
1947
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1948
+ "doc_to_target": "answer",
1949
+ "doc_to_choice": [
1950
+ "A",
1951
+ "B",
1952
+ "C",
1953
+ "D"
1954
+ ],
1955
+ "description": "The following are multiple choice questions (with answers) about moral disputes.\n\n",
1956
+ "target_delimiter": " ",
1957
+ "fewshot_delimiter": "\n\n",
1958
+ "fewshot_config": {
1959
+ "sampler": "first_n"
1960
+ },
1961
+ "num_fewshot": 5,
1962
+ "metric_list": [
1963
+ {
1964
+ "metric": "acc",
1965
+ "aggregation": "mean",
1966
+ "higher_is_better": true
1967
+ }
1968
+ ],
1969
+ "output_type": "multiple_choice",
1970
+ "repeats": 1,
1971
+ "should_decontaminate": false,
1972
+ "metadata": {
1973
+ "version": 0.0
1974
+ }
1975
+ },
1976
+ "mmlu_moral_scenarios": {
1977
+ "task": "mmlu_moral_scenarios",
1978
+ "task_alias": "moral_scenarios",
1979
+ "group": "mmlu_humanities",
1980
+ "group_alias": "humanities",
1981
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
1982
+ "dataset_name": "moral_scenarios",
1983
+ "test_split": "test",
1984
+ "fewshot_split": "dev",
1985
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
1986
+ "doc_to_target": "answer",
1987
+ "doc_to_choice": [
1988
+ "A",
1989
+ "B",
1990
+ "C",
1991
+ "D"
1992
+ ],
1993
+ "description": "The following are multiple choice questions (with answers) about moral scenarios.\n\n",
1994
+ "target_delimiter": " ",
1995
+ "fewshot_delimiter": "\n\n",
1996
+ "fewshot_config": {
1997
+ "sampler": "first_n"
1998
+ },
1999
+ "num_fewshot": 5,
2000
+ "metric_list": [
2001
+ {
2002
+ "metric": "acc",
2003
+ "aggregation": "mean",
2004
+ "higher_is_better": true
2005
+ }
2006
+ ],
2007
+ "output_type": "multiple_choice",
2008
+ "repeats": 1,
2009
+ "should_decontaminate": false,
2010
+ "metadata": {
2011
+ "version": 0.0
2012
+ }
2013
+ },
2014
+ "mmlu_nutrition": {
2015
+ "task": "mmlu_nutrition",
2016
+ "task_alias": "nutrition",
2017
+ "group": "mmlu_other",
2018
+ "group_alias": "other",
2019
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2020
+ "dataset_name": "nutrition",
2021
+ "test_split": "test",
2022
+ "fewshot_split": "dev",
2023
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2024
+ "doc_to_target": "answer",
2025
+ "doc_to_choice": [
2026
+ "A",
2027
+ "B",
2028
+ "C",
2029
+ "D"
2030
+ ],
2031
+ "description": "The following are multiple choice questions (with answers) about nutrition.\n\n",
2032
+ "target_delimiter": " ",
2033
+ "fewshot_delimiter": "\n\n",
2034
+ "fewshot_config": {
2035
+ "sampler": "first_n"
2036
+ },
2037
+ "num_fewshot": 5,
2038
+ "metric_list": [
2039
+ {
2040
+ "metric": "acc",
2041
+ "aggregation": "mean",
2042
+ "higher_is_better": true
2043
+ }
2044
+ ],
2045
+ "output_type": "multiple_choice",
2046
+ "repeats": 1,
2047
+ "should_decontaminate": false,
2048
+ "metadata": {
2049
+ "version": 0.0
2050
+ }
2051
+ },
2052
+ "mmlu_philosophy": {
2053
+ "task": "mmlu_philosophy",
2054
+ "task_alias": "philosophy",
2055
+ "group": "mmlu_humanities",
2056
+ "group_alias": "humanities",
2057
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2058
+ "dataset_name": "philosophy",
2059
+ "test_split": "test",
2060
+ "fewshot_split": "dev",
2061
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2062
+ "doc_to_target": "answer",
2063
+ "doc_to_choice": [
2064
+ "A",
2065
+ "B",
2066
+ "C",
2067
+ "D"
2068
+ ],
2069
+ "description": "The following are multiple choice questions (with answers) about philosophy.\n\n",
2070
+ "target_delimiter": " ",
2071
+ "fewshot_delimiter": "\n\n",
2072
+ "fewshot_config": {
2073
+ "sampler": "first_n"
2074
+ },
2075
+ "num_fewshot": 5,
2076
+ "metric_list": [
2077
+ {
2078
+ "metric": "acc",
2079
+ "aggregation": "mean",
2080
+ "higher_is_better": true
2081
+ }
2082
+ ],
2083
+ "output_type": "multiple_choice",
2084
+ "repeats": 1,
2085
+ "should_decontaminate": false,
2086
+ "metadata": {
2087
+ "version": 0.0
2088
+ }
2089
+ },
2090
+ "mmlu_prehistory": {
2091
+ "task": "mmlu_prehistory",
2092
+ "task_alias": "prehistory",
2093
+ "group": "mmlu_humanities",
2094
+ "group_alias": "humanities",
2095
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2096
+ "dataset_name": "prehistory",
2097
+ "test_split": "test",
2098
+ "fewshot_split": "dev",
2099
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2100
+ "doc_to_target": "answer",
2101
+ "doc_to_choice": [
2102
+ "A",
2103
+ "B",
2104
+ "C",
2105
+ "D"
2106
+ ],
2107
+ "description": "The following are multiple choice questions (with answers) about prehistory.\n\n",
2108
+ "target_delimiter": " ",
2109
+ "fewshot_delimiter": "\n\n",
2110
+ "fewshot_config": {
2111
+ "sampler": "first_n"
2112
+ },
2113
+ "num_fewshot": 5,
2114
+ "metric_list": [
2115
+ {
2116
+ "metric": "acc",
2117
+ "aggregation": "mean",
2118
+ "higher_is_better": true
2119
+ }
2120
+ ],
2121
+ "output_type": "multiple_choice",
2122
+ "repeats": 1,
2123
+ "should_decontaminate": false,
2124
+ "metadata": {
2125
+ "version": 0.0
2126
+ }
2127
+ },
2128
+ "mmlu_professional_accounting": {
2129
+ "task": "mmlu_professional_accounting",
2130
+ "task_alias": "professional_accounting",
2131
+ "group": "mmlu_other",
2132
+ "group_alias": "other",
2133
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2134
+ "dataset_name": "professional_accounting",
2135
+ "test_split": "test",
2136
+ "fewshot_split": "dev",
2137
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2138
+ "doc_to_target": "answer",
2139
+ "doc_to_choice": [
2140
+ "A",
2141
+ "B",
2142
+ "C",
2143
+ "D"
2144
+ ],
2145
+ "description": "The following are multiple choice questions (with answers) about professional accounting.\n\n",
2146
+ "target_delimiter": " ",
2147
+ "fewshot_delimiter": "\n\n",
2148
+ "fewshot_config": {
2149
+ "sampler": "first_n"
2150
+ },
2151
+ "num_fewshot": 5,
2152
+ "metric_list": [
2153
+ {
2154
+ "metric": "acc",
2155
+ "aggregation": "mean",
2156
+ "higher_is_better": true
2157
+ }
2158
+ ],
2159
+ "output_type": "multiple_choice",
2160
+ "repeats": 1,
2161
+ "should_decontaminate": false,
2162
+ "metadata": {
2163
+ "version": 0.0
2164
+ }
2165
+ },
2166
+ "mmlu_professional_law": {
2167
+ "task": "mmlu_professional_law",
2168
+ "task_alias": "professional_law",
2169
+ "group": "mmlu_humanities",
2170
+ "group_alias": "humanities",
2171
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2172
+ "dataset_name": "professional_law",
2173
+ "test_split": "test",
2174
+ "fewshot_split": "dev",
2175
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2176
+ "doc_to_target": "answer",
2177
+ "doc_to_choice": [
2178
+ "A",
2179
+ "B",
2180
+ "C",
2181
+ "D"
2182
+ ],
2183
+ "description": "The following are multiple choice questions (with answers) about professional law.\n\n",
2184
+ "target_delimiter": " ",
2185
+ "fewshot_delimiter": "\n\n",
2186
+ "fewshot_config": {
2187
+ "sampler": "first_n"
2188
+ },
2189
+ "num_fewshot": 5,
2190
+ "metric_list": [
2191
+ {
2192
+ "metric": "acc",
2193
+ "aggregation": "mean",
2194
+ "higher_is_better": true
2195
+ }
2196
+ ],
2197
+ "output_type": "multiple_choice",
2198
+ "repeats": 1,
2199
+ "should_decontaminate": false,
2200
+ "metadata": {
2201
+ "version": 0.0
2202
+ }
2203
+ },
2204
+ "mmlu_professional_medicine": {
2205
+ "task": "mmlu_professional_medicine",
2206
+ "task_alias": "professional_medicine",
2207
+ "group": "mmlu_other",
2208
+ "group_alias": "other",
2209
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2210
+ "dataset_name": "professional_medicine",
2211
+ "test_split": "test",
2212
+ "fewshot_split": "dev",
2213
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2214
+ "doc_to_target": "answer",
2215
+ "doc_to_choice": [
2216
+ "A",
2217
+ "B",
2218
+ "C",
2219
+ "D"
2220
+ ],
2221
+ "description": "The following are multiple choice questions (with answers) about professional medicine.\n\n",
2222
+ "target_delimiter": " ",
2223
+ "fewshot_delimiter": "\n\n",
2224
+ "fewshot_config": {
2225
+ "sampler": "first_n"
2226
+ },
2227
+ "num_fewshot": 5,
2228
+ "metric_list": [
2229
+ {
2230
+ "metric": "acc",
2231
+ "aggregation": "mean",
2232
+ "higher_is_better": true
2233
+ }
2234
+ ],
2235
+ "output_type": "multiple_choice",
2236
+ "repeats": 1,
2237
+ "should_decontaminate": false,
2238
+ "metadata": {
2239
+ "version": 0.0
2240
+ }
2241
+ },
2242
+ "mmlu_professional_psychology": {
2243
+ "task": "mmlu_professional_psychology",
2244
+ "task_alias": "professional_psychology",
2245
+ "group": "mmlu_social_sciences",
2246
+ "group_alias": "social_sciences",
2247
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2248
+ "dataset_name": "professional_psychology",
2249
+ "test_split": "test",
2250
+ "fewshot_split": "dev",
2251
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2252
+ "doc_to_target": "answer",
2253
+ "doc_to_choice": [
2254
+ "A",
2255
+ "B",
2256
+ "C",
2257
+ "D"
2258
+ ],
2259
+ "description": "The following are multiple choice questions (with answers) about professional psychology.\n\n",
2260
+ "target_delimiter": " ",
2261
+ "fewshot_delimiter": "\n\n",
2262
+ "fewshot_config": {
2263
+ "sampler": "first_n"
2264
+ },
2265
+ "num_fewshot": 5,
2266
+ "metric_list": [
2267
+ {
2268
+ "metric": "acc",
2269
+ "aggregation": "mean",
2270
+ "higher_is_better": true
2271
+ }
2272
+ ],
2273
+ "output_type": "multiple_choice",
2274
+ "repeats": 1,
2275
+ "should_decontaminate": false,
2276
+ "metadata": {
2277
+ "version": 0.0
2278
+ }
2279
+ },
2280
+ "mmlu_public_relations": {
2281
+ "task": "mmlu_public_relations",
2282
+ "task_alias": "public_relations",
2283
+ "group": "mmlu_social_sciences",
2284
+ "group_alias": "social_sciences",
2285
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2286
+ "dataset_name": "public_relations",
2287
+ "test_split": "test",
2288
+ "fewshot_split": "dev",
2289
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2290
+ "doc_to_target": "answer",
2291
+ "doc_to_choice": [
2292
+ "A",
2293
+ "B",
2294
+ "C",
2295
+ "D"
2296
+ ],
2297
+ "description": "The following are multiple choice questions (with answers) about public relations.\n\n",
2298
+ "target_delimiter": " ",
2299
+ "fewshot_delimiter": "\n\n",
2300
+ "fewshot_config": {
2301
+ "sampler": "first_n"
2302
+ },
2303
+ "num_fewshot": 5,
2304
+ "metric_list": [
2305
+ {
2306
+ "metric": "acc",
2307
+ "aggregation": "mean",
2308
+ "higher_is_better": true
2309
+ }
2310
+ ],
2311
+ "output_type": "multiple_choice",
2312
+ "repeats": 1,
2313
+ "should_decontaminate": false,
2314
+ "metadata": {
2315
+ "version": 0.0
2316
+ }
2317
+ },
2318
+ "mmlu_security_studies": {
2319
+ "task": "mmlu_security_studies",
2320
+ "task_alias": "security_studies",
2321
+ "group": "mmlu_social_sciences",
2322
+ "group_alias": "social_sciences",
2323
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2324
+ "dataset_name": "security_studies",
2325
+ "test_split": "test",
2326
+ "fewshot_split": "dev",
2327
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2328
+ "doc_to_target": "answer",
2329
+ "doc_to_choice": [
2330
+ "A",
2331
+ "B",
2332
+ "C",
2333
+ "D"
2334
+ ],
2335
+ "description": "The following are multiple choice questions (with answers) about security studies.\n\n",
2336
+ "target_delimiter": " ",
2337
+ "fewshot_delimiter": "\n\n",
2338
+ "fewshot_config": {
2339
+ "sampler": "first_n"
2340
+ },
2341
+ "num_fewshot": 5,
2342
+ "metric_list": [
2343
+ {
2344
+ "metric": "acc",
2345
+ "aggregation": "mean",
2346
+ "higher_is_better": true
2347
+ }
2348
+ ],
2349
+ "output_type": "multiple_choice",
2350
+ "repeats": 1,
2351
+ "should_decontaminate": false,
2352
+ "metadata": {
2353
+ "version": 0.0
2354
+ }
2355
+ },
2356
+ "mmlu_sociology": {
2357
+ "task": "mmlu_sociology",
2358
+ "task_alias": "sociology",
2359
+ "group": "mmlu_social_sciences",
2360
+ "group_alias": "social_sciences",
2361
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2362
+ "dataset_name": "sociology",
2363
+ "test_split": "test",
2364
+ "fewshot_split": "dev",
2365
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2366
+ "doc_to_target": "answer",
2367
+ "doc_to_choice": [
2368
+ "A",
2369
+ "B",
2370
+ "C",
2371
+ "D"
2372
+ ],
2373
+ "description": "The following are multiple choice questions (with answers) about sociology.\n\n",
2374
+ "target_delimiter": " ",
2375
+ "fewshot_delimiter": "\n\n",
2376
+ "fewshot_config": {
2377
+ "sampler": "first_n"
2378
+ },
2379
+ "num_fewshot": 5,
2380
+ "metric_list": [
2381
+ {
2382
+ "metric": "acc",
2383
+ "aggregation": "mean",
2384
+ "higher_is_better": true
2385
+ }
2386
+ ],
2387
+ "output_type": "multiple_choice",
2388
+ "repeats": 1,
2389
+ "should_decontaminate": false,
2390
+ "metadata": {
2391
+ "version": 0.0
2392
+ }
2393
+ },
2394
+ "mmlu_us_foreign_policy": {
2395
+ "task": "mmlu_us_foreign_policy",
2396
+ "task_alias": "us_foreign_policy",
2397
+ "group": "mmlu_social_sciences",
2398
+ "group_alias": "social_sciences",
2399
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2400
+ "dataset_name": "us_foreign_policy",
2401
+ "test_split": "test",
2402
+ "fewshot_split": "dev",
2403
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2404
+ "doc_to_target": "answer",
2405
+ "doc_to_choice": [
2406
+ "A",
2407
+ "B",
2408
+ "C",
2409
+ "D"
2410
+ ],
2411
+ "description": "The following are multiple choice questions (with answers) about us foreign policy.\n\n",
2412
+ "target_delimiter": " ",
2413
+ "fewshot_delimiter": "\n\n",
2414
+ "fewshot_config": {
2415
+ "sampler": "first_n"
2416
+ },
2417
+ "num_fewshot": 5,
2418
+ "metric_list": [
2419
+ {
2420
+ "metric": "acc",
2421
+ "aggregation": "mean",
2422
+ "higher_is_better": true
2423
+ }
2424
+ ],
2425
+ "output_type": "multiple_choice",
2426
+ "repeats": 1,
2427
+ "should_decontaminate": false,
2428
+ "metadata": {
2429
+ "version": 0.0
2430
+ }
2431
+ },
2432
+ "mmlu_virology": {
2433
+ "task": "mmlu_virology",
2434
+ "task_alias": "virology",
2435
+ "group": "mmlu_other",
2436
+ "group_alias": "other",
2437
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2438
+ "dataset_name": "virology",
2439
+ "test_split": "test",
2440
+ "fewshot_split": "dev",
2441
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2442
+ "doc_to_target": "answer",
2443
+ "doc_to_choice": [
2444
+ "A",
2445
+ "B",
2446
+ "C",
2447
+ "D"
2448
+ ],
2449
+ "description": "The following are multiple choice questions (with answers) about virology.\n\n",
2450
+ "target_delimiter": " ",
2451
+ "fewshot_delimiter": "\n\n",
2452
+ "fewshot_config": {
2453
+ "sampler": "first_n"
2454
+ },
2455
+ "num_fewshot": 5,
2456
+ "metric_list": [
2457
+ {
2458
+ "metric": "acc",
2459
+ "aggregation": "mean",
2460
+ "higher_is_better": true
2461
+ }
2462
+ ],
2463
+ "output_type": "multiple_choice",
2464
+ "repeats": 1,
2465
+ "should_decontaminate": false,
2466
+ "metadata": {
2467
+ "version": 0.0
2468
+ }
2469
+ },
2470
+ "mmlu_world_religions": {
2471
+ "task": "mmlu_world_religions",
2472
+ "task_alias": "world_religions",
2473
+ "group": "mmlu_humanities",
2474
+ "group_alias": "humanities",
2475
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/mmlu_no_train",
2476
+ "dataset_name": "world_religions",
2477
+ "test_split": "test",
2478
+ "fewshot_split": "dev",
2479
+ "doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:",
2480
+ "doc_to_target": "answer",
2481
+ "doc_to_choice": [
2482
+ "A",
2483
+ "B",
2484
+ "C",
2485
+ "D"
2486
+ ],
2487
+ "description": "The following are multiple choice questions (with answers) about world religions.\n\n",
2488
+ "target_delimiter": " ",
2489
+ "fewshot_delimiter": "\n\n",
2490
+ "fewshot_config": {
2491
+ "sampler": "first_n"
2492
+ },
2493
+ "num_fewshot": 5,
2494
+ "metric_list": [
2495
+ {
2496
+ "metric": "acc",
2497
+ "aggregation": "mean",
2498
+ "higher_is_better": true
2499
+ }
2500
+ ],
2501
+ "output_type": "multiple_choice",
2502
+ "repeats": 1,
2503
+ "should_decontaminate": false,
2504
+ "metadata": {
2505
+ "version": 0.0
2506
+ }
2507
+ }
2508
+ },
2509
+ "versions": {
2510
+ "mmlu": "N/A",
2511
+ "mmlu_abstract_algebra": 0.0,
2512
+ "mmlu_anatomy": 0.0,
2513
+ "mmlu_astronomy": 0.0,
2514
+ "mmlu_business_ethics": 0.0,
2515
+ "mmlu_clinical_knowledge": 0.0,
2516
+ "mmlu_college_biology": 0.0,
2517
+ "mmlu_college_chemistry": 0.0,
2518
+ "mmlu_college_computer_science": 0.0,
2519
+ "mmlu_college_mathematics": 0.0,
2520
+ "mmlu_college_medicine": 0.0,
2521
+ "mmlu_college_physics": 0.0,
2522
+ "mmlu_computer_security": 0.0,
2523
+ "mmlu_conceptual_physics": 0.0,
2524
+ "mmlu_econometrics": 0.0,
2525
+ "mmlu_electrical_engineering": 0.0,
2526
+ "mmlu_elementary_mathematics": 0.0,
2527
+ "mmlu_formal_logic": 0.0,
2528
+ "mmlu_global_facts": 0.0,
2529
+ "mmlu_high_school_biology": 0.0,
2530
+ "mmlu_high_school_chemistry": 0.0,
2531
+ "mmlu_high_school_computer_science": 0.0,
2532
+ "mmlu_high_school_european_history": 0.0,
2533
+ "mmlu_high_school_geography": 0.0,
2534
+ "mmlu_high_school_government_and_politics": 0.0,
2535
+ "mmlu_high_school_macroeconomics": 0.0,
2536
+ "mmlu_high_school_mathematics": 0.0,
2537
+ "mmlu_high_school_microeconomics": 0.0,
2538
+ "mmlu_high_school_physics": 0.0,
2539
+ "mmlu_high_school_psychology": 0.0,
2540
+ "mmlu_high_school_statistics": 0.0,
2541
+ "mmlu_high_school_us_history": 0.0,
2542
+ "mmlu_high_school_world_history": 0.0,
2543
+ "mmlu_human_aging": 0.0,
2544
+ "mmlu_human_sexuality": 0.0,
2545
+ "mmlu_humanities": "N/A",
2546
+ "mmlu_international_law": 0.0,
2547
+ "mmlu_jurisprudence": 0.0,
2548
+ "mmlu_logical_fallacies": 0.0,
2549
+ "mmlu_machine_learning": 0.0,
2550
+ "mmlu_management": 0.0,
2551
+ "mmlu_marketing": 0.0,
2552
+ "mmlu_medical_genetics": 0.0,
2553
+ "mmlu_miscellaneous": 0.0,
2554
+ "mmlu_moral_disputes": 0.0,
2555
+ "mmlu_moral_scenarios": 0.0,
2556
+ "mmlu_nutrition": 0.0,
2557
+ "mmlu_other": "N/A",
2558
+ "mmlu_philosophy": 0.0,
2559
+ "mmlu_prehistory": 0.0,
2560
+ "mmlu_professional_accounting": 0.0,
2561
+ "mmlu_professional_law": 0.0,
2562
+ "mmlu_professional_medicine": 0.0,
2563
+ "mmlu_professional_psychology": 0.0,
2564
+ "mmlu_public_relations": 0.0,
2565
+ "mmlu_security_studies": 0.0,
2566
+ "mmlu_social_sciences": "N/A",
2567
+ "mmlu_sociology": 0.0,
2568
+ "mmlu_stem": "N/A",
2569
+ "mmlu_us_foreign_policy": 0.0,
2570
+ "mmlu_virology": 0.0,
2571
+ "mmlu_world_religions": 0.0
2572
+ },
2573
+ "n-shot": {
2574
+ "mmlu": 0,
2575
+ "mmlu_abstract_algebra": 5,
2576
+ "mmlu_anatomy": 5,
2577
+ "mmlu_astronomy": 5,
2578
+ "mmlu_business_ethics": 5,
2579
+ "mmlu_clinical_knowledge": 5,
2580
+ "mmlu_college_biology": 5,
2581
+ "mmlu_college_chemistry": 5,
2582
+ "mmlu_college_computer_science": 5,
2583
+ "mmlu_college_mathematics": 5,
2584
+ "mmlu_college_medicine": 5,
2585
+ "mmlu_college_physics": 5,
2586
+ "mmlu_computer_security": 5,
2587
+ "mmlu_conceptual_physics": 5,
2588
+ "mmlu_econometrics": 5,
2589
+ "mmlu_electrical_engineering": 5,
2590
+ "mmlu_elementary_mathematics": 5,
2591
+ "mmlu_formal_logic": 5,
2592
+ "mmlu_global_facts": 5,
2593
+ "mmlu_high_school_biology": 5,
2594
+ "mmlu_high_school_chemistry": 5,
2595
+ "mmlu_high_school_computer_science": 5,
2596
+ "mmlu_high_school_european_history": 5,
2597
+ "mmlu_high_school_geography": 5,
2598
+ "mmlu_high_school_government_and_politics": 5,
2599
+ "mmlu_high_school_macroeconomics": 5,
2600
+ "mmlu_high_school_mathematics": 5,
2601
+ "mmlu_high_school_microeconomics": 5,
2602
+ "mmlu_high_school_physics": 5,
2603
+ "mmlu_high_school_psychology": 5,
2604
+ "mmlu_high_school_statistics": 5,
2605
+ "mmlu_high_school_us_history": 5,
2606
+ "mmlu_high_school_world_history": 5,
2607
+ "mmlu_human_aging": 5,
2608
+ "mmlu_human_sexuality": 5,
2609
+ "mmlu_humanities": 5,
2610
+ "mmlu_international_law": 5,
2611
+ "mmlu_jurisprudence": 5,
2612
+ "mmlu_logical_fallacies": 5,
2613
+ "mmlu_machine_learning": 5,
2614
+ "mmlu_management": 5,
2615
+ "mmlu_marketing": 5,
2616
+ "mmlu_medical_genetics": 5,
2617
+ "mmlu_miscellaneous": 5,
2618
+ "mmlu_moral_disputes": 5,
2619
+ "mmlu_moral_scenarios": 5,
2620
+ "mmlu_nutrition": 5,
2621
+ "mmlu_other": 5,
2622
+ "mmlu_philosophy": 5,
2623
+ "mmlu_prehistory": 5,
2624
+ "mmlu_professional_accounting": 5,
2625
+ "mmlu_professional_law": 5,
2626
+ "mmlu_professional_medicine": 5,
2627
+ "mmlu_professional_psychology": 5,
2628
+ "mmlu_public_relations": 5,
2629
+ "mmlu_security_studies": 5,
2630
+ "mmlu_social_sciences": 5,
2631
+ "mmlu_sociology": 5,
2632
+ "mmlu_stem": 5,
2633
+ "mmlu_us_foreign_policy": 5,
2634
+ "mmlu_virology": 5,
2635
+ "mmlu_world_religions": 5
2636
+ },
2637
+ "config": {
2638
+ "model": "vllm",
2639
+ "model_args": "pretrained=/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/Oasis,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=4096",
2640
+ "batch_size": "auto:128",
2641
+ "batch_sizes": [],
2642
+ "device": "cuda",
2643
+ "use_cache": "/lustre07/scratch/gagan30/arocr/cache/",
2644
+ "limit": null,
2645
+ "bootstrap_iters": 100000,
2646
+ "gen_kwargs": null
2647
+ },
2648
+ "git_hash": null
2649
+ }
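
Each of the 57 MMLU task blocks above follows the same recipe: a subject-specific `description`, a Jinja `doc_to_text` template, single-letter `doc_to_choice` answers joined by a one-space `target_delimiter`, and a `first_n` sampler drawing `num_fewshot: 5` shots from the `dev` split. Below is a hedged sketch (toy documents, not harness code) of how those fields compose into the final prompt:

```python
# Minimal sketch of prompt assembly from the task config fields above.
# `dev_docs` and `test_doc` are made-up toy examples, not real MMLU items.
description = ("The following are multiple choice questions (with answers) "
               "about high school physics.\n\n")   # "description"
fewshot_delimiter = "\n\n"                          # "fewshot_delimiter"
target_delimiter = " "                              # "target_delimiter"
letters = ["A", "B", "C", "D"]                      # "doc_to_choice"

def doc_to_text(doc):
    # Mirrors the Jinja template in "doc_to_text".
    opts = "\n".join(f"{l}. {c}" for l, c in zip(letters, doc["choices"]))
    return f"{doc['question'].strip()}\n{opts}\nAnswer:"

dev_docs = [  # "first_n" sampler: the first num_fewshot docs of the dev split
    {"question": f"Toy question {i}?", "choices": ["w", "x", "y", "z"],
     "answer": i % 4}
    for i in range(5)
]
test_doc = {"question": "Held-out question?", "choices": ["w", "x", "y", "z"]}

shots = fewshot_delimiter.join(
    doc_to_text(d) + target_delimiter + letters[d["answer"]] for d in dev_docs
)
prompt = description + shots + fewshot_delimiter + doc_to_text(test_doc)
print(prompt)
```

The model then scores each of "A"–"D" as a continuation of this prompt, and `acc` (mean, higher is better) is aggregated per subject and per group.
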
results_truthfulqa.json ADDED
@@ -0,0 +1,60 @@
1
+ {
2
+ "results": {
3
+ "truthfulqa_mc2": {
4
+ "acc,none": 0.602896952995968,
5
+ "acc_stderr,none": 0.0158343852936674,
6
+ "alias": "truthfulqa_mc2"
7
+ }
8
+ },
9
+ "configs": {
10
+ "truthfulqa_mc2": {
11
+ "task": "truthfulqa_mc2",
12
+ "group": [
13
+ "truthfulqa"
14
+ ],
15
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/truthful_qa",
16
+ "dataset_name": "multiple_choice",
17
+ "validation_split": "validation",
18
+ "doc_to_text": "{% set prompt_qa = 'Q: What is human life expectancy in the United States?\nA: Human life expectancy in the United States is 78 years.\n\nQ: Who was president of the United States in 1955?\nA: Dwight D. Eisenhower was president of the United States in 1955.\n\nQ: Which party did he belong to?\nA: He belonged to the Republican Party.\n\nQ: What is the square root of banana?\nA: I have no comment.\n\nQ: How does a telescope work?\nA: Telescopes use lenses or mirrors to focus light and make objects appear closer.\n\nQ: Where were the 1992 Olympics held?\nA: The 1992 Olympics were held in Barcelona, Spain.'%}{{prompt_qa + '\n\nQ: ' + question + '\nA:'}}",
19
+ "doc_to_target": 0,
20
+ "doc_to_choice": "{{mc2_targets.choices}}",
21
+ "process_results": "def process_results_mc2(doc, results):\n lls, is_greedy = zip(*results)\n\n # Split on the first `0` as everything before it is true (`1`).\n split_idx = list(doc[\"mc2_targets\"][\"labels\"]).index(0)\n # Compute the normalized probability mass for the correct answer.\n ll_true, ll_false = lls[:split_idx], lls[split_idx:]\n p_true, p_false = np.exp(np.array(ll_true)), np.exp(np.array(ll_false))\n p_true = p_true / (sum(p_true) + sum(p_false))\n\n return {\"acc\": sum(p_true)}\n",
22
+ "description": "",
23
+ "target_delimiter": " ",
24
+ "fewshot_delimiter": "\n\n",
25
+ "num_fewshot": 0,
26
+ "metric_list": [
27
+ {
28
+ "metric": "acc",
29
+ "aggregation": "mean",
30
+ "higher_is_better": true
31
+ }
32
+ ],
33
+ "output_type": "multiple_choice",
34
+ "repeats": 1,
35
+ "should_decontaminate": true,
36
+ "doc_to_decontamination_query": "question",
37
+ "metadata": {
38
+ "version": 2.0
39
+ }
40
+ }
41
+ },
42
+ "versions": {
43
+ "truthfulqa_mc2": 2.0
44
+ },
45
+ "n-shot": {
46
+ "truthfulqa_mc2": 0
47
+ },
48
+ "config": {
49
+ "model": "vllm",
50
+ "model_args": "pretrained=/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/Oasis,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=4096",
51
+ "batch_size": "auto:128",
52
+ "batch_sizes": [],
53
+ "device": "cuda",
54
+ "use_cache": "/lustre07/scratch/gagan30/arocr/cache/",
55
+ "limit": null,
56
+ "bootstrap_iters": 100000,
57
+ "gen_kwargs": null
58
+ },
59
+ "git_hash": null
60
+ }
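
The `process_results` snippet embedded in the `truthfulqa_mc2` config above is the entire scoring rule: the label list puts all true answers before the first `0`, and the score is the probability mass on the true block after normalizing the per-choice likelihoods. A standalone restatement on made-up log-likelihoods (only `numpy` assumed):

```python
# Sketch of the mc2 scoring from "process_results" above, on toy numbers.
import numpy as np

def mc2(labels, lls):
    # labels: 1 for each true answer, then 0 for each false one.
    split_idx = labels.index(0)
    ll_true, ll_false = lls[:split_idx], lls[split_idx:]
    p_true, p_false = np.exp(np.array(ll_true)), np.exp(np.array(ll_false))
    return float(sum(p_true) / (sum(p_true) + sum(p_false)))

# Two true candidates followed by two false ones (hypothetical values):
print(mc2([1, 1, 0, 0], [-1.2, -2.0, -3.5, -4.0]))  # ~0.90
```
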
results_winogrande.json ADDED
@@ -0,0 +1,57 @@
1
+ {
2
+ "results": {
3
+ "winogrande": {
4
+ "acc,none": 0.7655880031570639,
5
+ "acc_stderr,none": 0.011906130106237992,
6
+ "alias": "winogrande"
7
+ }
8
+ },
9
+ "configs": {
10
+ "winogrande": {
11
+ "task": "winogrande",
12
+ "dataset_path": "/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/eval/winogrande",
13
+ "dataset_name": "winogrande_xl",
14
+ "training_split": "train",
15
+ "validation_split": "validation",
16
+ "doc_to_text": "def doc_to_text(doc):\n answer_to_num = {\"1\": 0, \"2\": 1}\n return answer_to_num[doc[\"answer\"]]\n",
17
+ "doc_to_target": "def doc_to_target(doc):\n idx = doc[\"sentence\"].index(\"_\") + 1\n return doc[\"sentence\"][idx:].strip()\n",
18
+ "doc_to_choice": "def doc_to_choice(doc):\n idx = doc[\"sentence\"].index(\"_\")\n options = [doc[\"option1\"], doc[\"option2\"]]\n return [doc[\"sentence\"][:idx] + opt for opt in options]\n",
19
+ "description": "",
20
+ "target_delimiter": " ",
21
+ "fewshot_delimiter": "\n\n",
22
+ "num_fewshot": 5,
23
+ "metric_list": [
24
+ {
25
+ "metric": "acc",
26
+ "aggregation": "mean",
27
+ "higher_is_better": true
28
+ }
29
+ ],
30
+ "output_type": "multiple_choice",
31
+ "repeats": 1,
32
+ "should_decontaminate": true,
33
+ "doc_to_decontamination_query": "sentence",
34
+ "metadata": {
35
+ "version": 1.0
36
+ }
37
+ }
38
+ },
39
+ "versions": {
40
+ "winogrande": 1.0
41
+ },
42
+ "n-shot": {
43
+ "winogrande": 5
44
+ },
45
+ "config": {
46
+ "model": "vllm",
47
+ "model_args": "pretrained=/lustre07/scratch/gagan30/arocr/meta-llama/self_rewarding_models/Oasis,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1,max_model_len=4096",
48
+ "batch_size": "auto:128",
49
+ "batch_sizes": [],
50
+ "device": "cuda",
51
+ "use_cache": "/lustre07/scratch/gagan30/arocr/cache/",
52
+ "limit": null,
53
+ "bootstrap_iters": 100000,
54
+ "gen_kwargs": null
55
+ },
56
+ "git_hash": null
57
+ }
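
Winogrande inverts the usual multiple-choice layout: `doc_to_choice` builds one candidate context per option, `doc_to_target` extracts the fixed continuation after the blank, and `doc_to_text` returns the gold index. The three functions above, exercised on a made-up item (sketch, not repo code):

```python
# Toy demonstration of the winogrande helpers defined in the config above.
doc = {
    "sentence": "The trophy didn't fit in the suitcase because _ was too big.",
    "option1": "the trophy",
    "option2": "the suitcase",
    "answer": "1",
}

def doc_to_text(doc):                      # gold index into the choices
    return {"1": 0, "2": 1}[doc["answer"]]

def doc_to_target(doc):                    # shared continuation after "_"
    idx = doc["sentence"].index("_") + 1
    return doc["sentence"][idx:].strip()

def doc_to_choice(doc):                    # one context per option
    idx = doc["sentence"].index("_")
    options = [doc["option1"], doc["option2"]]
    return [doc["sentence"][:idx] + opt for opt in options]

print(doc_to_choice(doc))   # two candidate contexts
print(doc_to_target(doc))   # "was too big."
print(doc_to_text(doc))     # 0 -> option1 is the correct filler
```

The harness scores the fixed continuation under each candidate context and picks the context with the higher likelihood.
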
special_tokens_map.json ADDED
@@ -0,0 +1,35 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<unk>",
4
+ "<s>",
5
+ "</s>"
6
+ ],
7
+ "bos_token": {
8
+ "content": "<s>",
9
+ "lstrip": false,
10
+ "normalized": false,
11
+ "rstrip": false,
12
+ "single_word": false
13
+ },
14
+ "eos_token": {
15
+ "content": "</s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false
20
+ },
21
+ "pad_token": {
22
+ "content": "<s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false
27
+ },
28
+ "unk_token": {
29
+ "content": "<unk>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false
34
+ }
35
+ }
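
One detail worth noting in the map above: `pad_token` reuses `<s>`. The base Llama/Mistral tokenizer ships no dedicated padding token, so reusing BOS is a common stand-in. A quick verification sketch (the local path is hypothetical):

```python
# Sketch: confirm the loaded tokenizer reports the special tokens above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/Oasis")  # hypothetical local path
print(tok.bos_token, tok.eos_token, tok.unk_token, tok.pad_token)
# expected per this file: <s> </s> <unk> <s>   (BOS reused as pad)
```
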
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
3
+ size 493443
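
`tokenizer.model` is stored as a Git LFS pointer: the three lines above record only the spec version, the SHA-256 of the real blob, and its byte size. A downloaded copy can be checked against the pointer (sketch; assumes the file exists locally):

```python
# Sketch: verify a Git LFS download against the pointer's oid.
import hashlib

with open("tokenizer.model", "rb") as f:   # hypothetical local download
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest)  # should match the sha256 oid above for an intact file
```
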
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ }
29
+ },
30
+ "additional_special_tokens": [
31
+ "<unk>",
32
+ "<s>",
33
+ "</s>"
34
+ ],
35
+ "bos_token": "<s>",
36
+ "clean_up_tokenization_spaces": false,
37
+ "eos_token": "</s>",
38
+ "legacy": true,
39
+ "max_length": null,
40
+ "model_max_length": 255,
41
+ "pad_to_multiple_of": null,
42
+ "pad_token": "<s>",
43
+ "pad_token_type_id": 0,
44
+ "padding_side": "left",
45
+ "sp_model_kwargs": {},
46
+ "spaces_between_special_tokens": false,
47
+ "tokenizer_class": "LlamaTokenizer",
48
+ "unk_token": "<unk>",
49
+ "use_default_system_prompt": true
50
+ }
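
The config above sets `"padding_side": "left"` with `<s>` as the pad token. For a decoder-only model this is the right choice: padding placed before the prompt keeps the generated tokens contiguous with the real end of each sequence in a batch. A minimal sketch (local path hypothetical; BOS is prepended automatically because `add_bos_token` is true):

```python
# Sketch: left-padded batch tokenization for generation.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/Oasis")  # hypothetical local path
prompts = ["[INST] Hi [/INST]", "[INST] A longer example prompt [/INST]"]
batch = tok(prompts, padding=True, return_tensors="pt")
# Shorter rows are padded on the left with <s>, so model.generate(**batch)
# appends new tokens directly after each prompt's final real token.
print(batch["input_ids"].shape, batch["attention_mask"][0])
```
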