Anders Johan Andreassen committed
Commit 3f0d9fa
1 Parent(s): 3b11da2

BIG-bench (#4125)


* Draft PR for bigbench datasets

* Update handling of tasks with non-rectangular targets

* updates before PR

* Modifying behavior for min num examples

* Updating error message

* adding strip to inputs and targets to match bbseqio implementation

* Updated readme

* added config doc string

* trimming linewidths to pass tests

* reformatting using black

* reformatting using black with py36

* using isort to fix import order

* formatting for flake8

* updated readme

* added YAML tags

* Adding bigbench github to additional requirements to pass tests

* replacing print statement with logger

* isort formatting

* adding missing t5 dependency to additional requirements to pass tests

* adding missing seqio dependency to additional requirements to pass tests

* adding missing tfds dependency to additional requirements to pass tests

* adding missing tf metadata dependency to additional requirements to pass tests

* Edits to make tests pass. Some modifications to args (task_name -> name), some edits to logging errors

* fixed configs and moved checks for subtask and task names

* update error messages and some formatting

* more test requirements

* file formatting

* adding dummy.zip

* additional requirements again

* tensorflow hub requirement

* removing dummy.zip file. wrong format

* adding individual dummy_data.zip for each config

* immutable dict requirement

* removing yaml flags text

* added en to languages

* adding dataset_infos.json

* Update datasets/bigbench/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/bigbench/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* adding bigbench to setup.py

* removing dependencies from additional-test-requirements.txt as they will now be in setup.py

* cleaning up bigbench import

* delete dummy data

* adding back in abstract_narrative_understanding dummy data

* updating setup dependencies to temp branch that imports bigbench without datasets=1.17. PR in bigbench on the way

* fixing isort format issue

* more formatting for isort and flake8

* commenting out bigbench to test apache-beam issue

* adding bigbench back into setup.py

* updating language section in readme

* Updated Dataset Creation in readme

* More updates to README

* updated index in readme

* update path to bigbench install

* dummy update to trigger tests

* switching to experimental bigbench branch to resolve dependency issues

* switching to experimental bigbench branch to resolve dependency issues

* trigger test

* trigger test

* trigger test

* trigger test

* add version

* switching back to tar install for bigbench

* formatting

* trigger test

* trigger test

* Adding back in empty sections to readme

* fix sections of readme

* Modifying dummy data. Passing empty folder to test failure

* deleting dummy data

* pin s3 dependencies to speed up pip backtracking

* disable s3 test on windows for now

* disable other test too

* explicitly require sentencepiece

* Apply suggestions from code review

* style

* Minor updates to README

* trigger test

* temporarily commenting out 4/5 test to focus on windows issue.

* switching to private github repo for bigbench while debugging current errors

* long filenames workaround attempt

* trigger new tests after adding editdistance on bigbench requirements remotely

* adding sacremoses

* changing to bigbench tar file

* reverting circleci to run all test

* Experiment: pinning apache-beam and removing sacremoses

* reverting circleci edit

* adding sacremoses back in and reverting apache-beam version

* someone changed dependencies of bigbench. testing to see if it causes issues

* updated bigbench requirements. triggering tests

* Update format_fn after breaking change from bigbench

* code formatting

Co-authored-by: Anders Johan Andreassen <ajandreassen@google.com>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>

Commit from https://github.com/huggingface/datasets/commit/72e8679dad1f7e1c0856dd2a6fb8c0149a8dc79c

Files changed (3)
  1. README.md +230 -0
  2. bigbench.py +271 -0
  3. dataset_infos.json +0 -0
README.md ADDED
@@ -0,0 +1,230 @@
---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- other
languages:
- en
licenses:
- apache-2.0
multilinguality:
- multilingual
- monolingual
pretty_name: bigbench
size_categories:
- unknown
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
- text-classification
- text-generation
- zero-shot-classification
- other
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- fact-checking
- acceptability-classification
- intent-classification
- multi-class-classification
- multi-label-classification
- text-scoring
- hate-speech-detection
- language-modeling
---
# Dataset Card for BIG-bench

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage/Repository:** [https://github.com/google/BIG-bench](https://github.com/google/BIG-bench)
- **Paper:** In progress
- **Leaderboard:**
- **Point of Contact:** [bigbench@googlegroups.com](mailto:bigbench@googlegroups.com)

### Dataset Summary

The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities. Tasks included in BIG-bench are summarized by keyword [here](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md), and by task name [here](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/README.md). A paper introducing the benchmark, including evaluation results on large language models, is currently in preparation.

### Supported Tasks and Leaderboards

BIG-bench consists of both json and programmatic tasks. This HuggingFace `datasets` implementation covers:

- 24 BIG-bench Lite tasks
- 167 BIG-bench json tasks (includes BIG-bench Lite)

To study the remaining programmatic tasks, please see the [BIG-bench GitHub repo](https://github.com/google/BIG-bench). The available task configurations can also be listed directly with the `datasets` library, as sketched below.
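A minimal sketch of enumerating the configurations (hedged: it assumes a working installation of this loading script's `bigbench` dependency; `get_dataset_config_names` is the standard `datasets` inspection helper):

```python
import datasets

# One configuration is exposed per BIG-bench json task.
config_names = datasets.get_dataset_config_names("bigbench")
print(len(config_names))  # expected to be 167 (the json tasks)
print(config_names[:5])   # a few task names
```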
### Languages

Although predominantly English, BIG-bench contains tasks in over 1000 written languages, as well as some synthetic and programming languages.
See [BIG-bench organized by keywords](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md). Relevant keywords include `multilingual`, `non-english`, `low-resource-language`, and `translation`.

For tasks specifically targeting low-resource languages, see the table below:

| Task Name | Languages |
|--|--|
| Conlang Translation Problems | English, German, Finnish, Abma, Apinayé, Inapuri, Ndebele, Palauan |
| Kannada Riddles | Kannada |
| Language Identification | 1000 languages |
| Swahili English Proverbs | Swahili |
| Which Wiki Edit | English, Russian, Spanish, German, French, Turkish, Japanese, Vietnamese, Chinese, Arabic, Norwegian, Tagalog |
## Dataset Structure

### Data Instances

Each dataset contains 5 features. For example, an instance from the `emoji_movie` task is:

```
{
  "idx": 0,
  "inputs": "Q: What movie does this emoji describe? 👦👓⚡️\n choice: harry potter\n. choice: shutter island\n. choice: inglourious basterds\n. choice: die hard\n. choice: moonlight\nA:",
  "targets": ["harry potter"],
  "multiple_choice_targets": ["harry potter", "shutter island", "die hard", "inglourious basterds", "moonlight"],
  "multiple_choice_scores": [1, 0, 0, 0, 0]
}
```

For tasks that do not have multiple choice targets, the lists are empty.
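Such an instance can be reproduced by loading the corresponding task configuration (a minimal sketch; it assumes the `bigbench` dependency required by this loading script is installed):

```python
from datasets import load_dataset

# Load the `emoji_movie` task; the "default" split holds all examples.
ds = load_dataset("bigbench", "emoji_movie", split="default")
print(ds[0]["inputs"])
print(ds[0]["targets"])
```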
### Data Fields

Every example has the following fields:
- `idx`: an `int` feature
- `inputs`: a `string` feature
- `targets`: a sequence of `string` features
- `multiple_choice_targets`: a sequence of `string` features
- `multiple_choice_scores`: a sequence of `int` features
### Data Splits

Each task has a `default`, `train`, and `validation` split.
The `default` split uses all the samples for each task (it is the same as the `all` split used in the `bigbench.bbseqio` implementation).
For standard evaluation on BIG-bench, we recommend using the `default` split; the `train` and `validation` splits are intended for training a model on BIG-bench, as illustrated below.
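A minimal sketch of selecting splits (hedged: `emoji_movie` is just the example task from above):

```python
from datasets import load_dataset

# "default" contains every example; "train"/"validation" support training setups.
eval_set = load_dataset("bigbench", "emoji_movie", split="default")
train_set = load_dataset("bigbench", "emoji_movie", split="train")
valid_set = load_dataset("bigbench", "emoji_movie", split="validation")
print(len(eval_set), len(train_set), len(valid_set))
```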
## Dataset Creation

BIG-bench tasks were collaboratively submitted through GitHub pull requests.

Each task went through a review and meta-review process with criteria outlined in the [BIG-bench repository documentation](https://github.com/google/BIG-bench/blob/main/docs/doc.md#submission-review-process).
Each task was required to describe the data source and curation methods on the task README page.

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

BIG-bench contains a wide range of tasks, some of which are sensitive and should be used with care.

Some tasks are specifically designed to test biases and failures common to large language models, and so may elicit inappropriate or harmful responses.
For a more thorough discussion, see the BIG-bench paper (in preparation).

To view tasks designed to probe pro-social behavior, including alignment; social, racial, gender, religious, or political bias; toxicity; inclusion; and other issues, please see tasks under the [pro-social behavior keywords](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md#pro-social-behavior) in the BIG-bench repository.
### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

For a more thorough discussion of all aspects of BIG-bench, including dataset creation and evaluations, see the BIG-bench repository [https://github.com/google/BIG-bench](https://github.com/google/BIG-bench) and the accompanying paper (in preparation).

### Dataset Curators

[More Information Needed]

### Licensing Information

[Apache License 2.0](https://github.com/google/BIG-bench/blob/main/LICENSE)

### Citation Information

To be added soon!

### Contributions

For a full list of contributors to the BIG-bench dataset, see the paper.

Thanks to [@andersjohanandreassen](https://github.com/andersjohanandreassen) and [@ethansdyer](https://github.com/ethansdyer) for adding this dataset to HuggingFace.
bigbench.py ADDED
@@ -0,0 +1,271 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""HuggingFace datasets implementation of the json tasks in the BIG-bench dataset.

For the programmatic tasks, please use the BIG-bench API on github.com/google/BIG-bench.
"""


from typing import Optional

import bigbench.api.util as bb_utils  # From: "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"
import bigbench.bbseqio.bigbench_bridge as bbb
from bigbench.api import json_task
from bigbench.bbseqio import bigbench_json_paths as bb_json_paths
from sentencepiece import sentencepiece_model_pb2  # noqa: this is also required by bigbench.api.util

import datasets


logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@InProceedings{bigbench,
title = {Beyond the Imitation Game: Quantifying and extrapolating the
capabilities of language models},
author={BIG-Bench Collaboration
},
year={2022}
}
"""

_DESCRIPTION = """\
The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to
probe large language models, and extrapolate their future capabilities.
"""

_HOMEPAGE = "https://github.com/google/BIG-bench"

_LICENSE = "Apache License 2.0"

def div_or_none(x, y):
    """Integer-divide x by y, passing falsy x (None or 0) through unchanged."""
    return x // y if x else x


def validate_task_name(task_name: str) -> None:
    """Check that the requested task name is a valid bigbench json task."""
    if task_name in bb_utils.get_all_json_task_names():
        return
    elif task_name in bb_utils.get_all_programmatic_task_names():
        raise ValueError(
            "BIG-bench does not support programmatic tasks through HuggingFace datasets. "
            f"Please see {_HOMEPAGE} for more information on how to interact with the programmatic tasks."
        )
    else:
        raise ValueError(
            f"Invalid task_name. Got task_name = {task_name}. Please choose one from:\n -- "
            + "\n -- ".join(bb_utils.get_all_json_task_names())
        )


def validate_subtask_name(task_name: str, subtask_name: str) -> None:
    """Check that the requested subtask name is a valid bigbench subtask."""
    subtasks = [name.split(":")[-1] for name in bb_utils.get_subtask_names_from_task(task_name)]
    if not subtasks:
        raise ValueError(f"Task {task_name} has no subtasks. Got subtask_name = {subtask_name}.")
    elif subtask_name not in subtasks:
        raise ValueError(
            f"Invalid subtask_name {subtask_name} for task {task_name}. Please choose one from:\n -- "
            + "\n -- ".join(subtasks)
        )


class BigBenchConfig(datasets.BuilderConfig):
    def __init__(
        self,
        name,
        subtask_name: Optional[str] = None,
        num_shots: int = 0,
        max_examples: Optional[int] = None,
        **kwargs,
    ):
        """BIG-bench configuration.

        Args:
          name: BIG-bench task name.
          subtask_name: BIG-bench subtask name. Accepts both "task_name:subtask_name" and "subtask_name" formats.
          num_shots: Number of few-shot examples in the input prompt. Default is zero.
          max_examples: Limit on the number of examples for each task. Default is to include all examples.
        """
        super().__init__(
            name=name,
            **kwargs,
        )
        self.task_name = name
        self.subtask_name = subtask_name
        self.num_shots = num_shots
        self.max_examples = max_examples


class Bigbench(datasets.GeneratorBasedBuilder):
    """The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark
    intended to probe large language models, and extrapolate their future capabilities."""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIG_CLASS = BigBenchConfig

    BUILDER_CONFIGS = [
        BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names()
    ]

    def _info(self):
        features = datasets.Features(
            {
                "idx": datasets.Value("int32"),
                "inputs": datasets.Value("string"),
                "targets": datasets.Sequence(datasets.Value("string")),
                "multiple_choice_targets": datasets.Sequence(datasets.Value("string")),
                "multiple_choice_scores": datasets.Sequence(datasets.Value("int32")),
            }
        )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types.
            features=features,
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation.
            homepage=_HOMEPAGE,
            # License for the dataset if available.
            license=_LICENSE,
            # Citation for the dataset.
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        return [
            datasets.SplitGenerator(
                name=datasets.splits.NamedSplit("default"),  # TODO(ajandreassen): Is there a way to call this 'all'?
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "split": "all",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "split": "validation",
                },
            ),
        ]

    def _generate_examples(
        self,
        split,  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    ):
        """Yields examples as (key, example) tuples."""
        validate_task_name(self.config.task_name)
        if self.config.subtask_name:
            # Subtasks are sometimes written in bigbench as task_name:subtask_name.
            # We want to remove the task_name from the subtask names:
            self.config.subtask_name = self.config.subtask_name.split(":")[-1]
            validate_subtask_name(self.config.task_name, self.config.subtask_name)

        if split == "all":
            # no cutoff in number of examples for the 'all' split
            MIN_VALIDATION_EXAMPLES = 0
        else:
            MIN_VALIDATION_EXAMPLES = 16

        try:
            task_path, json_util = bb_json_paths.get_task_path(self.config.task_name)

            has_subtasks = bb_json_paths.has_subtasks(self.config.task_name)
            if has_subtasks:
                subtask_names = bb_json_paths.get_subtask_names(self.config.task_name)
                num_subtasks = len(subtask_names)
                min_validation_examples_per_subtask = div_or_none(MIN_VALIDATION_EXAMPLES, num_subtasks)

            if not has_subtasks:
                ds_fn = bbb.get_dataset_fn(
                    task_name=self.config.task_name,
                    task_path=task_path,
                    subtask_name=None,
                    num_shots=self.config.num_shots,
                    bigbench_task_type=bbb.BigBenchTaskType.HUGGINGFACE,
                    max_examples=self.config.max_examples,
                    json_util=json_util,
                    min_validation_examples=MIN_VALIDATION_EXAMPLES,
                    format_fn=json_task.default_format_fn,
                )
                ds_list = [ds_fn(split)]
            elif self.config.subtask_name is not None:
                ds_fn = bbb.get_dataset_fn(
                    task_name=self.config.task_name,
                    task_path=task_path,
                    subtask_name=self.config.subtask_name,
                    num_shots=self.config.num_shots,
                    bigbench_task_type=bbb.BigBenchTaskType.HUGGINGFACE,
                    max_examples=self.config.max_examples,
                    json_util=json_util,
                    min_validation_examples=min_validation_examples_per_subtask,
                    format_fn=json_task.default_format_fn,
                )
                ds_list = [ds_fn(split)]
            else:
                # Create a mixture of all subtasks
                ds_list = []
                for subtask_name in subtask_names:
                    subtask_name = subtask_name.split(":")[-1]
                    logger.info(f"Loading subtask {subtask_name} ({split} split)")
                    ds_fn = bbb.get_dataset_fn(
                        task_name=self.config.task_name,
                        task_path=task_path,
                        subtask_name=subtask_name,
                        num_shots=self.config.num_shots,
                        bigbench_task_type=bbb.BigBenchTaskType.HUGGINGFACE,
                        max_examples=div_or_none(self.config.max_examples, num_subtasks),
                        json_util=json_util,
                        min_validation_examples=min_validation_examples_per_subtask,
                        format_fn=json_task.default_format_fn,
                    )
                    ds_list.append(ds_fn(split))
        except ValueError as value_error:
            # BIG-bench requires at least 16 examples to use the train & validation splits,
            # while using 'all'/'default' does not have such a requirement.
            if "has too few examples" in value_error.args[0] and split != "all":
                logger.warning(
                    f"-- WARNING: skipping split {split} because it has too few examples. Please use 'default' split."
                )
                logger.warning(value_error)
                return
            raise value_error

        unique_key_counter = 0
        for ds in ds_list:
            for example in ds:
                unique_key_counter += 1
                yield unique_key_counter, {
                    "idx": example["idx"],
                    "inputs": example["inputs"].numpy().decode().strip(),
                    "targets": [target.numpy().decode().strip() for target in example["targets"]],
                    "multiple_choice_targets": [
                        targets.decode().strip() for targets in example["multiple_choice_targets"].numpy()
                    ],
                    "multiple_choice_scores": [scores for scores in example["multiple_choice_scores"].numpy()],
                }
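For reference, the `num_shots` and `max_examples` arguments of `BigBenchConfig` above are forwarded by `load_dataset` as config kwargs. A minimal usage sketch (hedged: `emoji_movie` is the example task from the README, and the snippet assumes the `bigbench` dependency is installed):

```python
from datasets import load_dataset

# Build few-shot prompts (3 in-context examples) and cap the task at 100 examples.
ds = load_dataset(
    "bigbench",
    name="emoji_movie",
    num_shots=3,       # few-shot examples prepended to each input prompt
    max_examples=100,  # limit on the number of generated examples
    split="default",
)
print(ds[0]["inputs"])
```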
dataset_infos.json ADDED
The diff for this file is too large to render. See raw diff