Dataset: openbookqa
Tasks: Question Answering
Formats: parquet
Sub-tasks: open-domain-qa
Languages: English
Size: 10K - 100K
License:
Commit dd6edb0
Parent(s): f578e79

Add missing features to openbookqa dataset for additional config (#4278)

* Clean code
* Add missing features for 'additional' config
* Set main config as default
* Update metadata
* Update version number
* Update metadata
* Fix typo
* Update dummy data

Commit from https://github.com/huggingface/datasets/commit/86995fd86308e34f732cd3a3deb9a4e0cc8945cf
- README.md +34 -12
- dataset_infos.json +1 -1
- dummy/additional/{1.0.0 → 1.0.1}/dummy_data.zip +0 -0
- dummy/main/{1.0.0 → 1.0.1}/dummy_data.zip +0 -0
- openbookqa.py +64 -71
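Before the per-file diffs, here is a minimal usage sketch of what the change enables once a `datasets` release includes this commit. The config names, the new default, and the extra field names are taken from the diff below; the loading calls themselves are illustrative, not part of the commit.

```python
from datasets import load_dataset

# "main" is now the default config, so the config name can be omitted.
main = load_dataset("openbookqa")                 # equivalent to load_dataset("openbookqa", "main")
extra = load_dataset("openbookqa", "additional")  # config that now exposes the extra annotation fields

print(main["train"][0]["question_stem"])
# Fields added for the "additional" config by this commit:
print(extra["train"][0]["fact1"], extra["train"][0]["humanScore"])
```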
README.md
CHANGED
@@ -79,33 +79,51 @@ a subject.
 
 ### Data Instances
 
-####
+#### main
 
 - **Size of downloaded dataset files:** 1.38 MB
 - **Size of the generated dataset:** 1.38 MB
 - **Total amount of disk used:** 2.75 MB
 
-An example of 'train' looks as follows
+An example of 'train' looks as follows:
 ```
-…
+{'id': '7-980',
+ 'question_stem': 'The sun is responsible for',
+ 'choices': {'text': ['puppies learning new tricks',
+                      'children growing up and getting old',
+                      'flowers wilting in a vase',
+                      'plants sprouting, blooming and wilting'],
+             'label': ['A', 'B', 'C', 'D']},
+ 'answerKey': 'D'}
 ```
 
-####
+#### additional
 
 - **Size of downloaded dataset files:** 1.38 MB
 - **Size of the generated dataset:** 1.38 MB
 - **Total amount of disk used:** 2.75 MB
 
-An example of '
+An example of 'train' looks as follows:
 ```
-…
+{'id': '7-980',
+ 'question_stem': 'The sun is responsible for',
+ 'choices': {'text': ['puppies learning new tricks',
+                      'children growing up and getting old',
+                      'flowers wilting in a vase',
+                      'plants sprouting, blooming and wilting'],
+             'label': ['A', 'B', 'C', 'D']},
+ 'answerKey': 'D',
+ 'fact1': 'the sun is the source of energy for physical cycles on Earth',
+ 'humanScore': 1.0,
+ 'clarity': 2.0,
+ 'turkIdAnonymized': 'b356d338b7'}
 ```
 
 ### Data Fields
 
 The data fields are the same among all splits.
 
-####
+#### main
 - `id`: a `string` feature.
 - `question_stem`: a `string` feature.
 - `choices`: a dictionary feature containing:
@@ -113,20 +131,24 @@ The data fields are the same among all splits.
   - `label`: a `string` feature.
 - `answerKey`: a `string` feature.
 
-####
+#### additional
 - `id`: a `string` feature.
 - `question_stem`: a `string` feature.
 - `choices`: a dictionary feature containing:
   - `text`: a `string` feature.
   - `label`: a `string` feature.
 - `answerKey`: a `string` feature.
+- `fact1` (`str`): Originating common knowledge core fact associated to the question.
+- `humanScore` (`float`): Human accuracy score.
+- `clarity` (`float`): Clarity score.
+- `turkIdAnonymized` (`str`): Anonymized crowd-worker ID.
 
 ### Data Splits
 
-…
+| name       | train | validation | test |
+|------------|------:|-----------:|-----:|
+| main       |  4957 |        500 |  500 |
+| additional |  4957 |        500 |  500 |
 
 ## Dataset Creation
 
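A quick spot-check of the split counts documented in the table above; this is a hypothetical snippet, not part of the commit.

```python
from datasets import load_dataset

# Compare the loaded split sizes against the README table.
for config in ("main", "additional"):
    ds = load_dataset("openbookqa", config)
    print(config, {split: ds[split].num_rows for split in ds})
    # expected for both configs: {'train': 4957, 'validation': 500, 'test': 500}
```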
dataset_infos.json
CHANGED
@@ -1 +1 @@
-{"main": {"description": "OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic\n(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In\nparticular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,\nand rich text comprehension.\nOpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding
+{"main": {"description": "OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic\n(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In\nparticular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,\nand rich text comprehension.\nOpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding\nof a subject.\n", "citation": "@inproceedings{OpenBookQA2018,\n    title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},\n    author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},\n    booktitle={EMNLP},\n    year={2018}\n}\n", "homepage": "https://allenai.org/data/open-book-qa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question_stem": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "openbookqa", "config_name": "main", "version": {"version_str": "1.0.1", "description": "", "major": 1, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 896034, "num_examples": 4957, "dataset_name": "openbookqa"}, "validation": {"name": "validation", "num_bytes": 95519, "num_examples": 500, "dataset_name": "openbookqa"}, "test": {"name": "test", "num_bytes": 91850, "num_examples": 500, "dataset_name": "openbookqa"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/OpenBookQA-V1-Sep2018.zip": {"num_bytes": 1446098, "checksum": "82368cf05df2e3b309c17d162e10b888b4d768fad6e171e0a041954c8553be46"}}, "download_size": 1446098, "post_processing_size": null, "dataset_size": 1083403, "size_in_bytes": 2529501}, "additional": {"description": "OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic\n(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In\nparticular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,\nand rich text comprehension.\nOpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding\nof a subject.\n", "citation": "@inproceedings{OpenBookQA2018,\n    title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},\n    author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},\n    booktitle={EMNLP},\n    year={2018}\n}\n", "homepage": "https://allenai.org/data/open-book-qa", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question_stem": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}, "fact1": {"dtype": "string", "id": null, "_type": "Value"}, "humanScore": {"dtype": "float32", "id": null, "_type": "Value"}, "clarity": {"dtype": "float32", "id": null, "_type": "Value"}, "turkIdAnonymized": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "openbookqa", "config_name": "additional", "version": {"version_str": "1.0.1", "description": "", "major": 1, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 1290473, "num_examples": 4957, "dataset_name": "openbookqa"}, "validation": {"name": "validation", "num_bytes": 136141, "num_examples": 500, "dataset_name": "openbookqa"}, "test": {"name": "test", "num_bytes": 130926, "num_examples": 500, "dataset_name": "openbookqa"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/OpenBookQA-V1-Sep2018.zip": {"num_bytes": 1446098, "checksum": "82368cf05df2e3b309c17d162e10b888b4d768fad6e171e0a041954c8553be46"}}, "download_size": 1446098, "post_processing_size": null, "dataset_size": 1557540, "size_in_bytes": 3003638}}
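The `download_checksums` entry recorded above can be verified independently of the loader. A small sketch (not part of the commit), assuming the 64-character hex `checksum` value is a SHA-256 digest, which is the convention used by `datasets`:

```python
import hashlib
import urllib.request

# URL, size, and checksum copied from dataset_infos.json above.
URL = "https://s3-us-west-2.amazonaws.com/ai2-website/data/OpenBookQA-V1-Sep2018.zip"
EXPECTED_SIZE = 1446098
EXPECTED_SHA256 = "82368cf05df2e3b309c17d162e10b888b4d768fad6e171e0a041954c8553be46"

payload = urllib.request.urlopen(URL).read()
assert len(payload) == EXPECTED_SIZE, f"unexpected size: {len(payload)}"
assert hashlib.sha256(payload).hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("archive matches the recorded size and checksum")
```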
dummy/additional/{1.0.0 → 1.0.1}/dummy_data.zip
RENAMED
File without changes
dummy/main/{1.0.0 → 1.0.1}/dummy_data.zip
RENAMED
File without changes
openbookqa.py
CHANGED
@@ -1,4 +1,4 @@
-"""
+"""OpenBookQA dataset."""
 
 
 import json
@@ -8,7 +8,17 @@ import textwrap
 import datasets
 
 
-…
+_HOMEPAGE = "https://allenai.org/data/open-book-qa"
+
+_DESCRIPTION = """\
+OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic
+(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In
+particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,
+and rich text comprehension.
+OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding
+of a subject.
+"""
+
 _CITATION = """\
 @inproceedings{OpenBookQA2018,
     title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
@@ -18,39 +28,25 @@ _CITATION = """\
 }
 """
 
-# TODO(openBookQA):
-_DESCRIPTION = textwrap.dedent(
-    """\
-    OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic
-    (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In
-    particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,
-    and rich text comprehension.
-    OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of
-    a subject.
-    """
-)
 _URL = "https://s3-us-west-2.amazonaws.com/ai2-website/data/OpenBookQA-V1-Sep2018.zip"
 
 
 class OpenbookqaConfig(datasets.BuilderConfig):
-    def __init__(self, data_dir, **kwargs):
+    def __init__(self, data_dir=None, filenames=None, version=datasets.Version("1.0.1", ""), **kwargs):
         """BuilderConfig for openBookQA dataset
 
         Args:
           data_dir: directory for the given dataset name
           **kwargs: keyword arguments forwarded to super.
         """
-
-        super().__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-
+        super().__init__(version=version, **kwargs)
         self.data_dir = data_dir
+        self.filenames = filenames
 
 
 class Openbookqa(datasets.GeneratorBasedBuilder):
-    """
+    """OpenBookQA dataset."""
 
-    # TODO(openBookQA): Set up version.
-    VERSION = datasets.Version("0.1.0")
     BUILDER_CONFIGS = [
         OpenbookqaConfig(
             name="main",
@@ -65,6 +61,11 @@ class Openbookqa(datasets.GeneratorBasedBuilder):
                 """
             ),
             data_dir="Main",
+            filenames={
+                "train": "train.jsonl",
+                "validation": "dev.jsonl",
+                "test": "test.jsonl",
+            },
         ),
         OpenbookqaConfig(
             name="additional",
@@ -76,18 +77,19 @@ class Openbookqa(datasets.GeneratorBasedBuilder):
                 """
             ),
             data_dir="Additional",
+            filenames={
+                "train": "train_complete.jsonl",
+                "validation": "dev_complete.jsonl",
+                "test": "test_complete.jsonl",
+            },
         ),
     ]
+    DEFAULT_CONFIG_NAME = "main"
 
     def _info(self):
-…
-            # This is the description that will appear on the datasets page.
-            description=_DESCRIPTION,
-            # datasets.features.FeatureConnectors
-            features=datasets.Features(
+        if self.config.name == "main":
+            features = datasets.Features(
                 {
-                    # These are the features of your dataset like images, labels ...
                     "id": datasets.Value("string"),
                     "question_stem": datasets.Value("string"),
                     "choices": datasets.features.Sequence(
@@ -98,64 +100,51 @@ class Openbookqa(datasets.GeneratorBasedBuilder):
                     ),
                     "answerKey": datasets.Value("string"),
                 }
-            )
-…
+            )
+        else:
+            features = datasets.Features(
+                {
+                    "id": datasets.Value("string"),
+                    "question_stem": datasets.Value("string"),
+                    "choices": datasets.features.Sequence(
+                        {
+                            "text": datasets.Value("string"),
+                            "label": datasets.Value("string"),
+                        }
+                    ),
+                    "answerKey": datasets.Value("string"),
+                    "fact1": datasets.Value("string"),
+                    "humanScore": datasets.Value("float"),
+                    "clarity": datasets.Value("float"),
+                    "turkIdAnonymized": datasets.Value("string"),
+                }
+            )
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=features,
+            homepage=_HOMEPAGE,
             citation=_CITATION,
         )
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        # TODO(openBookQA): Downloads the data and defines the splits
-        # dl_manager is a datasets.download.DownloadManager that can be used to
-        # download and extract URLs
         dl_dir = dl_manager.download_and_extract(_URL)
-        data_dir = os.path.join(dl_dir, "OpenBookQA-V1-Sep2018", "Data")
-
-        train_file = (
-            os.path.join(data_dir, "train.jsonl")
-            if self.config.name == "main"
-            else os.path.join(data_dir, "train_complete.jsonl")
-        )
-        test_file = (
-            os.path.join(data_dir, "test.jsonl")
-            if self.config.name == "main"
-            else os.path.join(data_dir, "test_complete.jsonl")
-        )
-        dev_file = (
-            os.path.join(data_dir, "dev.jsonl")
-            if self.config.name == "main"
-            else os.path.join(data_dir, "dev_complete.jsonl")
-        )
+        data_dir = os.path.join(dl_dir, "OpenBookQA-V1-Sep2018", "Data", self.config.data_dir)
+        splits = [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]
         return [
             datasets.SplitGenerator(
-                name=
-…
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={"filepath": test_file},
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={"filepath": dev_file},
-            ),
+                name=split,
+                gen_kwargs={"filepath": os.path.join(data_dir, self.config.filenames[split])},
+            )
+            for split in splits
         ]
 
     def _generate_examples(self, filepath):
         """Yields examples."""
-        # TODO(openBookQA): Yields (key, example) tuples from the dataset
         with open(filepath, encoding="utf-8") as f:
-            for row in f:
+            for uid, row in enumerate(f):
                 data = json.loads(row)
-…
+                example = {
                     "id": data["id"],
                     "question_stem": data["question"]["stem"],
                     "choices": {
@@ -164,3 +153,7 @@ class Openbookqa(datasets.GeneratorBasedBuilder):
                     },
                     "answerKey": data["answerKey"],
                 }
+                if self.config.name == "additional":
+                    for key in ["fact1", "humanScore", "clarity", "turkIdAnonymized"]:
+                        example[key] = data[key]
+                yield uid, example
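Finally, a self-contained sketch of how one record flows through the new `_generate_examples` path for the `additional` config. The record values come from the README example above; the exact `*_complete.jsonl` record layout and the list-comprehension mapping of `choices` are assumptions, since those lines fall outside the hunks shown.

```python
import json

# One raw record in the assumed *_complete.jsonl layout (values taken from the README example).
raw = json.dumps({
    "id": "7-980",
    "question": {
        "stem": "The sun is responsible for",
        "choices": [
            {"text": "puppies learning new tricks", "label": "A"},
            {"text": "children growing up and getting old", "label": "B"},
            {"text": "flowers wilting in a vase", "label": "C"},
            {"text": "plants sprouting, blooming and wilting", "label": "D"},
        ],
    },
    "answerKey": "D",
    "fact1": "the sun is the source of energy for physical cycles on Earth",
    "humanScore": 1.0,
    "clarity": 2.0,
    "turkIdAnonymized": "b356d338b7",
})

# Mirrors the mapping done in _generate_examples (the choices comprehension is assumed).
data = json.loads(raw)
example = {
    "id": data["id"],
    "question_stem": data["question"]["stem"],
    "choices": {
        "text": [choice["text"] for choice in data["question"]["choices"]],
        "label": [choice["label"] for choice in data["question"]["choices"]],
    },
    "answerKey": data["answerKey"],
}
# Extra columns only present in the "additional" config files.
for key in ["fact1", "humanScore", "clarity", "turkIdAnonymized"]:
    example[key] = data[key]
print(example)
```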