system HF staff committed on
Commit
9c02eb9
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,224 @@
---
annotations_creators:
- expert-generated
- found
language_creators:
- expert-generated
- found
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
  BioASQ:
  - 10K<n<100K
  DuoRC:
  - 1K<n<10K
  HotpotQA:
  - 100K<n<1M
  NaturalQuestions:
  - 100K<n<1M
  RelationExtraction:
  - 1K<n<10K
  SQuAD:
  - 100K<n<1M
  SearchQA:
  - n>1M
  TextbookQA:
  - 10K<n<100K
  TriviaQA:
  - n>1M
source_datasets:
  BioASQ:
  - extended|other-BioASQ
  DuoRC:
  - extended|other-DuoRC
  HotpotQA:
  - extended|other-HotpotQA
  NaturalQuestions:
  - extended|other-Natural-Questions
  RelationExtraction:
  - extended|other-Relation-Extraction
  SQuAD:
  - extended|other-SQuAD
  SearchQA:
  - extended|other-SearchQA
  TextbookQA:
  - extended|other-TextbookQA
  TriviaQA:
  - extended|other-TriviaQA
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
---
# Dataset Card for MultiReQA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://github.com/google-research-datasets/MultiReQA
- **Repository:** https://github.com/google-research-datasets/MultiReQA
- **Paper:** https://arxiv.org/pdf/2005.02507.pdf
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

MultiReQA contains the sentence boundary annotations from eight publicly available QA datasets: SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these (SearchQA, TriviaQA, HotpotQA, NaturalQuestions, and SQuAD) provide both training and test data, and three (BioASQ, RelationExtraction, and TextbookQA) provide only test data. The released data also includes a DuoRC test set, although it is not listed in the official documentation.

### Supported Tasks and Leaderboards

- Question answering (QA)
- Retrieval question answering (ReQA)

### Languages

The data is in English. Sentence boundary annotations are provided for SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, TextbookQA and DuoRC.

## Dataset Structure

### Data Instances

The general format is:
`
{
  "candidate_id": <candidate_id>,
  "response_start": <response_start>,
  "response_end": <response_end>
}
...
`

An example from SearchQA:
`{'candidate_id': 'SearchQA_000077f3912049dfb4511db271697bad/_0_1',
 'response_end': 306,
 'response_start': 243}`

### Data Fields

`
{
  "candidate_id": <STRING>,
  "response_start": <INT>,
  "response_end": <INT>
}
...
`
- **candidate_id:** The id of the candidate sentence, built from the original qid of the MRQA shared task.
- **response_start:** The start index of the sentence with respect to its original context.
- **response_end:** The end index of the sentence with respect to its original context.

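For orientation, here is a minimal, hypothetical sketch of how these offsets are meant to be used. The passage text itself is not part of this dataset (it lives in the original MRQA releases), so the `context` string and the offsets below are toy values:

```python
# Toy, self-contained sketch (values are made up, not taken from the real data).
# `context` stands in for a passage from the original MRQA release, which is not
# included in this dataset; only candidate ids and offsets are shipped here.
context = "Paris is the capital of France. It is known for the Eiffel Tower."

candidate = {
    "candidate_id": "toy_example/_0_1",  # hypothetical id; real ids derive from MRQA qids
    "response_start": 0,                 # offset of the sentence start in `context`
    "response_end": 31,                  # offset one past the sentence end
}

# Treating the offsets as indices into the original context, the candidate
# sentence is recovered by slicing between them.
sentence = context[candidate["response_start"] : candidate["response_end"]]
print(sentence)  # -> Paris is the capital of France.
```
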
### Data Splits

Train and dev splits are available only for the following datasets:
- SearchQA
- TriviaQA
- HotpotQA
- SQuAD
- NaturalQuestions

Test splits are available only for the following datasets:
- BioASQ
- RelationExtraction
- TextbookQA

The number of candidate sentences for each dataset is shown in the table below.

| Dataset            | train   | test    |
|--------------------|---------|---------|
| SearchQA           | 629,160 | 454,836 |
| TriviaQA           | 335,659 | 238,339 |
| HotpotQA           | 104,973 | 52,191  |
| SQuAD              | 87,133  | 10,642  |
| NaturalQuestions   | 106,521 | 22,118  |
| BioASQ             | -       | 14,158  |
| RelationExtraction | -       | 3,301   |
| TextbookQA         | -       | 3,701   |

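For reference, a minimal sketch of loading individual configurations with the `datasets` library; it assumes the dataset is published under the `multi_re_qa` name used by the loading script later in this commit:

```python
from datasets import load_dataset

# SearchQA, TriviaQA, HotpotQA, SQuAD and NaturalQuestions expose train/validation
# splits; BioASQ, RelationExtraction, TextbookQA and DuoRC expose only a test split.
searchqa = load_dataset("multi_re_qa", "SearchQA")
print(searchqa["train"][0])
# {'candidate_id': '...', 'response_start': ..., 'response_end': ...}

bioasq = load_dataset("multi_re_qa", "BioASQ", split="test")
print(len(bioasq))  # 14,158 candidate sentences, per the table above
```
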
## Dataset Creation

### Curation Rationale

MultiReQA is a new multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets of the [MRQA shared task](https://mrqa.github.io/). The dataset was curated by converting these existing QA datasets into the MultiReQA benchmark format.

### Source Data

#### Initial Data Collection and Normalization

The initial data collection was performed by converting the existing QA datasets from the MRQA shared task into the MultiReQA benchmark format.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

The annotators/curators of the dataset are [mandyguo-xyguo](https://github.com/mandyguo-xyguo) and [mwurts4google](https://github.com/mwurts4google), the contributors of the official MultiReQA GitHub repository.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The annotators/curators of the dataset are [mandyguo-xyguo](https://github.com/mandyguo-xyguo) and [mwurts4google](https://github.com/mwurts4google), the contributors of the official MultiReQA GitHub repository.

### Licensing Information

[More Information Needed]

### Citation Information

```
@misc{m2020multireqa,
    title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},
    author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},
    year={2020},
    eprint={2005.02507},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
dataset_infos.json ADDED
@@ -0,0 +1 @@
 
+ {"SearchQA": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "SearchQA", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 183902877, "num_examples": 3163801, "dataset_name": "multi_re_qa"}, "validation": {"name": "validation", "num_bytes": 26439174, "num_examples": 454836, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/SearchQA/candidates.json.gz": {"num_bytes": 32368716, "checksum": "adf6fe37aff7929b7be33fb105571b80db89adc3cee2093c8357b678c1b4c76c"}, "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/SearchQA/candidates.json.gz": {"num_bytes": 4623243, "checksum": "00c361a17babd40b9144a570bbadacba37136b638f0a1f55c49fe58fca1606a9"}}, "download_size": 36991959, "post_processing_size": null, "dataset_size": 210342051, "size_in_bytes": 247334010}, "TriviaQA": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "TriviaQA", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 107326326, "num_examples": 1893674, "dataset_name": "multi_re_qa"}, "validation": {"name": "validation", "num_bytes": 13508062, "num_examples": 238339, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/TriviaQA/candidates.json.gz": {"num_bytes": 19336595, "checksum": "ff43a7ec9243f4c5631ec50fa799f0dfbcf4dec2b4116da3aaacffe0b7fe22ee"}, "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/TriviaQA/candidates.json.gz": {"num_bytes": 2413807, "checksum": "bf2f41e4f85fcdc163a6cb2ad7f1f711c185463ee701b4e29c9da5c19d5da641"}}, "download_size": 21750402, "post_processing_size": null, "dataset_size": 120834388, "size_in_bytes": 142584790}, "HotpotQA": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "HotpotQA", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 29516866, "num_examples": 508879, "dataset_name": "multi_re_qa"}, "validation": {"name": "validation", "num_bytes": 3027229, "num_examples": 52191, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/HotpotQA/candidates.json.gz": {"num_bytes": 5760488, "checksum": "1e19145a13aea9101edaaa3e79f19518b9bf0b1539e1912f5a4bec8c406bcbbc"}, "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/HotpotQA/candidates.json.gz": {"num_bytes": 582901, "checksum": "f359dde781dc7772d817c81d1f1c28fcdedb8858b4502a7bd7234d1da5e10395"}}, "download_size": 6343389, "post_processing_size": null, "dataset_size": 32544095, "size_in_bytes": 38887484}, "SQuAD": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "SQuAD", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 16828974, "num_examples": 95659, "dataset_name": "multi_re_qa"}, "validation": {"name": "validation", "num_bytes": 2012997, "num_examples": 10642, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/SQuAD/candidates.json.gz": {"num_bytes": 2685384, "checksum": "efdcc6576283194be5ce8cb1cc51ffc15200e8b116479b4eda06b2e4b6b77bd0"}, "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/SQuAD/candidates.json.gz": {"num_bytes": 318262, "checksum": "dc0fa9e536afa6969212cc5547dced39147ac93e007438464575ef4038dfd512"}}, "download_size": 3003646, "post_processing_size": null, "dataset_size": 18841971, "size_in_bytes": 21845617}, "NaturalQuestions": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "NaturalQuestions", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 28732767, "num_examples": 448355, "dataset_name": "multi_re_qa"}, "validation": {"name": "validation", "num_bytes": 1418124, "num_examples": 22118, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/NaturalQuestions/candidates.json.gz": {"num_bytes": 5794887, "checksum": "dc39392d7a4995024a3d8fc127607e2cdea9081ed17c7c014bb5ffca220474da"}, "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/NaturalQuestions/candidates.json.gz": {"num_bytes": 329600, "checksum": "4e9a422272d399206bc20438435fb60d4faddd4dc901db760d97b614cc082dd5"}}, "download_size": 6124487, "post_processing_size": null, "dataset_size": 30150891, "size_in_bytes": 36275378}, "BioASQ": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "BioASQ", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 766190, "num_examples": 14158, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/BioASQ/candidates.json.gz": {"num_bytes": 156649, "checksum": "4312adbb038532564f4178018c32c22b46d5d2a0a896900b72bc6f4df3ec0d99"}}, "download_size": 156649, "post_processing_size": null, "dataset_size": 766190, "size_in_bytes": 922839}, "RelationExtraction": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "RelationExtraction", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 217870, "num_examples": 3301, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/RelationExtraction/candidates.json.gz": {"num_bytes": 73019, "checksum": "23fcafe68a91367928a537e0220d2e52e9c5a662dd9976c102267640566b2f34"}}, "download_size": 73019, "post_processing_size": null, "dataset_size": 217870, "size_in_bytes": 290889}, "TextbookQA": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "TextbookQA", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 4182675, "num_examples": 71147, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/TextbookQA/candidates.json.gz": {"num_bytes": 704602, "checksum": "ac7a7dbae67afcce708c7ba6867991d8410ab92a8884964ec077898672f97208"}}, "download_size": 704602, "post_processing_size": null, "dataset_size": 4182675, "size_in_bytes": 4887277}, "DuoRC": {"description": "MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data", "citation": "@misc{m2020multireqa,\n title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},\n author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},\n year={2020},\n eprint={2005.02507},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "homepage": "https://github.com/google-research-datasets/MultiReQA", "license": "", "features": {"candidate_id": {"dtype": "string", "id": null, "_type": "Value"}, "response_start": {"dtype": "int32", "id": null, "_type": "Value"}, "response_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_re_qa", "config_name": "DuoRC", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1483518, "num_examples": 5525, "dataset_name": "multi_re_qa"}}, "download_checksums": {"https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/DuoRC/candidates.json.gz": {"num_bytes": 97625, "checksum": "0ce13953cf96a2f9d2f9a0b0dee7249c98dc95690a00e34236059f59f5ebc674"}}, "download_size": 97625, "post_processing_size": null, "dataset_size": 1483518, "size_in_bytes": 1581143}}
dummy/BioASQ/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9cbe0515ccf2e6b30f17421a3a868c514a20325dc28e7d2f7e7df0ee15d6194f
size 399
dummy/DuoRC/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e701d0fa5d7ff00534241383f61bb98ff3ea1f16079beb70579c305351833903
size 581
dummy/HotpotQA/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1a0cb384f2e0691eec295047d55e94e12ee870986f02c75c59684738b3117b9
size 411
dummy/NaturalQuestions/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:beafa10593d274bd0b61e081303e8d341e44690ee801064c231adafff7de8644
size 422
dummy/RelationExtraction/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a69b3451ece344b255cfe437195658341105673badbc6ae2b2e9b0cf05a60bd6
size 448
dummy/SQuAD/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:651d7fb0f81e11af3dc3a9cd8c75c4ac85568fdfe4ded21a73e005ae100ecdc9
size 479
dummy/SearchQA/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3fdb892ad46a7b343c8fa0b0263e98b67b5e627c62a05ee2ee428ec4101a40c4
size 400
dummy/TextbookQA/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e396e67a0e1bccbc8e387639266205188cb0d88be7a1dbe4d298a62d22f5de74
size 397
dummy/TriviaQA/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6429faf8d15c0abd8f51b9c4a1ada5ea8cf7ba42170add816318f37c453bc30a
size 395
multi_re_qa.py ADDED
@@ -0,0 +1,227 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models."""

from __future__ import absolute_import, division, print_function

import json
import os

import datasets


# Find for instance the citation on arxiv or on the dataset repo/website
_CITATION = """\
@misc{m2020multireqa,
    title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},
    author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},
    year={2020},
    eprint={2005.02507},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}"""
# You can copy an official description
_DESCRIPTION = """MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, including BioASQ, RelationExtraction, TextbookQA, contain only the test data"""

_HOMEPAGE = "https://github.com/google-research-datasets/MultiReQA"

# License for the dataset is not available
_LICENSE = ""

# Official links to the data hosted on github are below
# Train and Dev sets are available only for SearchQA, TriviaQA, HotpotQA, SQuAD and NaturalQuestions
# Test sets are only available for BioASQ, RelationExtraction and TextbookQA

train_SearchQA = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/SearchQA/candidates.json.gz"
)
dev_SearchQA = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/SearchQA/candidates.json.gz"

train_TriviaQA = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/TriviaQA/candidates.json.gz"
)
dev_TriviaQA = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/TriviaQA/candidates.json.gz"

train_HotpotQA = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/HotpotQA/candidates.json.gz"
)
dev_HotpotQA = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/HotpotQA/candidates.json.gz"

train_SQuAD = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/SQuAD/candidates.json.gz"
dev_SQuAD = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/SQuAD/candidates.json.gz"

train_NaturalQuestions = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/train/NaturalQuestions/candidates.json.gz"
)
dev_NaturalQuestions = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/dev/NaturalQuestions/candidates.json.gz"
)

test_BioASQ = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/BioASQ/candidates.json.gz"

test_RelationExtraction = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/RelationExtraction/candidates.json.gz"
)

test_TextbookQA = (
    "https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/TextbookQA/candidates.json.gz"
)

test_DuoRC = "https://github.com/google-research-datasets/MultiReQA/raw/master/data/test/DuoRC/candidates.json.gz"


class MultiReQa(datasets.GeneratorBasedBuilder):
    """MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA."""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="SearchQA", version=VERSION, description="SearchQA"),
        datasets.BuilderConfig(name="TriviaQA", version=VERSION, description="TriviaQA"),
        datasets.BuilderConfig(name="HotpotQA", version=VERSION, description="HotpotQA"),
        datasets.BuilderConfig(name="SQuAD", version=VERSION, description="SQuAD"),
        datasets.BuilderConfig(name="NaturalQuestions", version=VERSION, description="NaturalQuestions"),
        datasets.BuilderConfig(name="BioASQ", version=VERSION, description="BioASQ"),
        datasets.BuilderConfig(name="RelationExtraction", version=VERSION, description="RelationExtraction"),
        datasets.BuilderConfig(name="TextbookQA", version=VERSION, description="TextbookQA"),
        datasets.BuilderConfig(name="DuoRC", version=VERSION, description="DuoRC"),
    ]

    # DEFAULT_CONFIG_NAME = "SearchQA"  # It's not mandatory to have a default configuration. Just use one if it makes sense.

    def _info(self):
        # This method specifies the datasets.DatasetInfo object which contains the information and typings for the dataset
        if self.config.name == "SearchQA":  # This is the name of the configuration selected in BUILDER_CONFIGS above
            features = datasets.Features(
                {
                    "candidate_id": datasets.Value("string"),
                    "response_start": datasets.Value("int32"),
                    "response_end": datasets.Value("int32"),
                }
            )
        else:
            features = datasets.Features(
                {
                    "candidate_id": datasets.Value("string"),
                    "response_start": datasets.Value("int32"),
                    "response_end": datasets.Value("int32"),
                }
            )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""

        if (
            self.config.name == "SearchQA"
            or self.config.name == "TriviaQA"
            or self.config.name == "HotpotQA"
            or self.config.name == "SQuAD"
            or self.config.name == "NaturalQuestions"
        ):
            if self.config.name == "SearchQA":
                train_file_url = train_SearchQA
                dev_file_url = dev_SearchQA

            elif self.config.name == "TriviaQA":
                train_file_url = train_TriviaQA
                dev_file_url = dev_TriviaQA

            elif self.config.name == "HotpotQA":
                train_file_url = train_HotpotQA
                dev_file_url = dev_HotpotQA

            elif self.config.name == "SQuAD":
                train_file_url = train_SQuAD
                dev_file_url = dev_SQuAD

            elif self.config.name == "NaturalQuestions":
                train_file_url = train_NaturalQuestions
                dev_file_url = dev_NaturalQuestions

            train_file = dl_manager.download_and_extract(train_file_url)
            dev_file = dl_manager.download_and_extract(dev_file_url)

            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    # These kwargs will be passed to _generate_examples
                    gen_kwargs={
                        "filepath": os.path.join(train_file),
                        "split": "train",
                    },
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    # These kwargs will be passed to _generate_examples
                    gen_kwargs={
                        "filepath": os.path.join(dev_file),
                        "split": "dev",
                    },
                ),
            ]
        else:

            if self.config.name == "BioASQ":
                test_file_url = test_BioASQ

            elif self.config.name == "RelationExtraction":
                test_file_url = test_RelationExtraction

            elif self.config.name == "TextbookQA":
                test_file_url = test_TextbookQA

            elif self.config.name == "DuoRC":
                test_file_url = test_DuoRC

            test_file = dl_manager.download_and_extract(test_file_url)

            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    # These kwargs will be passed to _generate_examples
                    gen_kwargs={
                        "filepath": os.path.join(test_file),
                        "split": "test",
                    },
                ),
            ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        # This method will receive as arguments the `gen_kwargs` defined in the previous `_split_generators` method.
        # It is in charge of opening the given file and yielding (key, example) tuples from the dataset.
        # The key is not important, it's mostly here for legacy reasons (from tfds).

        with open(filepath, encoding="utf-8") as f:
            for id_, row in enumerate(f):
                data = json.loads(row)
                yield id_, {
                    "candidate_id": data["candidate_id"],
                    "response_start": data["response_start"],
                    "response_end": data["response_end"],
                }
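
Not part of the commit, but as a quick way to sanity-check the script above, one can point `load_dataset` at the local file; the config names and resulting splits follow the builder configs defined in the script (the path below is illustrative):

```python
from datasets import load_dataset

# Load directly from the local loading script added in this commit. "DuoRC" is one
# of the test-only configurations, so the returned DatasetDict holds a single
# "test" split of candidate ids and offsets.
duorc = load_dataset("./multi_re_qa.py", "DuoRC")
print(duorc)
print(duorc["test"][0])  # {'candidate_id': ..., 'response_start': ..., 'response_end': ...}
```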