---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|drop
- extended|hotpot_qa
- extended|natural_questions
- extended|race
- extended|search_qa
- extended|squad
- extended|trivia_qa
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: mrqa-2019
pretty_name: MRQA 2019
dataset_info:
  config_name: plain_text
  features:
  - name: subset
    dtype: string
  - name: context
    dtype: string
  - name: context_tokens
    sequence:
    - name: tokens
      dtype: string
    - name: offsets
      dtype: int32
  - name: qid
    dtype: string
  - name: question
    dtype: string
  - name: question_tokens
    sequence:
    - name: tokens
      dtype: string
    - name: offsets
      dtype: int32
  - name: detected_answers
    sequence:
    - name: text
      dtype: string
    - name: char_spans
      sequence:
      - name: start
        dtype: int32
      - name: end
        dtype: int32
    - name: token_spans
      sequence:
      - name: start
        dtype: int32
      - name: end
        dtype: int32
  - name: answers
    sequence: string
  splits:
  - name: train
    num_bytes: 4090677713
    num_examples: 516819
  - name: validation
    num_bytes: 484106546
    num_examples: 58221
  - name: test
    num_bytes: 57712097
    num_examples: 9633
  download_size: 1679161250
  dataset_size: 4632496356
configs:
- config_name: plain_text
  data_files:
  - split: train
    path: plain_text/train-*
  - split: validation
    path: plain_text/validation-*
  - split: test
    path: plain_text/test-*
  default: true
---

# Dataset Card for MRQA 2019

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [MRQA 2019 Shared Task](https://mrqa.github.io/2019/shared.html)
- **Repository:** [MRQA 2019 Github repository](https://github.com/mrqa/MRQA-Shared-Task-2019)
- **Paper:** [MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension](https://arxiv.org/abs/1910.09753)
- **Leaderboard:** [Shared task](https://mrqa.github.io/2019/shared.html)
- **Point of Contact:** [mrforqa@gmail.com](mailto:mrforqa@gmail.com)

### Dataset Summary

The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge.

The dataset is a collection of 18 existing QA datasets (carefully selected subsets of them) converted to the same format (the SQuAD format). Of these 18 datasets, six were made available for training, six for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task.
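
For reference, a minimal sketch of loading the dataset with the Hugging Face `datasets` library (assuming the `mrqa` identifier resolves to this repository):

```python
from datasets import load_dataset

# Load all three splits; pass split="train" to load a single split.
mrqa = load_dataset("mrqa")

print(mrqa)                        # DatasetDict with train/validation/test
print(mrqa["train"][0]["subset"])  # source dataset of the first example
```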

### Supported Tasks and Leaderboards

From the official repository:

*The format of the task is extractive question answering. Given a question and context passage, systems must find the word or phrase in the document that best answers the question. While this format is somewhat restrictive, it allows us to leverage many existing datasets, and its simplicity helps us focus on out-of-domain generalization, instead of other important but orthogonal challenges.*

*We have adapted several existing datasets from their original formats and settings to conform to our unified extractive setting. Most notably:*
- *We provide only a single, length-limited context.*
- *There are no unanswerable or non-span answer questions.*
- *All questions have at least one accepted answer that is found exactly in the context.*

*A span is judged to be an exact match if it matches the answer string after performing normalization consistent with the SQuAD dataset. Specifically:*
- *The text is uncased.*
- *All punctuation is stripped.*
- *All articles `{a, an, the}` are removed.*
- *All consecutive whitespace markers are compressed to just a single normal space `' '`.*

Answers are evaluated using exact match and token-level F1 metrics. The official [mrqa_official_eval.py](https://github.com/mrqa/MRQA-Shared-Task-2019/blob/master/mrqa_official_eval.py) script is the reference implementation.
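
For illustration, the normalization described above can be sketched in a few lines of Python (a simplified sketch, not the official script; `mrqa_official_eval.py` remains the canonical implementation):

```python
import re
import string

def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: uncase, strip punctuation,
    drop articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> bool:
    return normalize_answer(prediction) == normalize_answer(gold)

assert exact_match("The Coldplay!", "coldplay")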

### Languages

The text in the dataset is in English. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

An example looks like this:
```
{
  'qid': 'f43c83e38d1e424ea00f8ad3c77ec999',
  'subset': 'SQuAD',

  'context': 'CBS broadcast Super Bowl 50 in the U.S., and charged an average of $5 million for a 30-second commercial during the game. The Super Bowl 50 halftime show was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively. It was the third-most watched U.S. broadcast ever.',
  'context_tokens': {
    'offsets': [0, 4, 14, 20, 25, 28, 31, 35, 39, 41, 45, 53, 56, 64, 67, 68, 70, 78, 82, 84, 94, 105, 112, 116, 120, 122, 126, 132, 137, 140, 149, 154, 158, 168, 171, 175, 183, 188, 194, 203, 208, 216, 222, 233, 241, 245, 251, 255, 257, 261, 271, 275, 281, 286, 292, 296, 302, 307, 314, 323, 328, 330, 342, 344, 347, 351, 355, 360, 361, 366, 374, 379, 389, 393],
    'tokens': ['CBS', 'broadcast', 'Super', 'Bowl', '50', 'in', 'the', 'U.S.', ',', 'and', 'charged', 'an', 'average', 'of', '$', '5', 'million', 'for', 'a', '30-second', 'commercial', 'during', 'the', 'game', '.', 'The', 'Super', 'Bowl', '50', 'halftime', 'show', 'was', 'headlined', 'by', 'the', 'British', 'rock', 'group', 'Coldplay', 'with', 'special', 'guest', 'performers', 'Beyoncé', 'and', 'Bruno', 'Mars', ',', 'who', 'headlined', 'the', 'Super', 'Bowl', 'XLVII', 'and', 'Super', 'Bowl', 'XLVIII', 'halftime', 'shows', ',', 'respectively', '.', 'It', 'was', 'the', 'third', '-', 'most', 'watched', 'U.S.', 'broadcast', 'ever', '.']
  },

  'question': "Who was the main performer at this year's halftime show?",
  'question_tokens': {
      'offsets': [0, 4, 8, 12, 17, 27, 30, 35, 39, 42, 51, 55],
      'tokens': ['Who', 'was', 'the', 'main', 'performer', 'at', 'this', 'year', "'s", 'halftime', 'show', '?']
  },

  'detected_answers': {
    'char_spans': [
      {
        'end': [201],
        'start': [194]
      }, {
        'end': [201],
        'start': [194]
      }, {
        'end': [201],
        'start': [194]
      }
    ],
    'text': ['Coldplay', 'Coldplay', 'Coldplay'],
    'token_spans': [
      {
        'end': [38],
        'start': [38]
      }, {
        'end': [38],
        'start': [38]
      }, {
        'end': [38],
        'start': [38]
      }
    ]
  },

  'answers': ['Coldplay', 'Coldplay', 'Coldplay'],
}
```

### Data Fields

- `subset`: The name of the source dataset this example comes from (e.g., `SQuAD`).
- `context`: This is the raw text of the supporting passage. Three special token types have been inserted: `[TLE]` precedes document titles, `[DOC]` denotes document breaks, and `[PAR]` denotes paragraph breaks. The maximum length of the context is 800 tokens.
- `context_tokens`: A tokenized version of the supporting passage, using spaCy. Each token is a tuple of the token string and token character offset. The maximum number of tokens is 800.
  - `tokens`: list of tokens.
  - `offsets`: list of offsets.
- `qas`: In the original MRQA release, each context carries a list of questions (`qas`); in this version every question is flattened into its own example with the fields below.
- `qid`: A unique identifier for the question. The `qid` is unique across all datasets.
- `question`: The raw text of the question.
- `question_tokens`: A tokenized version of the question. The tokenizer and token format is the same as for the context.
  - `tokens`: list of tokens.
  - `offsets`: list of offsets.
- `detected_answers`: A list of answer spans for the given question that index into the context. For some datasets these spans were detected automatically using search heuristics. The same answer may appear multiple times in the text; each occurrence is recorded. For example, if the answer is `42` and the context is `"The answer is 42. 42 is the answer."`, two occurrences are marked.
  - `text`: The raw text of the detected answer.
  - `char_spans`: Inclusive `(start, end)` character spans, indexing into the raw context.
    - `start`: start character index (single-element list).
    - `end`: end character index (single-element list).
  - `token_spans`: Inclusive `(start, end)` token spans, indexing into the tokenized context.
    - `start`: start token index (single-element list).
    - `end`: end token index (single-element list).
- `answers`: A list of accepted answer texts.
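
Both span types are inclusive, so slicing requires `end + 1`. A quick sanity check (a sketch, assuming an example in the instance format shown above):

```python
from datasets import load_dataset

example = load_dataset("mrqa", split="train")[0]

# Inclusive character spans: context[start : end + 1] recovers the answer text.
ans = example["detected_answers"]
for text, span in zip(ans["text"], ans["char_spans"]):
    start, end = span["start"][0], span["end"][0]
    assert example["context"][start : end + 1] == text
```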



### Data Splits

**Training data**
| Dataset | Examples |
| :-----: | :------: |
| [SQuAD](https://arxiv.org/abs/1606.05250)   | 86,588 |
| [NewsQA](https://arxiv.org/abs/1611.09830)  | 74,160 |
| [TriviaQA](https://arxiv.org/abs/1705.03551)| 61,688 |
| [SearchQA](https://arxiv.org/abs/1704.05179)| 117,384 |
| [HotpotQA](https://arxiv.org/abs/1809.09600)| 72,928 |
| [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 104,071 |

**Development data**

This in-domain data may be used to help develop models.

| Dataset | Examples |
| :-----: | :------: |
| [SQuAD](https://arxiv.org/abs/1606.05250) | 10,507 |
| [NewsQA](https://arxiv.org/abs/1611.09830) | 4,212 |
| [TriviaQA](https://arxiv.org/abs/1705.03551)| 7,785|
| [SearchQA](https://arxiv.org/abs/1704.05179)| 16,980 |
| [HotpotQA](https://arxiv.org/abs/1809.09600)| 5,904 |
| [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 12,836 |

**Test data**

The final test data contains only out-of-domain data.

| Dataset | Examples |
| :-----: | :------: |
| [BioASQ](http://bioasq.org/) | 1,504 |
| [DROP](https://arxiv.org/abs/1903.00161) | 1,503 |
| [DuoRC](https://arxiv.org/abs/1804.07927)| 1,501 |
| [RACE](https://arxiv.org/abs/1704.04683) | 674 |
| [RelationExtraction](https://arxiv.org/abs/1706.04115) | 2,948|
| [TextbookQA](http://ai2-website.s3.amazonaws.com/publications/CVPR17_TQA.pdf)| 1,503 |



From the official repository:

***Note:** As previously mentioned, the out-of-domain datasets have been modified from their original settings to fit the unified MRQA Shared Task paradigm. At a high level, the following two major modifications have been made:*

*1. All QA-context pairs are extractive. That is, the answer is selected from the context and not via, e.g., multiple-choice.*
*2. All contexts are capped at a maximum of `800` tokens. As a result, for longer contexts like Wikipedia articles, we only consider examples where the answer appears in the first `800` tokens.*

*As a result, some splits are harder than the original datasets (e.g., removal of multiple-choice in RACE), while some are easier (e.g., restricted context length in NaturalQuestions --- we use the short answer selection). Thus one should expect different performance ranges if comparing to previous work on these datasets.*
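
Since every example records its source in the `subset` field, per-dataset numbers can be reproduced by filtering a split (a sketch using the `datasets` API; the `SQuAD` value matches the instance shown earlier):

```python
from datasets import load_dataset

validation = load_dataset("mrqa", split="validation")

# Keep only the SQuAD portion of the in-domain development data.
squad_dev = validation.filter(lambda ex: ex["subset"] == "SQuAD")
print(len(squad_dev))  # expected ~10,507 per the table above
```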

## Dataset Creation

### Curation Rationale

From the official repository:

*Both train and test datasets have the same format described above, but may differ in some of the following ways:*
- *Passage distribution: Test examples may involve passages from different sources (e.g., science, news, novels, medical abstracts, etc) with pronounced syntactic and lexical differences.*
- *Question distribution: Test examples may emphasize different styles of questions (e.g., entity-centric, relational, other tasks reformulated as QA, etc) which may come from different sources (e.g., crowdworkers, domain experts, exam writers, etc.)*
- *Joint distribution: Test examples may vary according to the relationship of the question to the passage (e.g., collected independent vs. dependent of evidence, multi-hop, etc)*

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Unknown

### Citation Information

```
@inproceedings{fisch2019mrqa,
    title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},
    author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},
    booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},
    year={2019},
}
```

### Contributions

Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.