Dataset Card for MultiReQA

Dataset Summary

MultiReQA contains sentence boundary annotations from eight publicly available QA datasets: SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these (SearchQA, TriviaQA, HotpotQA, NaturalQuestions, and SQuAD) contain both training and test data, while three (BioASQ, RelationExtraction, and TextbookQA) contain only test data. (The release also includes DuoRC, although it is not listed in the official documentation.)

Supported Tasks and Leaderboards

  • Question answering (QA)
  • Retrieval question answering (ReQA)

Languages

Sentence boundary annotations are provided for SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, TextbookQA, and DuoRC. The underlying QA datasets are in English.

Dataset Structure

Data Instances

The general format is: { "candidate_id": <candidate_id>, "response_start": <response_start>, "response_end": <response_end> } ...

An example from SearchQA: {'candidate_id': 'SearchQA_000077f3912049dfb4511db271697bad/_0_1', 'response_end': 306, 'response_start': 243}

Data Fields

{ "candidate_id": <STRING>, "response_start": <INT>, "response_end": <INT> } ...

  • candidate_id: The id of the candidate sentence. It is formed from the original qid of the MRQA shared task together with a sentence index suffix (e.g. _0).
  • response_start: The start character index of the sentence within its original context.
  • response_end: The end character index of the sentence within its original context.
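As an illustration, a candidate record can be applied directly to its source context with Python slicing. The record and context below are fabricated for demonstration (real candidate ids come from the source datasets), and response_end is treated here as an exclusive index:

```python
# A minimal, self-contained sketch of how the fields fit together.
# The record and context are fabricated for illustration; real
# candidate_ids come from the source datasets (e.g. SearchQA, BioASQ).
context = "The mitochondrion is the powerhouse of the cell. It produces ATP."

record = {
    "candidate_id": "Example_0000/_1",   # hypothetical: qid plus sentence index
    "response_start": 49,
    "response_end": 65,
}

# Recover the candidate sentence by slicing the original context
# (treating response_end as an exclusive index).
sentence = context[record["response_start"]:record["response_end"]]
print(sentence)  # → It produces ATP.
```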

Data Splits

Train and Dev splits are available only for the following datasets:

  • SearchQA
  • TriviaQA
  • HotpotQA
  • SQuAD
  • NaturalQuestions

Only Test splits are available for the following datasets:

  • BioASQ
  • RelationExtraction
  • TextbookQA

The number of candidate sentences for each dataset is shown in the table below.

MultiReQA
Dataset              train      test
SearchQA             629,160    454,836
TriviaQA             335,659    238,339
HotpotQA             104,973    52,191
SQuAD                87,133     10,642
NaturalQuestions     106,521    22,118
BioASQ               -          14,158
RelationExtraction   -          3,301
TextbookQA           -          3,701

Dataset Creation

Curation Rationale

MultiReQA is a multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets of the MRQA shared task. The dataset was curated by converting these existing QA datasets into the format of the MultiReQA benchmark.

Source Data

Initial Data Collection and Normalization

The initial data collection was performed by converting existing QA datasets from the MRQA shared task into the format of the MultiReQA benchmark.
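The conversion step can be pictured with a small sketch: split each context into sentences and record the character offsets of each one. The sentence splitter below is a naive regex stand-in (not the pipeline's actual segmenter), and `to_candidates` is a hypothetical helper:

```python
import re

def to_candidates(qid: str, context: str):
    """Emit MultiReQA-style records for each sentence in `context`.

    Uses a naive punctuation-based splitter purely for illustration;
    the real pipeline's sentence segmenter is not specified here.
    """
    records = []
    for i, match in enumerate(re.finditer(r"[^.!?]+[.!?]?", context)):
        records.append({
            # candidate_id = original MRQA qid plus a sentence index suffix
            "candidate_id": f"{qid}/_{i}",
            "response_start": match.start(),
            "response_end": match.end(),
        })
    return records

records = to_candidates("Example_qid", "First sentence. Second one!")
# records[0] covers "First sentence."; records[1] covers " Second one!"
```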

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

The annotators/curators of the dataset are mandyguo-xyguo and mwurts4google, the contributors of the official MultiReQA GitHub repository.

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

The annotators/curators of the dataset are mandyguo-xyguo and mwurts4google, the contributors of the official MultiReQA GitHub repository.

Licensing Information

[More Information Needed]

Citation Information

@misc{m2020multireqa,
    title={MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models},
    author={Mandy Guo and Yinfei Yang and Daniel Cer and Qinlan Shen and Noah Constant},
    year={2020},
    eprint={2005.02507},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Contributions

Thanks to @Karthik-Bhaskar for adding this dataset.
