parquet-converter committed
Commit 8daa95a
1 Parent(s): 84a28ef

Update parquet files
README.md DELETED
@@ -1,281 +0,0 @@
- ---
- language:
- - en
- paperswithcode_id: ms-marco
- pretty_name: Microsoft Machine Reading Comprehension Dataset
- dataset_info:
- - config_name: v1.1
-   features:
-   - name: answers
-     sequence: string
-   - name: passages
-     sequence:
-     - name: is_selected
-       dtype: int32
-     - name: passage_text
-       dtype: string
-     - name: url
-       dtype: string
-   - name: query
-     dtype: string
-   - name: query_id
-     dtype: int32
-   - name: query_type
-     dtype: string
-   - name: wellFormedAnswers
-     sequence: string
-   splits:
-   - name: validation
-     num_bytes: 42710107
-     num_examples: 10047
-   - name: train
-     num_bytes: 350884446
-     num_examples: 82326
-   - name: test
-     num_bytes: 41020711
-     num_examples: 9650
-   download_size: 168698008
-   dataset_size: 434615264
- - config_name: v2.1
-   features:
-   - name: answers
-     sequence: string
-   - name: passages
-     sequence:
-     - name: is_selected
-       dtype: int32
-     - name: passage_text
-       dtype: string
-     - name: url
-       dtype: string
-   - name: query
-     dtype: string
-   - name: query_id
-     dtype: int32
-   - name: query_type
-     dtype: string
-   - name: wellFormedAnswers
-     sequence: string
-   splits:
-   - name: validation
-     num_bytes: 414286005
-     num_examples: 101093
-   - name: train
-     num_bytes: 3466972085
-     num_examples: 808731
-   - name: test
-     num_bytes: 406197152
-     num_examples: 101092
-   download_size: 1384271865
-   dataset_size: 4287455242
- ---
- 
- # Dataset Card for "ms_marco"
- 
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
- 
- ## Dataset Description
- 
- - **Homepage:** [https://microsoft.github.io/msmarco/](https://microsoft.github.io/msmarco/)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 1481.03 MB
- - **Size of the generated dataset:** 4503.32 MB
- - **Total amount of disk used:** 5984.34 MB
- 
- ### Dataset Summary
- 
- Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.
- 
- The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer.
- Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
- a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.
- 
- There have been 277 submissions: 20 KeyPhrase Extraction submissions, 87 passage ranking submissions, 0 document ranking
- submissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.
- 
- This data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1).
- 
- The original question answering dataset featured 100,000 examples and was released in 2016. The leaderboard is now closed, but the data is available below.
- 
- The current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and
- is much like the original QnA dataset, but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and
- builds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.
- 
- version v1.1
- 
- ### Supported Tasks and Leaderboards
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Languages
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Dataset Structure
- 
- ### Data Instances
- 
- #### v1.1
- 
- - **Size of downloaded dataset files:** 160.88 MB
- - **Size of the generated dataset:** 414.48 MB
- - **Total amount of disk used:** 575.36 MB
- 
- An example of 'train' looks as follows.
- ```
- 
- ```
- 
- #### v2.1
- 
- - **Size of downloaded dataset files:** 1320.14 MB
- - **Size of the generated dataset:** 4088.84 MB
- - **Total amount of disk used:** 5408.98 MB
- 
- An example of 'validation' looks as follows.
- ```
- 
- ```
- 
- ### Data Fields
- 
- The data fields are the same among all splits.
- 
- #### v1.1
- - `answers`: a `list` of `string` features.
- - `passages`: a dictionary feature containing:
-   - `is_selected`: an `int32` feature.
-   - `passage_text`: a `string` feature.
-   - `url`: a `string` feature.
- - `query`: a `string` feature.
- - `query_id`: an `int32` feature.
- - `query_type`: a `string` feature.
- - `wellFormedAnswers`: a `list` of `string` features.
- 
- #### v2.1
- - `answers`: a `list` of `string` features.
- - `passages`: a dictionary feature containing:
-   - `is_selected`: an `int32` feature.
-   - `passage_text`: a `string` feature.
-   - `url`: a `string` feature.
- - `query`: a `string` feature.
- - `query_id`: an `int32` feature.
- - `query_type`: a `string` feature.
- - `wellFormedAnswers`: a `list` of `string` features.
- 
- ### Data Splits
- 
- |name|train |validation| test |
- |----|-----:|---------:|-----:|
- |v1.1| 82326|     10047|  9650|
- |v2.1|808731|    101093|101092|
- 
- ## Dataset Creation
- 
- ### Curation Rationale
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Source Data
- 
- #### Initial Data Collection and Normalization
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- #### Who are the source language producers?
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Annotations
- 
- #### Annotation process
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- #### Who are the annotators?
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Personal and Sensitive Information
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Considerations for Using the Data
- 
- ### Social Impact of Dataset
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Discussion of Biases
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Other Known Limitations
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ## Additional Information
- 
- ### Dataset Curators
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Licensing Information
- 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- 
- ### Citation Information
- 
- ```
- @article{DBLP:journals/corr/NguyenRSGTMD16,
-   author    = {Tri Nguyen and
-                Mir Rosenberg and
-                Xia Song and
-                Jianfeng Gao and
-                Saurabh Tiwary and
-                Rangan Majumder and
-                Li Deng},
-   title     = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
-   journal   = {CoRR},
-   volume    = {abs/1611.09268},
-   year      = {2016},
-   url       = {http://arxiv.org/abs/1611.09268},
-   archivePrefix = {arXiv},
-   eprint    = {1611.09268},
-   timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
-   biburl    = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- ```
- 
- ### Contributions
- 
- Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
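
The `passages` feature documented in the deleted card is a sequence feature, which `datasets` exposes as a dictionary of parallel lists (`is_selected`, `passage_text`, `url`). A minimal sketch of pulling the judge-selected passage out of one example, assuming the dataset still loads under the `ms_marco` repository id:

```python
from datasets import load_dataset

# Load the v1.1 config documented above (repository id "ms_marco" assumed).
ds = load_dataset("ms_marco", "v1.1", split="validation")

example = ds[0]
passages = example["passages"]  # dict of parallel lists: is_selected, passage_text, url

# is_selected == 1 marks the passage(s) used to write the answer.
selected = [
    text
    for flag, text in zip(passages["is_selected"], passages["passage_text"])
    if flag == 1
]
print(example["query"])
print(selected[:1])
```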
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"v1.1": {"description": "\nStarting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.\n\nThe first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer. \nSince then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset, \na keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.\n\nThere have been 277 submissions: 20 KeyPhrase Extraction submissions, 87 passage ranking submissions, 0 document ranking \nsubmissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.\n\nThis data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1). \n\nThe original question answering dataset featured 100,000 examples and was released in 2016. The leaderboard is now closed, but the data is available below.\n\nThe current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and \nis much like the original QnA dataset, but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and \nbuilds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.\n\n\nversion v1.1", "citation": "\n@article{DBLP:journals/corr/NguyenRSGTMD16,\n author = {Tri Nguyen and\n Mir Rosenberg and\n Xia Song and\n Jianfeng Gao and\n Saurabh Tiwary and\n Rangan Majumder and\n Li Deng},\n title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},\n journal = {CoRR},\n volume = {abs/1611.09268},\n year = {2016},\n url = {http://arxiv.org/abs/1611.09268},\n archivePrefix = {arXiv},\n eprint = {1611.09268},\n timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},\n biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://microsoft.github.io/msmarco/", "license": "", "features": {"answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "passages": {"feature": {"is_selected": {"dtype": "int32", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "query": {"dtype": "string", "id": null, "_type": "Value"}, "query_id": {"dtype": "int32", "id": null, "_type": "Value"}, "query_type": {"dtype": "string", "id": null, "_type": "Value"}, "wellFormedAnswers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "ms_marco", "config_name": "v1.1", "version": {"version_str": "1.1.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 42710107, "num_examples": 10047, "dataset_name": "ms_marco"}, "train": {"name": "train", "num_bytes": 350884446, "num_examples": 82326, "dataset_name": "ms_marco"}, "test": {"name": "test", "num_bytes": 41020711, "num_examples": 9650, "dataset_name": "ms_marco"}}, "download_checksums": {"https://msmarco.blob.core.windows.net/msmsarcov1/train_v1.1.json.gz": {"num_bytes": 110704491, "checksum": "2aaa60df3a758137f0bb7c01fe334858477eb46fa8665ea01588e553cda6aa9f"}, "https://msmarco.blob.core.windows.net/msmsarcov1/dev_v1.1.json.gz": {"num_bytes": 13493661, "checksum": "c70fcb1de78e635cf501264891a1a56d52e7f63e69623da7dd41d89a785d67ca"}, "https://msmarco.blob.core.windows.net/msmsarcov1/test_hidden_v1.1.json": {"num_bytes": 44499856, "checksum": "083aa4f4d86ba0cedb830ca9972eff69f73cbc32b1da26b8617205f0dedea757"}}, "download_size": 168698008, "dataset_size": 434615264, "size_in_bytes": 603313272}, "v2.1": {"description": "\nStarting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.\n\nThe first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer. \nSince then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset, \na keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.\n\nThere have been 277 submissions: 20 KeyPhrase Extraction submissions, 87 passage ranking submissions, 0 document ranking \nsubmissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.\n\nThis data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1). \n\nThe original question answering dataset featured 100,000 examples and was released in 2016. The leaderboard is now closed, but the data is available below.\n\nThe current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and \nis much like the original QnA dataset, but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and \nbuilds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.\n\n\nversion v2.1", "citation": "\n@article{DBLP:journals/corr/NguyenRSGTMD16,\n author = {Tri Nguyen and\n Mir Rosenberg and\n Xia Song and\n Jianfeng Gao and\n Saurabh Tiwary and\n Rangan Majumder and\n Li Deng},\n title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},\n journal = {CoRR},\n volume = {abs/1611.09268},\n year = {2016},\n url = {http://arxiv.org/abs/1611.09268},\n archivePrefix = {arXiv},\n eprint = {1611.09268},\n timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},\n biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://microsoft.github.io/msmarco/", "license": "", "features": {"answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "passages": {"feature": {"is_selected": {"dtype": "int32", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "query": {"dtype": "string", "id": null, "_type": "Value"}, "query_id": {"dtype": "int32", "id": null, "_type": "Value"}, "query_type": {"dtype": "string", "id": null, "_type": "Value"}, "wellFormedAnswers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "ms_marco", "config_name": "v2.1", "version": {"version_str": "2.1.0", "description": "", "datasets_version_to_prepare": null, "major": 2, "minor": 1, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 414286005, "num_examples": 101093, "dataset_name": "ms_marco"}, "train": {"name": "train", "num_bytes": 3466972085, "num_examples": 808731, "dataset_name": "ms_marco"}, "test": {"name": "test", "num_bytes": 406197152, "num_examples": 101092, "dataset_name": "ms_marco"}}, "download_checksums": {"https://msmarco.blob.core.windows.net/msmarco/train_v2.1.json.gz": {"num_bytes": 1112116929, "checksum": "e91745411ca81e441a3bb75deb71ce000dc2fc31334085b7d499982f14218fe2"}, "https://msmarco.blob.core.windows.net/msmarco/dev_v2.1.json.gz": {"num_bytes": 138303699, "checksum": "5b3c9c20d1808ee199a930941b0d96f79e397e9234f77a1496890b138df7cb3c"}, "https://msmarco.blob.core.windows.net/msmarco/eval_v2.1_public.json.gz": {"num_bytes": 133851237, "checksum": "05ac0e448450d507e7ff8e37f48a41cc2d015f5bd2c7974d2445f00a53625db6"}}, "download_size": 1384271865, "dataset_size": 4287455242, "size_in_bytes": 5671727107}}
ms_marco.py DELETED
@@ -1,204 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- 
- # Lint as: python3
- """MS MARCO dataset."""
- 
- 
- import json
- 
- import datasets
- 
- 
- _CITATION = """
- @article{DBLP:journals/corr/NguyenRSGTMD16,
-   author    = {Tri Nguyen and
-                Mir Rosenberg and
-                Xia Song and
-                Jianfeng Gao and
-                Saurabh Tiwary and
-                Rangan Majumder and
-                Li Deng},
-   title     = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
-   journal   = {CoRR},
-   volume    = {abs/1611.09268},
-   year      = {2016},
-   url       = {http://arxiv.org/abs/1611.09268},
-   archivePrefix = {arXiv},
-   eprint    = {1611.09268},
-   timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
-   biburl    = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- """
- 
- _DESCRIPTION = """
- Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.
- 
- The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer.
- Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
- a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.
- 
- There have been 277 submissions: 20 KeyPhrase Extraction submissions, 87 passage ranking submissions, 0 document ranking
- submissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.
- 
- This data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1).
- 
- The original question answering dataset featured 100,000 examples and was released in 2016. The leaderboard is now closed, but the data is available below.
- 
- The current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and
- is much like the original QnA dataset, but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and
- builds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.
- """
- 
- _V2_URLS = {
-     "train": "https://msmarco.blob.core.windows.net/msmarco/train_v2.1.json.gz",
-     "dev": "https://msmarco.blob.core.windows.net/msmarco/dev_v2.1.json.gz",
-     "test": "https://msmarco.blob.core.windows.net/msmarco/eval_v2.1_public.json.gz",
- }
- 
- _V1_URLS = {
-     "train": "https://msmarco.blob.core.windows.net/msmsarcov1/train_v1.1.json.gz",
-     "dev": "https://msmarco.blob.core.windows.net/msmsarcov1/dev_v1.1.json.gz",
-     "test": "https://msmarco.blob.core.windows.net/msmsarcov1/test_hidden_v1.1.json",
- }
- 
- 
- class MsMarcoConfig(datasets.BuilderConfig):
-     """BuilderConfig for MS MARCO."""
- 
-     def __init__(self, **kwargs):
-         """BuilderConfig for MS MARCO.
- 
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(MsMarcoConfig, self).__init__(**kwargs)
- 
- 
- class MsMarco(datasets.GeneratorBasedBuilder):
- 
-     BUILDER_CONFIGS = [
-         MsMarcoConfig(
-             name="v1.1",
-             description="""version v1.1""",
-             version=datasets.Version("1.1.0", ""),
-         ),
-         MsMarcoConfig(
-             name="v2.1",
-             description="""version v2.1""",
-             version=datasets.Version("2.1.0", ""),
-         ),
-     ]
- 
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION + "\n" + self.config.description,
-             features=datasets.Features(
-                 {
-                     "answers": datasets.features.Sequence(datasets.Value("string")),
-                     "passages": datasets.features.Sequence(
-                         {
-                             "is_selected": datasets.Value("int32"),
-                             "passage_text": datasets.Value("string"),
-                             "url": datasets.Value("string"),
-                         }
-                     ),
-                     "query": datasets.Value("string"),
-                     "query_id": datasets.Value("int32"),
-                     "query_type": datasets.Value("string"),
-                     "wellFormedAnswers": datasets.features.Sequence(datasets.Value("string")),
-                 }
-             ),
-             homepage="https://microsoft.github.io/msmarco/",
-             citation=_CITATION,
-         )
- 
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         if self.config.name == "v2.1":
-             dl_path = dl_manager.download_and_extract(_V2_URLS)
-         else:
-             dl_path = dl_manager.download_and_extract(_V1_URLS)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"filepath": dl_path["dev"]},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"filepath": dl_path["train"]},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"filepath": dl_path["test"]},
-             ),
-         ]
- 
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         with open(filepath, encoding="utf-8") as f:
-             if self.config.name == "v2.1":
-                 # v2.1 ships one column-oriented JSON object keyed by example index.
-                 data = json.load(f)
-                 questions = data["query"]
-                 answers = data.get("answers", {})
-                 passages = data["passages"]
-                 query_ids = data["query_id"]
-                 query_types = data["query_type"]
-                 wellFormedAnswers = data.get("wellFormedAnswers", {})
-                 for key in questions:
-                     is_selected = [passage.get("is_selected", -1) for passage in passages[key]]
-                     passage_text = [passage["passage_text"] for passage in passages[key]]
-                     urls = [passage["url"] for passage in passages[key]]
-                     question = questions[key]
-                     answer = answers.get(key, [])
-                     query_id = query_ids[key]
-                     query_type = query_types[key]
-                     wellFormedAnswer = wellFormedAnswers.get(key, [])
-                     if wellFormedAnswer == "[]":
-                         wellFormedAnswer = []
-                     yield query_id, {
-                         "answers": answer,
-                         "passages": {"is_selected": is_selected, "passage_text": passage_text, "url": urls},
-                         "query": question,
-                         "query_id": query_id,
-                         "query_type": query_type,
-                         "wellFormedAnswers": wellFormedAnswer,
-                     }
-             if self.config.name == "v1.1":
-                 # v1.1 is JSON Lines: one complete record per line.
-                 for row in f:
-                     data = json.loads(row)
-                     question = data["query"]
-                     answer = data.get("answers", [])
-                     passages = data["passages"]
-                     query_id = data["query_id"]
-                     query_type = data["query_type"]
-                     wellFormedAnswer = data.get("wellFormedAnswers", [])
- 
-                     is_selected = [passage.get("is_selected", -1) for passage in passages]
-                     passage_text = [passage["passage_text"] for passage in passages]
-                     urls = [passage["url"] for passage in passages]
-                     if wellFormedAnswer == "[]":
-                         wellFormedAnswer = []
-                     yield query_id, {
-                         "answers": answer,
-                         "passages": {"is_selected": is_selected, "passage_text": passage_text, "url": urls},
-                         "query": question,
-                         "query_id": query_id,
-                         "query_type": query_type,
-                         "wellFormedAnswers": wellFormedAnswer,
-                     }
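
The branch on `self.config.name` in `_generate_examples` exists because the two releases ship different layouts: v2.1 is a single column-oriented JSON object whose top-level fields are dicts keyed by example index, while v1.1 is JSON Lines. A toy illustration with synthetic data (not taken from the real files):

```python
import json

# v2.1 layout: one JSON object, fields are dicts keyed by example index.
v2_blob = json.dumps({
    "query": {"0": "what is a corgi"},
    "answers": {"0": ["A small herding dog."]},
    "passages": {"0": [{"is_selected": 1, "passage_text": "...", "url": "..."}]},
    "query_id": {"0": 0},
    "query_type": {"0": "DESCRIPTION"},
})
data = json.loads(v2_blob)
for key in data["query"]:  # iterate example indices, as the script did
    print(key, data["query"][key])

# v1.1 layout: JSON Lines, one complete record per line.
v1_line = json.dumps({
    "query": "what is a corgi",
    "answers": ["A small herding dog."],
    "passages": [{"is_selected": 1, "passage_text": "...", "url": "..."}],
    "query_id": 0,
    "query_type": "DESCRIPTION",
})
record = json.loads(v1_line)
print(record["query_id"], record["query"])
```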
v1.1/ms_marco-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bdcdcd7cf6b3a38fdd6feac8a687d999c1728a51e33a0556f56405afb7fe3b47
+ size 20484568
v1.1/ms_marco-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b0bf970fc8a5791c320069cb5e8015f80461353f56def41b6b2e781cf9ec7fb
+ size 175452225
v1.1/ms_marco-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69d35f5f842801219405a6ce137981046d6f700aa37424f3ff6216272334b7e2
+ size 21391357
v2.1/ms_marco-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5494a8ade1dc40c347d667b4a7ed57b3b28487d6d4f58ba3e917a08327c9eaa
+ size 204396130
v2.1/ms_marco-train-00000-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf0b93163690f9518223ed5529a02e4a77a3e11ba035e5401d4fb1fc0792cb2f
+ size 244987443
v2.1/ms_marco-train-00001-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:97a3f62a7ded532a6ee2c63a9f533c3a751732ab2d130aedcc544971dd8b3fb0
+ size 245540545
v2.1/ms_marco-train-00002-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c88ece22b49dc2d16812f9120ea464b2a23092bb14f234b182e2dad6869c6df9
+ size 248686360
v2.1/ms_marco-train-00003-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aeafc33d517c49ece0b748ce497191339b180206a82f6349e8c30ff33211153e
+ size 249233580
v2.1/ms_marco-train-00004-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7210bd199c8bdeff550ef6337475ded4e4fa0754084772d93c0e417d988b6b56
+ size 248805381
v2.1/ms_marco-train-00005-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:556cc5fc7b2faf61e91489c486657203124e287929ab9b70b1a5f432f6626e15
+ size 244020051
v2.1/ms_marco-train-00006-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9fd95af63e8b21d2b56f86860b8f64b63ba10f2f707c8fc1ad80c9dff73809e
+ size 210489093
v2.1/ms_marco-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e88c8885a08342c163776c9e5b6576c78443d93618887e1b74d45e4be4fe0183
+ size 209628786
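
With the loading script and metadata removed, the repository is now served straight from these parquet shards (the v2.1 train split is sharded into seven files). A sketch of both access paths, assuming the repository id is `ms_marco`:

```python
from datasets import load_dataset

# Path 1: load_dataset reads the parquet files directly; no script is executed.
ds = load_dataset("ms_marco", "v2.1", split="validation")
print(ds.num_rows)  # expected 101093 per the split metadata above

# Path 2: fetch a single shard and read it with pandas (pyarrow backend).
from huggingface_hub import hf_hub_download
import pandas as pd

path = hf_hub_download(
    repo_id="ms_marco",        # assumed repository id
    filename="v1.1/ms_marco-test.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.columns.tolist())
```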