system HF staff committed on
Commit a68b1a2
1 Parent(s): e5fbb72

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +202 -0
README.md ADDED
@@ -0,0 +1,202 @@
---
---

# Dataset Card for "ms_marco"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://microsoft.github.io/msmarco/](https://microsoft.github.io/msmarco/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1481.03 MB
- **Size of the generated dataset:** 4503.32 MB
- **Total amount of disk used:** 5984.34 MB

### [Dataset Summary](#dataset-summary)

Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.

The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human-generated answer.
Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.

There have been 277 submissions: 20 KeyPhrase Extraction submissions, 87 passage ranking submissions, 0 document ranking
submissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.

This data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1).

The original question answering dataset featured 100,000 examples and was released in 2016. Its leaderboard is now closed, but the data is available below.

The current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and
is much like the original QnA dataset, but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and
builds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.

version v1.1

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for both configurations of the dataset (v1.1 and v2.1).

### [Data Instances](#data-instances)

#### v1.1

- **Size of downloaded dataset files:** 160.88 MB
- **Size of the generated dataset:** 414.48 MB
- **Total amount of disk used:** 575.36 MB

An example of 'train' looks as follows.
```

```

#### v2.1

- **Size of downloaded dataset files:** 1320.14 MB
- **Size of the generated dataset:** 4088.84 MB
- **Total amount of disk used:** 5408.98 MB

An example of 'validation' looks as follows.
```

```
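
The example blocks above are left empty in the card. As a minimal sketch (assuming only the `datasets` library and the configuration names listed on this card), an instance can be loaded and inspected like this:

```python
# Minimal sketch: load one MS MARCO configuration and look at a single record.
# Assumes the `datasets` library is installed; "v1.1" and "v2.1" are the
# configuration names listed on this card.
from datasets import load_dataset

ms_marco = load_dataset("ms_marco", "v1.1")  # or "v2.1"
example = ms_marco["train"][0]               # one training instance

print(example["query"])                         # the Bing query text
print(example["answers"])                       # human-written answer(s)
print(example["passages"]["passage_text"][:2])  # first two candidate passages
```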

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### v1.1
- `answers`: a `list` of `string` features.
- `passages`: a dictionary feature containing:
  - `is_selected`: an `int32` feature.
  - `passage_text`: a `string` feature.
  - `url`: a `string` feature.
- `query`: a `string` feature.
- `query_id`: an `int32` feature.
- `query_type`: a `string` feature.
- `wellFormedAnswers`: a `list` of `string` features.

#### v2.1
- `answers`: a `list` of `string` features.
- `passages`: a dictionary feature containing:
  - `is_selected`: an `int32` feature.
  - `passage_text`: a `string` feature.
  - `url`: a `string` feature.
- `query`: a `string` feature.
- `query_id`: an `int32` feature.
- `query_type`: a `string` feature.
- `wellFormedAnswers`: a `list` of `string` features.
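
The listing above can also be checked programmatically. As a small sketch (again assuming the `datasets` library), the feature schema of a split exposes the same fields:

```python
# Sketch: inspect the feature schema of one configuration.
# Field names and types should match the listing above.
from datasets import load_dataset

ds = load_dataset("ms_marco", "v2.1", split="validation")
print(ds.features)              # full schema for the v2.1 configuration
print(ds.features["passages"])  # nested fields: is_selected, passage_text, url
```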

### [Data Splits Sample Size](#data-splits-sample-size)

| name | train  | validation | test   |
|------|-------:|-----------:|-------:|
| v1.1 |  82326 |      10047 |   9650 |
| v2.1 | 808731 |     101093 | 101092 |
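
As a rough cross-check of the table above (a sketch assuming the `datasets` library; note that downloading both configurations takes roughly 1.5 GB), the split sizes can be printed directly:

```python
# Sketch: print the number of rows per split for each configuration and
# compare against the table above.
from datasets import load_dataset

for config in ("v1.1", "v2.1"):
    splits = load_dataset("ms_marco", config)
    print(config, {name: splits[name].num_rows for name in splits})
```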

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{DBLP:journals/corr/NguyenRSGTMD16,
  author        = {Tri Nguyen and
                   Mir Rosenberg and
                   Xia Song and
                   Jianfeng Gao and
                   Saurabh Tiwary and
                   Rangan Majumder and
                   Li Deng},
  title         = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
  journal       = {CoRR},
  volume        = {abs/1611.09268},
  year          = {2016},
  url           = {http://arxiv.org/abs/1611.09268},
  archivePrefix = {arXiv},
  eprint        = {1611.09268},
  timestamp     = {Mon, 13 Aug 2018 16:49:03 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.