---
---

# Dataset Card for "search_qa"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/nyu-dl/dl4ir-searchQA](https://github.com/nyu-dl/dl4ir-searchQA)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 6163.54 MB
- **Size of the generated dataset:** 14573.76 MB
- **Total amount of disk used:** 20737.29 MB

### [Dataset Summary](#dataset-summary)

We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind
CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article
and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google.
Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs, each pair having 49.6 snippets on average. Each question-answer-context
tuple of SearchQA comes with additional metadata, such as the snippet's URL, which we believe will be a valuable resource for future research. We conduct a human evaluation
as well as test two baseline methods, one based on simple word selection and the other on deep learning, on SearchQA. We show that there is a meaningful gap between human
and machine performance. This suggests that the proposed dataset could well serve as a benchmark for question-answering.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### raw_jeopardy

- **Size of downloaded dataset files:** 3160.84 MB
- **Size of the generated dataset:** 7410.98 MB
- **Total amount of disk used:** 10571.82 MB

An example of 'train' looks as follows.
```

```

#### train_test_val

- **Size of downloaded dataset files:** 3002.69 MB
- **Size of the generated dataset:** 7162.78 MB
- **Total amount of disk used:** 10165.47 MB

An example of 'validation' looks as follows.
```

```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### raw_jeopardy
- `category`: a `string` feature.
- `air_date`: a `string` feature.
- `question`: a `string` feature.
- `value`: a `string` feature.
- `answer`: a `string` feature.
- `round`: a `string` feature.
- `show_number`: an `int32` feature.
- `search_results`: a dictionary feature containing:
  - `urls`: a `string` feature.
  - `snippets`: a `string` feature.
  - `titles`: a `string` feature.
  - `related_links`: a `string` feature.

#### train_test_val
- `category`: a `string` feature.
- `air_date`: a `string` feature.
- `question`: a `string` feature.
- `value`: a `string` feature.
- `answer`: a `string` feature.
- `round`: a `string` feature.
- `show_number`: an `int32` feature.
- `search_results`: a dictionary feature containing:
  - `urls`: a `string` feature.
  - `snippets`: a `string` feature.
  - `titles`: a `string` feature.
  - `related_links`: a `string` feature.

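Since both configurations share the same schema, a record can be pictured as a plain Python dict. The example below is a hypothetical illustration of the field layout only; the values are invented placeholders, not an actual row from the dataset.

```python
# Hypothetical record illustrating the search_qa schema described above.
# Only the field names and types follow the card; all values are made up.
record = {
    "category": "SCIENCE",
    "air_date": "2004-12-31",
    "question": "This planet is known as the Red Planet",
    "value": "$200",
    "answer": "Mars",
    "round": "Jeopardy!",
    "show_number": 4680,          # int32 in the dataset
    "search_results": {           # parallel lists, one entry per retrieved snippet
        "urls": ["https://example.com/mars"],
        "snippets": ["Mars is often called the Red Planet ..."],
        "titles": ["Mars - Example Encyclopedia"],
        "related_links": [""],
    },
}

# Sanity checks on the shape
assert isinstance(record["show_number"], int)
assert set(record["search_results"]) == {"urls", "snippets", "titles", "related_links"}
```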
### [Data Splits Sample Size](#data-splits-sample-size)

#### raw_jeopardy

|            |train |
|------------|-----:|
|raw_jeopardy|216757|

#### train_test_val

|              |train |validation|test |
|--------------|-----:|---------:|----:|
|train_test_val|151295|     21613|43228|

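As a quick sanity check on the tables above, the train_test_val counts correspond to a roughly 70/10/20 split. The sketch below just re-derives that from the numbers on this card:

```python
# Split sizes copied from the train_test_val table above.
splits = {"train": 151295, "validation": 21613, "test": 43228}
total = sum(splits.values())  # 216136 examples in total

# Fraction of the data in each split: roughly 70% / 10% / 20%.
ratios = {name: n / total for name, n in splits.items()}

# raw_jeopardy has 216757 rows, so 621 question-answer pairs do not
# appear in the train/validation/test configuration (presumably
# filtered out during preprocessing).
dropped = 216757 - total
```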
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{DBLP:journals/corr/DunnSHGCC17,
  author        = {Matthew Dunn and
                   Levent Sagun and
                   Mike Higgins and
                   V. Ugur G{\"{u}}ney and
                   Volkan Cirik and
                   Kyunghyun Cho},
  title         = {SearchQA: {A} New Q{\&}A Dataset Augmented with Context from a
                   Search Engine},
  journal       = {CoRR},
  volume        = {abs/1704.05179},
  year          = {2017},
  url           = {http://arxiv.org/abs/1704.05179},
  archivePrefix = {arXiv},
  eprint        = {1704.05179},
  timestamp     = {Mon, 13 Aug 2018 16:47:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/DunnSHGCC17.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.