system (HF staff) committed on
Commit c4e72ce
1 Parent(s): 667f81c

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1): README.md (+278, -0)
---
---

# Dataset Card for "trivia_qa"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [http://nlp.cs.washington.edu/triviaqa/](http://nlp.cs.washington.edu/triviaqa/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8833.35 MB
- **Size of the generated dataset:** 43351.32 MB
- **Total amount of disk used:** 52184.66 MB

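The three size figures above are related by simple arithmetic: the total disk use is the downloaded files plus the generated dataset, agreeing up to rounding in the last displayed digit. A quick check:

```python
# Sizes quoted in the Dataset Description above, in MB.
downloaded_mb = 8833.35
generated_mb = 43351.32
total_mb = 52184.66

# The quoted total is downloaded + generated, up to rounding
# in the last displayed digit.
assert abs((downloaded_mb + generated_mb) - total_mb) < 0.02
```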
### [Dataset Summary](#dataset-summary)

TriviaQA is a reading comprehension dataset containing over 650K
question-answer-evidence triples. TriviaQA includes 95K question-answer
pairs authored by trivia enthusiasts and independently gathered evidence
documents, six per question on average, that provide high quality distant
supervision for answering the questions.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### rc

- **Size of downloaded dataset files:** 2542.29 MB
- **Size of the generated dataset:** 15275.31 MB
- **Total amount of disk used:** 17817.60 MB

An example of 'train' looks as follows.
```

```

#### rc.nocontext

- **Size of downloaded dataset files:** 2542.29 MB
- **Size of the generated dataset:** 120.42 MB
- **Total amount of disk used:** 2662.71 MB

An example of 'train' looks as follows.
```

```

#### unfiltered

- **Size of downloaded dataset files:** 3145.53 MB
- **Size of the generated dataset:** 27884.47 MB
- **Total amount of disk used:** 31030.00 MB

An example of 'validation' looks as follows.
```

```

#### unfiltered.nocontext

- **Size of downloaded dataset files:** 603.25 MB
- **Size of the generated dataset:** 71.11 MB
- **Total amount of disk used:** 674.35 MB

An example of 'train' looks as follows.
```

```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### rc
- `question`: a `string` feature.
- `question_id`: a `string` feature.
- `question_source`: a `string` feature.
- `entity_pages`: a dictionary feature containing:
  - `doc_source`: a `string` feature.
  - `filename`: a `string` feature.
  - `title`: a `string` feature.
  - `wiki_context`: a `string` feature.
- `search_results`: a dictionary feature containing:
  - `description`: a `string` feature.
  - `filename`: a `string` feature.
  - `rank`: an `int32` feature.
  - `title`: a `string` feature.
  - `url`: a `string` feature.
  - `search_context`: a `string` feature.
- `aliases`: a `list` of `string` features.
- `normalized_aliases`: a `list` of `string` features.
- `matched_wiki_entity_name`: a `string` feature.
- `normalized_matched_wiki_entity_name`: a `string` feature.
- `normalized_value`: a `string` feature.
- `type`: a `string` feature.
- `value`: a `string` feature.

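The field list above can be sketched as a plain Python dictionary that mirrors one record's layout (illustrative only — the values here are placeholder types, not real data; the actual schema is a `datasets.Features` object):

```python
# Illustrative layout of one "rc" record, following the field list above.
# Placeholder types stand in for actual values.
rc_record_layout = {
    "question": str,
    "question_id": str,
    "question_source": str,
    "entity_pages": {  # evidence Wikipedia pages
        "doc_source": str,
        "filename": str,
        "title": str,
        "wiki_context": str,
    },
    "search_results": {  # web search hits
        "description": str,
        "filename": str,
        "rank": int,  # int32 in the dataset schema
        "title": str,
        "url": str,
        "search_context": str,
    },
    "aliases": [str],
    "normalized_aliases": [str],
    "matched_wiki_entity_name": str,
    "normalized_matched_wiki_entity_name": str,
    "normalized_value": str,
    "type": str,
    "value": str,
}
```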
#### rc.nocontext
- `question`: a `string` feature.
- `question_id`: a `string` feature.
- `question_source`: a `string` feature.
- `entity_pages`: a dictionary feature containing:
  - `doc_source`: a `string` feature.
  - `filename`: a `string` feature.
  - `title`: a `string` feature.
  - `wiki_context`: a `string` feature.
- `search_results`: a dictionary feature containing:
  - `description`: a `string` feature.
  - `filename`: a `string` feature.
  - `rank`: an `int32` feature.
  - `title`: a `string` feature.
  - `url`: a `string` feature.
  - `search_context`: a `string` feature.
- `aliases`: a `list` of `string` features.
- `normalized_aliases`: a `list` of `string` features.
- `matched_wiki_entity_name`: a `string` feature.
- `normalized_matched_wiki_entity_name`: a `string` feature.
- `normalized_value`: a `string` feature.
- `type`: a `string` feature.
- `value`: a `string` feature.

#### unfiltered
- `question`: a `string` feature.
- `question_id`: a `string` feature.
- `question_source`: a `string` feature.
- `entity_pages`: a dictionary feature containing:
  - `doc_source`: a `string` feature.
  - `filename`: a `string` feature.
  - `title`: a `string` feature.
  - `wiki_context`: a `string` feature.
- `search_results`: a dictionary feature containing:
  - `description`: a `string` feature.
  - `filename`: a `string` feature.
  - `rank`: an `int32` feature.
  - `title`: a `string` feature.
  - `url`: a `string` feature.
  - `search_context`: a `string` feature.
- `aliases`: a `list` of `string` features.
- `normalized_aliases`: a `list` of `string` features.
- `matched_wiki_entity_name`: a `string` feature.
- `normalized_matched_wiki_entity_name`: a `string` feature.
- `normalized_value`: a `string` feature.
- `type`: a `string` feature.
- `value`: a `string` feature.

#### unfiltered.nocontext
- `question`: a `string` feature.
- `question_id`: a `string` feature.
- `question_source`: a `string` feature.
- `entity_pages`: a dictionary feature containing:
  - `doc_source`: a `string` feature.
  - `filename`: a `string` feature.
  - `title`: a `string` feature.
  - `wiki_context`: a `string` feature.
- `search_results`: a dictionary feature containing:
  - `description`: a `string` feature.
  - `filename`: a `string` feature.
  - `rank`: an `int32` feature.
  - `title`: a `string` feature.
  - `url`: a `string` feature.
  - `search_context`: a `string` feature.
- `aliases`: a `list` of `string` features.
- `normalized_aliases`: a `list` of `string` features.
- `matched_wiki_entity_name`: a `string` feature.
- `normalized_matched_wiki_entity_name`: a `string` feature.
- `normalized_value`: a `string` feature.
- `type`: a `string` feature.
- `value`: a `string` feature.

### [Data Splits Sample Size](#data-splits-sample-size)

| name                 |  train | validation |  test |
|----------------------|-------:|-----------:|------:|
| rc                   | 138384 |      18669 | 17210 |
| rc.nocontext         | 138384 |      18669 | 17210 |
| unfiltered           |  87622 |      11313 | 10832 |
| unfiltered.nocontext |  87622 |      11313 | 10832 |

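The split sizes above can be totalled per configuration with a few lines of Python (the numbers are copied directly from the table):

```python
# Split sizes copied from the table above.
splits = {
    "rc":                   {"train": 138384, "validation": 18669, "test": 17210},
    "rc.nocontext":         {"train": 138384, "validation": 18669, "test": 17210},
    "unfiltered":           {"train": 87622,  "validation": 11313, "test": 10832},
    "unfiltered.nocontext": {"train": 87622,  "validation": 11313, "test": 10832},
}

# Total number of examples per configuration.
totals = {name: sum(sizes.values()) for name, sizes in splits.items()}
# The "rc" configurations hold 174263 examples each;
# the "unfiltered" configurations hold 109767 each.
```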
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{2017arXivtriviaqa,
       author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld}, Daniel and
                 {Zettlemoyer}, Luke},
        title = "{TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}",
      journal = {arXiv e-prints},
         year = 2017,
          eid = {arXiv:1705.03551},
        pages = {arXiv:1705.03551},
archivePrefix = {arXiv},
       eprint = {1705.03551},
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.