---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
---
# Dataset Card for "drop"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://allennlp.org/drop](https://allennlp.org/drop)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.92 MB
- **Size of the generated dataset:** 105.77 MB
- **Total amount of disk used:** 113.69 MB

### [Dataset Summary](#dataset-summary)

DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs.

DROP is a crowdsourced, adversarially-created, 96k-question benchmark in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than was necessary for prior datasets.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 7.92 MB
- **Size of the generated dataset:** 105.77 MB
- **Total amount of disk used:** 113.69 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "answers_spans": {
        "spans": ["Chaz Schilens"]
    },
    "passage": "\" Hoping to rebound from their loss to the Patriots, the Raiders stayed at home for a Week 16 duel with the Houston Texans. Oak...",
    "question": "Who scored the first touchdown of the game?"
}
```
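Once loaded, each example is a plain Python mapping with these fields. A minimal sketch using the cropped record above (the passage text is truncated, as in the example):

```python
# A single DROP validation record, mirroring the documented fields:
# `passage`, `question`, and the nested `answers_spans.spans` list.
record = {
    "answers_spans": {"spans": ["Chaz Schilens"]},
    "passage": (
        '" Hoping to rebound from their loss to the Patriots, the Raiders '
        "stayed at home for a Week 16 duel with the Houston Texans. Oak..."
    ),
    "question": "Who scored the first touchdown of the game?",
}

# Answers are stored as a list of spans; many questions have a single span.
first_answer = record["answers_spans"]["spans"][0]
print(record["question"], "->", first_answer)
```

Records of this shape are what you get per example when loading the dataset, e.g. via `datasets.load_dataset("drop")`.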
### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `passage`: a `string` feature.
- `question`: a `string` feature.
- `answers_spans`: a dictionary feature containing:
  - `spans`: a `string` feature.

### [Data Splits Sample Size](#data-splits-sample-size)

| name  |train|validation|
|-------|----:|---------:|
|default|77409|      9536|
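As a quick sanity check, the split sizes in the table can be totalled; the public train and validation splits cover roughly 87k of the ~96k questions cited in the summary, the remainder presumably sitting in a held-out test split:

```python
# Split sizes taken from the table above.
splits = {"train": 77409, "validation": 9536}

total = sum(splits.values())                  # 86945
train_fraction = splits["train"] / total

print(f"total public examples: {total}")
print(f"train fraction: {train_fraction:.1%}")
```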
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{Dua2019DROP,
  author    = {Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
  title     = { {DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},
  booktitle = {Proc. of NAACL},
  year      = {2019}
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.