---
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
language_creators:
- crowdsourced
annotations_creators:
- crowdsourced
source_datasets:
- original
---

# Dataset Card for "break_data"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/allenai/Break](https://github.com/allenai/Break)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 76.16 MB
- **Size of the generated dataset:** 148.34 MB
- **Total amount of disk used:** 224.49 MB

### [Dataset Summary](#dataset-summary)

Break is a human-annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images, and databases. This repository contains the Break dataset along with information on the exact data format.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### QDMR

- **Size of downloaded dataset files:** 15.23 MB
- **Size of the generated dataset:** 15.19 MB
- **Total amount of disk used:** 30.42 MB

An example from the 'validation' split looks as follows.
```
{
    "decomposition": "return flights ;return #1 from denver ;return #2 to philadelphia ;return #3 if available",
    "operators": "['select', 'filter', 'filter', 'filter']",
    "question_id": "ATIS_dev_0",
    "question_text": "what flights are available tomorrow from denver to philadelphia ",
    "split": "dev"
}
```
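Note that the `operators` field is stored as a *stringified* Python list rather than a native list, so it needs to be parsed before use. A minimal sketch using the standard library's `ast.literal_eval` (the `example` dict below is copied from the record above):

```python
import ast

# Example record from the QDMR configuration (validation split).
example = {
    "decomposition": "return flights ;return #1 from denver "
                     ";return #2 to philadelphia ;return #3 if available",
    "operators": "['select', 'filter', 'filter', 'filter']",
}

# Parse the stringified list safely (literal_eval never executes code).
operators = ast.literal_eval(example["operators"])
# Split the decomposition into its individual steps.
steps = [s.strip() for s in example["decomposition"].split(";")]

# Each decomposition step has a corresponding operator.
assert len(operators) == len(steps) == 4
```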

#### QDMR-high-level

- **Size of downloaded dataset files:** 15.23 MB
- **Size of the generated dataset:** 6.24 MB
- **Total amount of disk used:** 21.47 MB

An example from the 'train' split looks as follows.
```
{
    "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4",
    "operators": "['select', 'filter', 'filter', 'filter', 'project']",
    "question_id": "ATIS_dev_102",
    "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ",
    "split": "dev"
}
```
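In a QDMR decomposition, `#N` refers to the result of step N. As an illustration of that convention (the `resolve_references` helper below is hypothetical, not part of the dataset or its tooling), the references can be expanded back into self-contained step descriptions, shown here on the ATIS flight question from the QDMR example above:

```python
import re

def resolve_references(decomposition):
    """Expand '#N' references in QDMR steps into the text of earlier steps.

    Hypothetical helper for illustration; '#N' denotes the result of step N.
    """
    # Strip the leading 'return ' from each ';'-separated step.
    steps = [s.strip()[len("return"):].strip() for s in decomposition.split(";")]
    resolved = []
    for step in steps:
        # Replace each '#N' with the already-resolved text of step N.
        expanded = re.sub(r"#(\d+)", lambda m: resolved[int(m.group(1)) - 1], step)
        resolved.append(expanded)
    return resolved

qdmr = ("return flights ;return #1 from denver "
        ";return #2 to philadelphia ;return #3 if available")
print(resolve_references(qdmr)[-1])
# → flights from denver to philadelphia if available
```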

#### QDMR-high-level-lexicon

- **Size of downloaded dataset files:** 15.23 MB
- **Size of the generated dataset:** 30.17 MB
- **Total amount of disk used:** 45.40 MB

An example from the 'train' split looks as follows.
```
This example was too long and was cropped:

{
    "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'he', 'distinct', 'House', 'two', 'at least', 'or ', 'date', 'o...",
    "source": "What office, also held by a member of the Maine House of Representatives, did James K. Polk hold before he was president?"
}
```

#### QDMR-lexicon

- **Size of downloaded dataset files:** 15.23 MB
- **Size of the generated dataset:** 73.61 MB
- **Total amount of disk used:** 88.84 MB

An example from the 'validation' split looks as follows.
```
This example was too long and was cropped:

{
    "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'distinct', 'two', 'at least', 'or ', 'date', 'on ', '@@14@@', ...",
    "source": "what flights are available tomorrow from denver to philadelphia "
}
```

#### logical-forms

- **Size of downloaded dataset files:** 15.23 MB
- **Size of the generated dataset:** 23.13 MB
- **Total amount of disk used:** 38.36 MB

An example from the 'train' split looks as follows.
```
{
    "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4",
    "operators": "['select', 'filter', 'filter', 'filter', 'project']",
    "program": "some program",
    "question_id": "ATIS_dev_102",
    "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ",
    "split": "dev"
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### QDMR
- `question_id`: a `string` feature.
- `question_text`: a `string` feature.
- `decomposition`: a `string` feature.
- `operators`: a `string` feature.
- `split`: a `string` feature.

#### QDMR-high-level
- `question_id`: a `string` feature.
- `question_text`: a `string` feature.
- `decomposition`: a `string` feature.
- `operators`: a `string` feature.
- `split`: a `string` feature.

#### QDMR-high-level-lexicon
- `source`: a `string` feature.
- `allowed_tokens`: a `string` feature.

#### QDMR-lexicon
- `source`: a `string` feature.
- `allowed_tokens`: a `string` feature.

#### logical-forms
- `question_id`: a `string` feature.
- `question_text`: a `string` feature.
- `decomposition`: a `string` feature.
- `operators`: a `string` feature.
- `split`: a `string` feature.
- `program`: a `string` feature.

### [Data Splits Sample Size](#data-splits-sample-size)

| name                  |train|validation|test|
|-----------------------|----:|---------:|---:|
|QDMR                   |44321|      7760|8069|
|QDMR-high-level        |17503|      3130|3195|
|QDMR-high-level-lexicon|17503|      3130|3195|
|QDMR-lexicon           |44321|      7760|8069|
|logical-forms          |44098|      7719|8006|

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{Wolfson2020Break,
  title={Break It Down: A Question Understanding Benchmark},
  author={Wolfson, Tomer and Geva, Mor and Gupta, Ankit and Gardner, Matt and Goldberg, Yoav and Deutch, Daniel and Berant, Jonathan},
  journal={Transactions of the Association for Computational Linguistics},
  year={2020},
}
```

### [Contributions](#contributions)

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.