Datasets:

Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas

parquet-converter committed on
Commit cf67497
1 Parent(s): adb54bd

Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,327 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - en
- language_creators:
- - found
- license:
- - other
- multilinguality:
- - monolingual
- pretty_name: RACE
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - multiple-choice
- task_ids:
- - multiple-choice-qa
- paperswithcode_id: race
- dataset_info:
- - config_name: high
-   features:
-   - name: example_id
-     dtype: string
-   - name: article
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: question
-     dtype: string
-   - name: options
-     sequence: string
-   splits:
-   - name: test
-     num_bytes: 6989121
-     num_examples: 3498
-   - name: train
-     num_bytes: 126243396
-     num_examples: 62445
-   - name: validation
-     num_bytes: 6885287
-     num_examples: 3451
-   download_size: 25443609
-   dataset_size: 140117804
- - config_name: middle
-   features:
-   - name: example_id
-     dtype: string
-   - name: article
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: question
-     dtype: string
-   - name: options
-     sequence: string
-   splits:
-   - name: test
-     num_bytes: 1786297
-     num_examples: 1436
-   - name: train
-     num_bytes: 31065322
-     num_examples: 25421
-   - name: validation
-     num_bytes: 1761937
-     num_examples: 1436
-   download_size: 25443609
-   dataset_size: 34613556
- - config_name: all
-   features:
-   - name: example_id
-     dtype: string
-   - name: article
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: question
-     dtype: string
-   - name: options
-     sequence: string
-   splits:
-   - name: test
-     num_bytes: 8775394
-     num_examples: 4934
-   - name: train
-     num_bytes: 157308694
-     num_examples: 87866
-   - name: validation
-     num_bytes: 8647200
-     num_examples: 4887
-   download_size: 25443609
-   dataset_size: 174731288
- ---
-
- # Dataset Card for "race"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- - **Repository:** https://github.com/qizhex/RACE_AR_baselines
- - **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- - **Point of Contact:** [Guokun Lai](mailto:guokun@cs.cmu.edu), [Qizhe Xie](mailto:qzxie@cs.cmu.edu)
- - **Size of downloaded dataset files:** 72.79 MB
- - **Size of the generated dataset:** 333.27 MB
- - **Total amount of disk used:** 406.07 MB
-
- ### Dataset Summary
-
- RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
- dataset is collected from English examinations in China, which are designed for middle school and high school students.
- The dataset can be served as the training and test sets for machine comprehension.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### all
-
- - **Size of downloaded dataset files:** 24.26 MB
- - **Size of the generated dataset:** 166.64 MB
- - **Total amount of disk used:** 190.90 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answer": "A",
-     "article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
-     "example_id": "high132.txt",
-     "options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
-     "question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
- }
- ```
-
- #### high
-
- - **Size of downloaded dataset files:** 24.26 MB
- - **Size of the generated dataset:** 133.63 MB
- - **Total amount of disk used:** 157.89 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answer": "A",
-     "article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
-     "example_id": "high132.txt",
-     "options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
-     "question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
- }
- ```
-
- #### middle
-
- - **Size of downloaded dataset files:** 24.26 MB
- - **Size of the generated dataset:** 33.01 MB
- - **Total amount of disk used:** 57.27 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answer": "B",
-     "article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
-     "example_id": "middle3.txt",
-     "options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
-     "question": "According to the passage, which of the following statements is TRUE?"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### all
- - `example_id`: a `string` feature.
- - `article`: a `string` feature.
- - `answer`: a `string` feature.
- - `question`: a `string` feature.
- - `options`: a `list` of `string` features.
-
- #### high
- - `example_id`: a `string` feature.
- - `article`: a `string` feature.
- - `answer`: a `string` feature.
- - `question`: a `string` feature.
- - `options`: a `list` of `string` features.
-
- #### middle
- - `example_id`: a `string` feature.
- - `article`: a `string` feature.
- - `answer`: a `string` feature.
- - `question`: a `string` feature.
- - `options`: a `list` of `string` features.
-
- ### Data Splits
-
- | name |train|validation|test|
- |------|----:|---------:|---:|
- |all   |87866|      4887|4934|
- |high  |62445|      3451|3498|
- |middle|25421|      1436|1436|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- http://www.cs.cmu.edu/~glai1/data/race/
-
- 1. RACE dataset is available for non-commercial research purpose only.
-
- 2. All passages are obtained from the Internet which is not property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
-
- 3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.
-
- 4. We reserve the right to terminate your access to the RACE dataset at any time.
-
- ### Citation Information
-
- ```
- @inproceedings{lai-etal-2017-race,
-     title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
-     author = "Lai, Guokun and
-       Xie, Qizhe and
-       Liu, Hanxiao and
-       Yang, Yiming and
-       Hovy, Eduard",
-     booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
-     month = sep,
-     year = "2017",
-     address = "Copenhagen, Denmark",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/D17-1082",
-     doi = "10.18653/v1/D17-1082",
-     pages = "785--794",
- }
- ```
-
-
- ### Contributions
-
- Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
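As a quick sanity check on the deleted card's split table, the `all` config should be the union of `high` and `middle`. A standalone sketch (numbers copied from the table above; not part of the repository):

```python
# Split counts as reported in the deleted dataset card.
counts = {
    "high":   {"train": 62445, "validation": 3451, "test": 3498},
    "middle": {"train": 25421, "validation": 1436, "test": 1436},
    "all":    {"train": 87866, "validation": 4887, "test": 4934},
}

# "all" should equal "high" + "middle" for every split.
for split in ("train", "validation", "test"):
    assert counts["all"][split] == counts["high"][split] + counts["middle"][split]

# Total examples across all splits of the "all" config.
total = sum(counts["all"].values())
print(total)  # 97687 -- consistent with "nearly 100,000 questions"
```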
all/race-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f048db2b0ec48234fc8f4ad8aa5bc29c42d9f9d9a747569aa1f8a45313f2add0
+ size 2075880

all/race-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0545a4c93283d8f3b2afc96e48c2dc0e622afd779e0e362c9250531b17e26ceb
+ size 37378262

all/race-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c0bb9ec98a0d63780e8aaa927e2b4ab5b829184cdce823c25845ebbb199f1e4
+ size 2046502
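Each `ADDED` file above is a Git LFS pointer, not the Parquet data itself: three `key value` lines giving the spec version, a sha256 object id, and the size in bytes of the real file. A minimal parsing sketch (the pointer text is copied from the `all/race-validation.parquet` block above):

```python
# A Git LFS pointer file, as committed for all/race-validation.parquet.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6c0bb9ec98a0d63780e8aaa927e2b4ab5b829184cdce823c25845ebbb199f1e4
size 2046502
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())

algo, digest = fields["oid"].split(":", 1)
assert algo == "sha256" and len(digest) == 64  # hex-encoded SHA-256

# The byte size of the actual Parquet file LFS will fetch.
print(int(fields["size"]))  # 2046502
```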
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"high": {"description": "Race is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The\n dataset is collected from English examinations in China, which are designed for middle school and high school students.\nThe dataset can be served as the training and test sets for machine comprehension.\n\n", "citation": "@article{lai2017large,\n title={RACE: Large-scale ReAding Comprehension Dataset From Examinations},\n author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},\n journal={arXiv preprint arXiv:1704.04683},\n year={2017}\n}\n", "homepage": "http://www.cs.cmu.edu/~glai1/data/race/", "license": "", "features": {"example_id": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "options": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "race", "config_name": "high", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 6989121, "num_examples": 3498, "dataset_name": "race"}, "train": {"name": "train", "num_bytes": 126243396, "num_examples": 62445, "dataset_name": "race"}, "validation": {"name": "validation", "num_bytes": 6885287, "num_examples": 3451, "dataset_name": "race"}}, "download_checksums": {"http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz": {"num_bytes": 25443609, "checksum": "b2769cc9fdc5c546a693300eb9a966cec6870bd349fbc44ed5225f8ad33006e5"}}, "download_size": 25443609, "post_processing_size": null, "dataset_size": 140117804, "size_in_bytes": 165561413}, "middle": {"description": "Race is a large-scale reading comprehension dataset with more than 28,000 passages and 
nearly 100,000 questions. The\n dataset is collected from English examinations in China, which are designed for middle school and high school students.\nThe dataset can be served as the training and test sets for machine comprehension.\n\n", "citation": "@article{lai2017large,\n title={RACE: Large-scale ReAding Comprehension Dataset From Examinations},\n author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},\n journal={arXiv preprint arXiv:1704.04683},\n year={2017}\n}\n", "homepage": "http://www.cs.cmu.edu/~glai1/data/race/", "license": "", "features": {"example_id": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "options": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "race", "config_name": "middle", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1786297, "num_examples": 1436, "dataset_name": "race"}, "train": {"name": "train", "num_bytes": 31065322, "num_examples": 25421, "dataset_name": "race"}, "validation": {"name": "validation", "num_bytes": 1761937, "num_examples": 1436, "dataset_name": "race"}}, "download_checksums": {"http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz": {"num_bytes": 25443609, "checksum": "b2769cc9fdc5c546a693300eb9a966cec6870bd349fbc44ed5225f8ad33006e5"}}, "download_size": 25443609, "post_processing_size": null, "dataset_size": 34613556, "size_in_bytes": 60057165}, "all": {"description": "Race is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. 
The\n dataset is collected from English examinations in China, which are designed for middle school and high school students.\nThe dataset can be served as the training and test sets for machine comprehension.\n\n", "citation": "@article{lai2017large,\n title={RACE: Large-scale ReAding Comprehension Dataset From Examinations},\n author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},\n journal={arXiv preprint arXiv:1704.04683},\n year={2017}\n}\n", "homepage": "http://www.cs.cmu.edu/~glai1/data/race/", "license": "", "features": {"example_id": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "options": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "race", "config_name": "all", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 8775394, "num_examples": 4934, "dataset_name": "race"}, "train": {"name": "train", "num_bytes": 157308694, "num_examples": 87866, "dataset_name": "race"}, "validation": {"name": "validation", "num_bytes": 8647200, "num_examples": 4887, "dataset_name": "race"}}, "download_checksums": {"http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz": {"num_bytes": 25443609, "checksum": "b2769cc9fdc5c546a693300eb9a966cec6870bd349fbc44ed5225f8ad33006e5"}}, "download_size": 25443609, "post_processing_size": null, "dataset_size": 174731288, "size_in_bytes": 200174897}}
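The deleted `dataset_infos.json` records per-split `num_bytes` alongside an overall `dataset_size` per config. A small consistency check for the `high` config (numbers copied from the JSON above):

```python
# Per-split byte counts for the "high" config, from the deleted dataset_infos.json.
splits = {"test": 6989121, "train": 126243396, "validation": 6885287}

# dataset_size should be the sum of the split sizes.
dataset_size = sum(splits.values())
assert dataset_size == 140117804  # matches the recorded "dataset_size"
print(dataset_size)
```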
 
 
high/race-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a6a045efc539beec92b70a9324eaa277abbf754278eb7ab9925cf8942bfc55a
+ size 1683665

high/race-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b7814bc27bb545f094c3d3ad6182258f3bf03680d3a023c5e36df3b8e2da38a
+ size 30411741

high/race-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29291e7c5a8c5bafbf0df477a9a8d6c0870290b66aaf120083130742f053d371
+ size 1655471

middle/race-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:687ab2612db7ae83d67691eb2a6c52a5721b7311474beb2207e99cdbf358fe46
+ size 404724

middle/race-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f29b5749d4e066162dda2a50d29205514b2415d08cab8554d1635d0ea186d222
+ size 6969930

middle/race-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87b90eb59b39d92a2f6059a6e57b471ccd06e0bbea5ea2b3f512a57142f67ca7
+ size 406939
race.py DELETED
@@ -1,111 +0,0 @@
- """TODO(race): Add a description here."""
-
-
- import json
-
- import datasets
-
-
- _CITATION = """\
- @article{lai2017large,
-     title={RACE: Large-scale ReAding Comprehension Dataset From Examinations},
-     author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},
-     journal={arXiv preprint arXiv:1704.04683},
-     year={2017}
- }
- """
-
- _DESCRIPTION = """\
- Race is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
- dataset is collected from English examinations in China, which are designed for middle school and high school students.
- The dataset can be served as the training and test sets for machine comprehension.
-
- """
-
- _URL = "http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz"
-
-
- class Race(datasets.GeneratorBasedBuilder):
-     """ReAding Comprehension Dataset From Examination dataset from CMU"""
-
-     VERSION = datasets.Version("0.1.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="high", description="Exams designed for high school students", version=VERSION),
-         datasets.BuilderConfig(
-             name="middle", description="Exams designed for middle school students", version=VERSION
-         ),
-         datasets.BuilderConfig(
-             name="all", description="Exams designed for both high school and middle school students", version=VERSION
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "example_id": datasets.Value("string"),
-                     "article": datasets.Value("string"),
-                     "answer": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "options": datasets.features.Sequence(datasets.Value("string"))
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="http://www.cs.cmu.edu/~glai1/data/race/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         archive = dl_manager.download(_URL)
-         case = str(self.config.name)
-         if case == "all":
-             case = ""
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"train_test_or_eval": f"RACE/test/{case}", "files": dl_manager.iter_archive(archive)},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"train_test_or_eval": f"RACE/train/{case}", "files": dl_manager.iter_archive(archive)},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"train_test_or_eval": f"RACE/dev/{case}", "files": dl_manager.iter_archive(archive)},
-             ),
-         ]
-
-     def _generate_examples(self, train_test_or_eval, files):
-         """Yields examples."""
-         for file_idx, (path, f) in enumerate(files):
-             if path.startswith(train_test_or_eval) and path.endswith(".txt"):
-                 data = json.loads(f.read().decode("utf-8"))
-                 questions = data["questions"]
-                 answers = data["answers"]
-                 options = data["options"]
-                 for i in range(len(questions)):
-                     question = questions[i]
-                     answer = answers[i]
-                     option = options[i]
-                     yield f"{file_idx}_{i}", {
-                         "example_id": data["id"],
-                         "article": data["article"],
-                         "question": question,
-                         "answer": answer,
-                         "options": option,
-                     }
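The deleted loader's `_generate_examples` flattens each RACE passage file, which stores one article with parallel `questions`/`answers`/`options` lists, into one example per question. The same reshaping can be sketched standalone (the sample record below is invented for illustration, not taken from the dataset):

```python
import json

# A RACE-style record: one article with parallel per-question lists (invented sample).
raw = json.loads("""{
  "id": "middle0.txt",
  "article": "There is not enough oil in the world now...",
  "questions": ["What runs out?", "What should we do?"],
  "answers": ["A", "B"],
  "options": [["oil", "water"], ["save fuel", "drive more"]]
}""")

def flatten(data):
    """Yield one flat example per question, mirroring _generate_examples above."""
    for q, a, opts in zip(data["questions"], data["answers"], data["options"]):
        yield {
            "example_id": data["id"],
            "article": data["article"],
            "question": q,
            "answer": a,
            "options": opts,
        }

examples = list(flatten(raw))
print(len(examples))  # 2 flat examples from one passage file
```

This per-question layout is exactly what the Parquet files added in this commit store row by row.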