parquet-converter committed on
Commit
8bd1bec
1 Parent(s): e95117a

Update parquet files

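This commit replaces the script-based loader with pre-built Parquet files (added under `default/` below). A minimal sketch of reading those files with the `datasets` library's generic `parquet` builder, assuming the three files from this commit are available locally at the paths shown:

```python
from datasets import load_dataset

# File names as added in this commit; adjust the paths if the files live elsewhere.
data_files = {
    "train": "default/math_qa-train.parquet",
    "validation": "default/math_qa-validation.parquet",
    "test": "default/math_qa-test.parquet",
}

# The generic "parquet" builder reads the files directly; no loading script is needed.
ds = load_dataset("parquet", data_files=data_files)
print(ds)  # DatasetDict with train / validation / test splits
```
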
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
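
The deleted rules listed the file patterns this repository routed through Git LFS. As a rough illustration of what such glob patterns match (Python's `fnmatch` only approximates Git's attribute matching, so treat this as a sketch rather than the real resolution logic):

```python
from fnmatch import fnmatch

# A subset of the patterns from the deleted .gitattributes.
lfs_patterns = ["*.parquet", "*.bin", "*.zip", "*tfevents*"]

def looks_lfs_tracked(path: str) -> bool:
    # gitattributes patterns without a "/" are matched against the basename.
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pattern) for pattern in lfs_patterns)

print(looks_lfs_tracked("default/math_qa-train.parquet"))  # True
print(looks_lfs_tracked("README.md"))                      # False
```
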
README.md DELETED
@@ -1,226 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - crowdsourced
- - expert-generated
- license:
- - apache-2.0
- multilinguality:
- - monolingual
- pretty_name: MathQA
- size_categories:
- - 10K<n<100K
- source_datasets:
- - extended|aqua_rat
- task_categories:
- - question-answering
- task_ids:
- - multiple-choice-qa
- paperswithcode_id: mathqa
- dataset_info:
-   features:
-   - name: Problem
-     dtype: string
-   - name: Rationale
-     dtype: string
-   - name: options
-     dtype: string
-   - name: correct
-     dtype: string
-   - name: annotated_formula
-     dtype: string
-   - name: linear_formula
-     dtype: string
-   - name: category
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1844184
-     num_examples: 2985
-   - name: train
-     num_bytes: 18368826
-     num_examples: 29837
-   - name: validation
-     num_bytes: 2752969
-     num_examples: 4475
-   download_size: 7302821
-   dataset_size: 22965979
- ---
-
- # Dataset Card for MathQA
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://math-qa.github.io/math-QA/](https://math-qa.github.io/math-QA/)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms](https://aclanthology.org/N19-1245/)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 6.96 MB
- - **Size of the generated dataset:** 21.90 MB
- - **Total amount of disk used:** 28.87 MB
-
- ### Dataset Summary
-
- We introduce a large-scale dataset of math word problems.
-
- Our dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset with fully-specified operational programs.
-
- AQuA-RAT has provided the questions, options, rationale, and the correct options.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 6.96 MB
- - **Size of the generated dataset:** 21.90 MB
- - **Total amount of disk used:** 28.87 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "Problem": "a multiple choice test consists of 4 questions , and each question has 5 answer choices . in how many r ways can the test be completed if every question is unanswered ?",
-     "Rationale": "\"5 choices for each of the 4 questions , thus total r of 5 * 5 * 5 * 5 = 5 ^ 4 = 625 ways to answer all of them . answer : c .\"",
-     "annotated_formula": "power(5, 4)",
-     "category": "general",
-     "correct": "c",
-     "linear_formula": "power(n1,n0)|",
-     "options": "a ) 24 , b ) 120 , c ) 625 , d ) 720 , e ) 1024"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `Problem`: a `string` feature.
- - `Rationale`: a `string` feature.
- - `options`: a `string` feature.
- - `correct`: a `string` feature.
- - `annotated_formula`: a `string` feature.
- - `linear_formula`: a `string` feature.
- - `category`: a `string` feature.
-
- ### Data Splits
-
- | name |train|validation|test|
- |-------|----:|---------:|---:|
- |default|29837| 4475|2985|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The dataset is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
-
- ### Citation Information
-
- ```
- @inproceedings{amini-etal-2019-mathqa,
-     title = "{M}ath{QA}: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms",
-     author = "Amini, Aida  and
-       Gabriel, Saadia  and
-       Lin, Shanchuan  and
-       Koncel-Kedziorski, Rik  and
-       Choi, Yejin  and
-       Hajishirzi, Hannaneh",
-     booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
-     month = jun,
-     year = "2019",
-     address = "Minneapolis, Minnesota",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/N19-1245",
-     doi = "10.18653/v1/N19-1245",
-     pages = "2357--2367",
- }
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
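
The deleted card above documents the schema (seven string fields) and the split sizes. A quick sketch of checking both with the `datasets` library, assuming the dataset remains loadable from the Hub under the `math_qa` id:

```python
from datasets import load_dataset

ds = load_dataset("math_qa")  # assumed Hub id; adjust if the repository was renamed

# Split sizes from the card: train 29837, validation 4475, test 2985.
print({split: len(ds[split]) for split in ds})

# Every record carries the string fields listed under "Data Fields".
example = ds["train"][0]
print(sorted(example.keys()))
print(example["Problem"])
print(example["options"], "->", example["correct"])
```
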
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "\nOur dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset. AQuA-RAT has provided the questions, options, rationale, and the correct options.\n", "citation": "\n", "homepage": "https://math-qa.github.io/math-QA/", "license": "", "features": {"Problem": {"dtype": "string", "id": null, "_type": "Value"}, "Rationale": {"dtype": "string", "id": null, "_type": "Value"}, "options": {"dtype": "string", "id": null, "_type": "Value"}, "correct": {"dtype": "string", "id": null, "_type": "Value"}, "annotated_formula": {"dtype": "string", "id": null, "_type": "Value"}, "linear_formula": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "math_qa", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1844184, "num_examples": 2985, "dataset_name": "math_qa"}, "train": {"name": "train", "num_bytes": 18368826, "num_examples": 29837, "dataset_name": "math_qa"}, "validation": {"name": "validation", "num_bytes": 2752969, "num_examples": 4475, "dataset_name": "math_qa"}}, "download_checksums": {"https://math-qa.github.io/math-QA/data/MathQA.zip": {"num_bytes": 7302821, "checksum": "7344f30456a7aef3176d4866cc953b35b41bec44eda6b00cdbcfde2876b2f07a"}}, "download_size": 7302821, "dataset_size": 22965979, "size_in_bytes": 30268800}}
 
 
default/math_qa-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81bb81c1a0cc0b0de038008e0cc23e7f92bd7227511d13a12742c195bb1426ec
+ size 903426
default/math_qa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:942831a0b2134a2e41cc76dc4233bfeb454e15688a475e7bf139cd5e4b3a3924
+ size 9013732
default/math_qa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:502f93842014027701344ba79848ddc6d848899a21c909b37345cdd77d25c25b
+ size 1350140
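
The three added files are Git LFS pointer stubs; the actual Parquet data (roughly 0.9 MB, 9.0 MB, and 1.4 MB) lives in LFS storage. Once the real files are present locally (for example after `git lfs pull`), they can be read directly, e.g. with pandas; a sketch assuming pyarrow is installed:

```python
import pandas as pd

# Read one split straight from the converted Parquet file.
train_df = pd.read_parquet("default/math_qa-train.parquet")

print(len(train_df))              # expected 29837 rows, per the deleted dataset card
print(train_df.columns.tolist())  # Problem, Rationale, options, correct, ...
print(train_df.loc[0, "Problem"])
```
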
math_qa.py DELETED
@@ -1,84 +0,0 @@
- """TODO(math_qa): Add a description here."""
-
-
- import json
- import os
-
- import datasets
-
-
- # TODO(math_qa): BibTeX citation
- _CITATION = """
- """
-
- # TODO(math_qa):
- _DESCRIPTION = """
- Our dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset. AQuA-RAT has provided the questions, options, rationale, and the correct options.
- """
- _URL = "https://math-qa.github.io/math-QA/data/MathQA.zip"
-
-
- class MathQa(datasets.GeneratorBasedBuilder):
-     """TODO(math_qa): Short description of my dataset."""
-
-     # TODO(math_qa): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(math_qa): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     # These are the features of your dataset like images, labels ...
-                     "Problem": datasets.Value("string"),
-                     "Rationale": datasets.Value("string"),
-                     "options": datasets.Value("string"),
-                     "correct": datasets.Value("string"),
-                     "annotated_formula": datasets.Value("string"),
-                     "linear_formula": datasets.Value("string"),
-                     "category": datasets.Value("string"),
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://math-qa.github.io/math-QA/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(math_qa): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_path = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(dl_path, "train.json")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(dl_path, "test.json")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(dl_path, "dev.json")},
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(math_qa): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for id_, row in enumerate(data):
-                 yield id_, row
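
Before this commit, the deleted script above did the loading: it downloaded MathQA.zip, and `_generate_examples` read each split's JSON file and yielded one record per list entry. A minimal standalone sketch of that step, assuming a local copy of `train.json` from the extracted archive sits in the working directory:

```python
import json

# train.json comes from https://math-qa.github.io/math-QA/data/MathQA.zip
# (hypothetical local copy; the builder downloaded and extracted it automatically).
with open("train.json", encoding="utf-8") as f:
    data = json.load(f)  # a list of dicts: Problem, Rationale, options, correct, ...

for id_, row in enumerate(data[:3]):
    print(id_, row["Problem"])
```
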