Datasets: sciq

Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas
License: cc-by-nc-3.0

Commit 82f5708 · parquet-converter committed · 1 parent: dfc9851

Update parquet files
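This commit replaces the Python loading script with pre-converted parquet shards, so the data can be read without executing repository code. A minimal sketch of both access paths, assuming the Hub dataset id is `sciq` (substitute the actual repo id if it differs):

```python
# Sketch: two ways to read the parquet-converted SciQ data.
# Assumption: the Hub dataset id is "sciq".
import pandas as pd
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# 1) Via the datasets library, which now serves the parquet shards directly.
ds = load_dataset("sciq", split="train")
print(ds[0]["question"], "->", ds[0]["correct_answer"])

# 2) Via pandas, after fetching one shard added in this commit.
path = hf_hub_download(
    repo_id="sciq",                          # assumption: actual repo id may differ
    filename="default/sciq-train.parquet",   # path added by this commit
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.shape)  # the card reports 11679 train examples with 6 string columns
```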
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,207 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - cc-by-nc-3.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - closed-domain-qa
- paperswithcode_id: sciq
- pretty_name: SciQ
- dataset_info:
-   features:
-   - name: question
-     dtype: string
-   - name: distractor3
-     dtype: string
-   - name: distractor1
-     dtype: string
-   - name: distractor2
-     dtype: string
-   - name: correct_answer
-     dtype: string
-   - name: support
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 564826
-     num_examples: 1000
-   - name: train
-     num_bytes: 6556427
-     num_examples: 11679
-   - name: validation
-     num_bytes: 555019
-     num_examples: 1000
-   download_size: 2821345
-   dataset_size: 7676272
- ---
-
- # Dataset Card for "sciq"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 2.69 MB
- - **Size of the generated dataset:** 7.32 MB
- - **Total amount of disk used:** 10.01 MB
-
- ### Dataset Summary
-
- The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 2.69 MB
- - **Size of the generated dataset:** 7.32 MB
- - **Total amount of disk used:** 10.01 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "correct_answer": "coriolis effect",
-     "distractor1": "muon effect",
-     "distractor2": "centrifugal effect",
-     "distractor3": "tropical effect",
-     "question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
-     "support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `question`: a `string` feature.
- - `distractor3`: a `string` feature.
- - `distractor1`: a `string` feature.
- - `distractor2`: a `string` feature.
- - `correct_answer`: a `string` feature.
- - `support`: a `string` feature.
-
- ### Data Splits
-
- | name  |train|validation|test|
- |-------|----:|---------:|---:|
- |default|11679|      1000|1000|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
-
- ### Citation Information
-
- ```
- @inproceedings{SciQ,
-     title={Crowdsourcing Multiple Choice Science Questions},
-     author={Johannes Welbl, Nelson F. Liu, Matt Gardner},
-     year={2017},
-     journal={arXiv:1707.06209v1}
- }
- ```
-
- ### Contributions
-
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
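The deleted card describes each record as one question, one `correct_answer`, and three distractors. For illustration, a small sketch that assembles a record into a shuffled four-option multiple-choice item; the `as_choices` helper is hypothetical, not part of the dataset:

```python
import random

# Record copied from the cropped 'train' example in the deleted card
# (question text abbreviated here).
record = {
    "correct_answer": "coriolis effect",
    "distractor1": "muon effect",
    "distractor2": "centrifugal effect",
    "distractor3": "tropical effect",
    "question": "What phenomenon makes global winds blow northeast to southwest ...?",
}

def as_choices(rec, seed=0):
    """Hypothetical helper: mix the correct answer in with the three
    distractors and return (options, index of the correct option)."""
    options = [rec["correct_answer"], rec["distractor1"],
               rec["distractor2"], rec["distractor3"]]
    random.Random(seed).shuffle(options)
    return options, options.index(rec["correct_answer"])

options, answer_idx = as_choices(record)
print(options, "correct index:", answer_idx)
```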
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.\n\n", "citation": "@inproceedings{SciQ,\n    title={Crowdsourcing Multiple Choice Science Questions},\n    author={Johannes Welbl, Nelson F. Liu, Matt Gardner},\n    year={2017},\n    journal={arXiv:1707.06209v1}\n}\n", "homepage": "https://allenai.org/data/sciq", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "distractor3": {"dtype": "string", "id": null, "_type": "Value"}, "distractor1": {"dtype": "string", "id": null, "_type": "Value"}, "distractor2": {"dtype": "string", "id": null, "_type": "Value"}, "correct_answer": {"dtype": "string", "id": null, "_type": "Value"}, "support": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "sciq", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 564826, "num_examples": 1000, "dataset_name": "sciq"}, "train": {"name": "train", "num_bytes": 6556427, "num_examples": 11679, "dataset_name": "sciq"}, "validation": {"name": "validation", "num_bytes": 555019, "num_examples": 1000, "dataset_name": "sciq"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/SciQ.zip": {"num_bytes": 2821345, "checksum": "7f3312f6ac6b09970b32942d106a8c44ec0dad46a0369f17d635aff8e348a87c"}}, "download_size": 2821345, "dataset_size": 7676272, "size_in_bytes": 10497617}}
 
 
default/sciq-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:364f312c9997f2ea6496550839450f27b6db4ca556ebae659fd0c527e7e685f2
+ size 342807
default/sciq-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc71e6ba5c018db003c7b75dcd30f1570537c9c007f024a63ef24190927461a3
+ size 3993098
default/sciq-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12c1f7227435150ff546f858049a1031dd9074639943c262cf659549f855280a
+ size 338502
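The three added files are git-lfs pointers: the `oid sha256:` value is the SHA-256 digest of the actual parquet content and `size` is its byte length. A quick integrity check for a downloaded shard (the local filename is an assumption):

```python
# Sketch: verify a downloaded parquet shard against its LFS pointer.
import hashlib
import os

def sha256_of(path, chunk=1 << 20):
    """Stream the file so large shards don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

expected_oid = "364f312c9997f2ea6496550839450f27b6db4ca556ebae659fd0c527e7e685f2"
expected_size = 342807

path = "sciq-test.parquet"  # assumption: shard saved locally under this name
assert os.path.getsize(path) == expected_size
assert sha256_of(path) == expected_oid
```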
sciq.py DELETED
@@ -1,91 +0,0 @@
- """TODO(sciQ): Add a description here."""
-
-
- import json
- import os
-
- import datasets
-
-
- # TODO(sciQ): BibTeX citation
- _CITATION = """\
- @inproceedings{SciQ,
-     title={Crowdsourcing Multiple Choice Science Questions},
-     author={Johannes Welbl, Nelson F. Liu, Matt Gardner},
-     year={2017},
-     journal={arXiv:1707.06209v1}
- }
- """
-
- # TODO(sciQ):
- _DESCRIPTION = """\
- The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
-
- """
- _URL = "https://s3-us-west-2.amazonaws.com/ai2-website/data/SciQ.zip"
-
-
- class Sciq(datasets.GeneratorBasedBuilder):
-     """TODO(sciQ): Short description of my dataset."""
-
-     # TODO(sciQ): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(sciQ): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     # These are the features of your dataset like images, labels ...
-                     "question": datasets.Value("string"),
-                     "distractor3": datasets.Value("string"),
-                     "distractor1": datasets.Value("string"),
-                     "distractor2": datasets.Value("string"),
-                     "correct_answer": datasets.Value("string"),
-                     "support": datasets.Value("string"),
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://allenai.org/data/sciq",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(sciQ): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_URL)
-         data_dir = os.path.join(dl_dir, "SciQ dataset-2 3")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, "train.json")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, "valid.json")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, "test.json")},
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(sciQ): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-         for id_, row in enumerate(data):
-             yield id_, row
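For reference, the deleted script's download-and-iterate logic is easy to reproduce without the builder machinery. A standalone sketch under the same assumptions the script hard-codes (the S3 zip contains a `SciQ dataset-2 3` folder with `train.json`, `valid.json`, and `test.json`):

```python
# Sketch: replicate what sciq.py's _split_generators/_generate_examples did,
# without the datasets builder machinery.
import io
import json
import urllib.request
import zipfile

_URL = "https://s3-us-west-2.amazonaws.com/ai2-website/data/SciQ.zip"

with urllib.request.urlopen(_URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

# Folder name inside the zip, as hard-coded by the deleted script.
with archive.open("SciQ dataset-2 3/train.json") as f:
    data = json.load(f)

# Mirrors _generate_examples: enumerate rows as (key, example) pairs.
for id_, row in enumerate(data):
    print(id_, row["question"])
    break  # just show the first example
```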