Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas
albertvillanova (HF staff) committed
Commit 1664417 · 1 parent: 870fda1

Convert dataset to Parquet (#4)


- Convert dataset to Parquet (e41544f1cce95e1e671a8c93438616372ad49fd4)
- Add ARC-Easy data files (4d4ed1b0a54f45df33cd8eb45c157cc00cdaf35e)
- Delete loading script (a47c900f11d268f8922aa5d672e71e8e2c58f27c)
- Delete legacy dataset_infos.json (0555bd897c85012514721df4e419709132bb00d4)
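With the Parquet shards below in place, the configs load directly via the `datasets` library, with no loading script. A minimal sketch (the hub repo id `allenai/ai2_arc` is an assumption; substitute the actual repo path):

```python
# Minimal sketch: load the Parquet-backed dataset added in this commit.
# "allenai/ai2_arc" is an assumed repo id; adjust to the actual dataset path.
from datasets import load_dataset

arc = load_dataset("allenai/ai2_arc", "ARC-Challenge")  # or "ARC-Easy"
print(arc)                           # DatasetDict with train/test/validation
print(arc["train"][0]["question"])   # one grade-school science question
```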

ARC-Challenge/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62f03257e737aed263f55c6abf87c7bb0028a44a6bdd2a26eb1279eb42c1d1e9
+size 203808
ARC-Challenge/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e488c1587ffdcfc8443f916c53488a95cd471c5790e0746c6bfe4cecf20962cb
+size 189909
ARC-Challenge/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:395a5c88d1580d69855fbaee9450270578df1ad5af6259771cd0a42c20e99f05
+size 55743
ARC-Easy/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4160597d618ae851c7eb04e281574f3f654776216ac6b6641588d64527b47177
+size 346257
ARC-Easy/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b315db8a4be597dc7daa50a4e70d48dd7c990c32085629e6ccd8c926beaa80b5
+size 330598
ARC-Easy/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed890ff1e4cef7a7140d3a30dcea3ed2c9d467c6458f447ad9ef0176d8dcbb74
+size 86080
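Each file above is stored as a Git LFS pointer: `oid` is the SHA-256 of the actual file and `size` is its byte length. A sketch for verifying a downloaded shard against its pointer (file name and figures taken from the ARC-Challenge test entry above):

```python
# Sketch: check a downloaded Parquet shard against its Git LFS pointer.
# oid = sha256 of the file contents, size = length in bytes.
import hashlib


def verify_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size


# Values from the ARC-Challenge test pointer above:
print(verify_lfs_pointer(
    "ARC-Challenge/test-00000-of-00001.parquet",
    "62f03257e737aed263f55c6abf87c7bb0028a44a6bdd2a26eb1279eb42c1d1e9",
    203808,
))
```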
README.md CHANGED
@@ -5,8 +5,6 @@ language_creators:
 - found
 language:
 - en
-language_bcp47:
-- en-US
 license:
 - cc-by-sa-4.0
 multilinguality:
@@ -20,8 +18,9 @@ task_categories:
 task_ids:
 - open-domain-qa
 - multiple-choice-qa
-paperswithcode_id: null
 pretty_name: Ai2Arc
+language_bcp47:
+- en-US
 dataset_info:
 - config_name: ARC-Challenge
   features:
@@ -39,16 +38,16 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 351888
+    num_bytes: 349760
     num_examples: 1119
   - name: test
-    num_bytes: 377740
+    num_bytes: 375511
     num_examples: 1172
   - name: validation
-    num_bytes: 97254
+    num_bytes: 96660
     num_examples: 299
-  download_size: 680841265
-  dataset_size: 826882
+  download_size: 449460
+  dataset_size: 821931
 - config_name: ARC-Easy
   features:
   - name: id
@@ -65,16 +64,33 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 623254
+    num_bytes: 619000
     num_examples: 2251
   - name: test
-    num_bytes: 661997
+    num_bytes: 657514
     num_examples: 2376
   - name: validation
-    num_bytes: 158498
+    num_bytes: 157394
     num_examples: 570
-  download_size: 680841265
-  dataset_size: 1443749
+  download_size: 762935
+  dataset_size: 1433908
+configs:
+- config_name: ARC-Challenge
+  data_files:
+  - split: train
+    path: ARC-Challenge/train-*
+  - split: test
+    path: ARC-Challenge/test-*
+  - split: validation
+    path: ARC-Challenge/validation-*
+- config_name: ARC-Easy
+  data_files:
+  - split: train
+    path: ARC-Easy/train-*
+  - split: test
+    path: ARC-Easy/test-*
+  - split: validation
+    path: ARC-Easy/validation-*
 ---
 
 # Dataset Card for "ai2_arc"
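The new `configs`/`data_files` mapping also makes individual shards directly addressable, for example with pandas over the Hub filesystem. A sketch, assuming `huggingface_hub` is installed (its fsspec integration provides the `hf://` scheme) and the assumed repo id `allenai/ai2_arc`:

```python
# Sketch: read one shard straight from the Hub with pandas.
# Requires huggingface_hub for the "hf://" filesystem; the repo id is assumed.
import pandas as pd

df = pd.read_parquet("hf://datasets/allenai/ai2_arc/ARC-Easy/train-00000-of-00001.parquet")
print(len(df))                                 # 2251 rows per the card metadata
print(df[["question", "answerKey"]].head())
```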
ai2_arc.py DELETED
@@ -1,132 +0,0 @@
-"""TODO(arc): Add a description here."""
-
-
-import json
-import os
-
-import datasets
-
-
-# TODO(ai2_arc): BibTeX citation
-_CITATION = """\
-@article{allenai:arc,
-    author    = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and
-                 Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
-    title     = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
-    journal   = {arXiv:1803.05457v1},
-    year      = {2018},
-}
-"""
-
-# TODO(ai2_arc):
-_DESCRIPTION = """\
-A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in
-advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains
-only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also
-including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
-"""
-
-_URL = "https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip"
-
-
-class Ai2ArcConfig(datasets.BuilderConfig):
-    """BuilderConfig for Ai2ARC."""
-
-    def __init__(self, **kwargs):
-        """BuilderConfig for Ai2Arc.
-
-        Args:
-            **kwargs: keyword arguments forwarded to super.
-        """
-        super(Ai2ArcConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-
-
-class Ai2Arc(datasets.GeneratorBasedBuilder):
-    """TODO(arc): Short description of my dataset."""
-
-    # TODO(arc): Set up version.
-    VERSION = datasets.Version("1.0.0")
-    BUILDER_CONFIGS = [
-        Ai2ArcConfig(
-            name="ARC-Challenge",
-            description="""\
-            Challenge Set of 2590 “hard” questions (those that both a retrieval and a co-occurrence method fail to answer correctly)
-            """,
-        ),
-        Ai2ArcConfig(
-            name="ARC-Easy",
-            description="""\
-            Easy Set of 5197 questions
-            """,
-        ),
-    ]
-
-    def _info(self):
-        # TODO(ai2_arc): Specifies the datasets.DatasetInfo object
-        return datasets.DatasetInfo(
-            # This is the description that will appear on the datasets page.
-            description=_DESCRIPTION,
-            # datasets.features.FeatureConnectors
-            features=datasets.Features(
-                {
-                    "id": datasets.Value("string"),
-                    "question": datasets.Value("string"),
-                    "choices": datasets.features.Sequence(
-                        {"text": datasets.Value("string"), "label": datasets.Value("string")}
-                    ),
-                    "answerKey": datasets.Value("string")
-                    # These are the features of your dataset like images, labels ...
-                }
-            ),
-            # If there's a common (input, target) tuple from the features,
-            # specify them here. They'll be used if as_supervised=True in
-            # builder.as_dataset.
-            supervised_keys=None,
-            # Homepage of the dataset for documentation
-            homepage="https://allenai.org/data/arc",
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        """Returns SplitGenerators."""
-        # TODO(ai2_arc): Downloads the data and defines the splits
-        # dl_manager is a datasets.download.DownloadManager that can be used to
-        # download and extract URLs
-        dl_dir = dl_manager.download_and_extract(_URL)
-        data_dir = os.path.join(dl_dir, "ARC-V1-Feb2018-2")
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, self.config.name + "-Train.jsonl")},
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, self.config.name + "-Test.jsonl")},
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, self.config.name + "-Dev.jsonl")},
-            ),
-        ]
-
-    def _generate_examples(self, filepath):
-        """Yields examples."""
-        # TODO(ai2_arc): Yields (key, example) tuples from the dataset
-        with open(filepath, encoding="utf-8") as f:
-            for row in f:
-                data = json.loads(row)
-                answerkey = data["answerKey"]
-                id_ = data["id"]
-                question = data["question"]["stem"]
-                choices = data["question"]["choices"]
-                text_choices = [choice["text"] for choice in choices]
-                label_choices = [choice["label"] for choice in choices]
-                yield id_, {
-                    "id": id_,
-                    "answerKey": answerkey,
-                    "question": question,
-                    "choices": {"text": text_choices, "label": label_choices},
-                }
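For anyone who still needs to parse the original ARC JSONL releases, the removed `_generate_examples` logic is easy to reproduce standalone; a minimal sketch of the same per-row transformation:

```python
# Sketch: the per-row mapping the deleted loader performed, flattening the
# nested "question" object into the card's schema (id, question, choices, answerKey).
import json


def parse_arc_line(row: str):
    data = json.loads(row)
    choices = data["question"]["choices"]
    return data["id"], {
        "id": data["id"],
        "answerKey": data["answerKey"],
        "question": data["question"]["stem"],
        "choices": {
            "text": [c["text"] for c in choices],
            "label": [c["label"] for c in choices],
        },
    }
```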
dataset_infos.json DELETED
@@ -1 +0,0 @@
1
- {"ARC-Challenge": {"description": "A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\n advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\n only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\n including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.\n", "citation": "@article{allenai:arc,\n author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and\n Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},\n title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},\n journal = {arXiv:1803.05457v1},\n year = {2018},\n}\n", "homepage": "https://allenai.org/data/arc", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "ai2_arc", "config_name": "ARC-Challenge", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 377740, "num_examples": 1172, "dataset_name": "ai2_arc"}, "train": {"name": "train", "num_bytes": 351888, "num_examples": 1119, "dataset_name": "ai2_arc"}, "validation": {"name": "validation", "num_bytes": 97254, "num_examples": 299, "dataset_name": "ai2_arc"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip": {"num_bytes": 680841265, "checksum": "6d2d5ab50b2ceec6ba5f79c921be77cf2de712ea25a2b3f4fff3acc101cecfa0"}}, "download_size": 680841265, "dataset_size": 826882, "size_in_bytes": 681668147}, "ARC-Easy": {"description": "A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\n advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\n only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\n including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.\n", "citation": "@article{allenai:arc,\n author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and\n Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},\n title = {Think you have Solved Question Answering? 
Try ARC, the AI2 Reasoning Challenge},\n journal = {arXiv:1803.05457v1},\n year = {2018},\n}\n", "homepage": "https://allenai.org/data/arc", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "ai2_arc", "config_name": "ARC-Easy", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 661997, "num_examples": 2376, "dataset_name": "ai2_arc"}, "train": {"name": "train", "num_bytes": 623254, "num_examples": 2251, "dataset_name": "ai2_arc"}, "validation": {"name": "validation", "num_bytes": 158498, "num_examples": 570, "dataset_name": "ai2_arc"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip": {"num_bytes": 680841265, "checksum": "6d2d5ab50b2ceec6ba5f79c921be77cf2de712ea25a2b3f4fff3acc101cecfa0"}}, "download_size": 680841265, "dataset_size": 1443749, "size_in_bytes": 682285014}}
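The removed file used the standard `dataset_infos.json` layout (one entry per config, each carrying features, splits, and size figures); the README YAML above now holds this metadata instead. A sketch of reading such a legacy file:

```python
# Sketch: inspect a legacy dataset_infos.json, printing per-config split
# counts and download size (the figures now moved into the README YAML).
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for config_name, info in infos.items():
    splits = {name: split["num_examples"] for name, split in info["splits"].items()}
    print(config_name, splits, "download_size:", info["download_size"])
```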