parquet-converter committed
Commit 64bc556
Parent: 7abfcdb

Update parquet files

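Since this commit replaces the loading script below with pre-built Parquet shards, downstream loading is unchanged; a minimal sketch, assuming the repository is published on the Hugging Face Hub as `cosmos_qa`:

```python
# Minimal sketch: load the converted dataset with the `datasets` library.
# Assumes this repository is available on the Hugging Face Hub as "cosmos_qa".
from datasets import load_dataset

ds = load_dataset("cosmos_qa")            # resolves the Parquet shards under default/
print(ds)                                  # DatasetDict with train/validation/test splits
print(ds["validation"][0]["question"])     # columns match the dataset card below
```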
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,225 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: CosmosQA
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - multiple-choice
- task_ids:
- - multiple-choice-qa
- paperswithcode_id: cosmosqa
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: context
-     dtype: string
-   - name: question
-     dtype: string
-   - name: answer0
-     dtype: string
-   - name: answer1
-     dtype: string
-   - name: answer2
-     dtype: string
-   - name: answer3
-     dtype: string
-   - name: label
-     dtype: int32
-   splits:
-   - name: train
-     num_bytes: 17159918
-     num_examples: 25262
-   - name: test
-     num_bytes: 5121479
-     num_examples: 6963
-   - name: validation
-     num_bytes: 2186987
-     num_examples: 2985
-   download_size: 24399475
-   dataset_size: 24468384
- ---
-
- # Dataset Card for "cosmos_qa"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://wilburone.github.io/cosmos/](https://wilburone.github.io/cosmos/)
- - **Repository:** https://github.com/wilburOne/cosmosqa/
- - **Paper:** [Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning](https://arxiv.org/abs/1909.00277)
- - **Point of Contact:** [Lifu Huang](mailto:warrior.fu@gmail.com)
- - **Size of downloaded dataset files:** 23.27 MB
- - **Size of the generated dataset:** 23.37 MB
- - **Total amount of disk used:** 46.64 MB
-
- ### Dataset Summary
-
- Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning the likely causes or effects of events that require reasoning beyond the exact text spans in the context.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 23.27 MB
- - **Size of the generated dataset:** 23.37 MB
- - **Total amount of disk used:** 46.64 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answer0": "If he gets married in the church he wo nt have to get a divorce .",
-     "answer1": "He wants to get married to a different person .",
-     "answer2": "He wants to know if he does nt like this girl can he divorce her ?",
-     "answer3": "None of the above choices .",
-     "context": "\"Do i need to go for a legal divorce ? I wanted to marry a woman but she is not in the same religion , so i am not concern of th...",
-     "id": "3BFF0DJK8XA7YNK4QYIGCOG1A95STE##3180JW2OT5AF02OISBX66RFOCTG5J7##A2LTOS0AZ3B28A##Blog_56156##q1_a1##378G7J1SJNCDAAIN46FM2P7T6KZEW2",
-     "label": 1,
-     "question": "Why is this person asking about divorce ?"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `id`: a `string` feature.
- - `context`: a `string` feature.
- - `question`: a `string` feature.
- - `answer0`: a `string` feature.
- - `answer1`: a `string` feature.
- - `answer2`: a `string` feature.
- - `answer3`: a `string` feature.
- - `label`: an `int32` feature.
-
- ### Data Splits
-
- | name |train|validation|test|
- |-------|----:|---------:|---:|
- |default|25262| 2985|6963|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- As reported via email by Yejin Choi, the dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
-
- ### Citation Information
-
- ```
- @inproceedings{huang-etal-2019-cosmos,
-     title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning",
-     author = "Huang, Lifu and
-       Le Bras, Ronan and
-       Bhagavatula, Chandra and
-       Choi, Yejin",
-     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
-     month = nov,
-     year = "2019",
-     address = "Hong Kong, China",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/D19-1243",
-     doi = "10.18653/v1/D19-1243",
-     pages = "2391--2401",
- }
- ```
-
-
- ### Contributions
-
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
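As the card above notes, `label` is an `int32` that indexes the four `answer*` fields; a small sketch of mapping a row back to its answer text (the helper name and example row are made up for illustration):

```python
# Sketch: map a row's integer label back to the corresponding answer column.
# Field names follow the dataset card above; label -1 marks unlabeled test rows
# (see the loading script below). The example row is fabricated.
def answer_text(row):
    if row["label"] < 0:            # test split rows carry no gold label
        return None
    return row[f"answer{row['label']}"]

example = {"answer0": "a", "answer1": "b", "answer2": "c", "answer3": "d", "label": 1}
assert answer_text(example) == "b"
```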
cosmos_qa.py DELETED
@@ -1,116 +0,0 @@
- """Cosmos QA dataset."""
-
-
- import csv
- import json
-
- import datasets
-
-
- _HOMEPAGE = "https://wilburone.github.io/cosmos/"
-
- _DESCRIPTION = """\
- Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning the likely causes or effects of events that require reasoning beyond the exact text spans in the context.
- """
-
- _CITATION = """\
- @inproceedings{huang-etal-2019-cosmos,
-     title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning",
-     author = "Huang, Lifu and
-       Le Bras, Ronan and
-       Bhagavatula, Chandra and
-       Choi, Yejin",
-     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
-     month = nov,
-     year = "2019",
-     address = "Hong Kong, China",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/D19-1243",
-     doi = "10.18653/v1/D19-1243",
-     pages = "2391--2401",
- }
- """
-
- _LICENSE = "CC BY 4.0"
-
- _URL = "https://github.com/wilburOne/cosmosqa/raw/master/data/"
- _URLS = {
-     "train": _URL + "train.csv",
-     "test": _URL + "test.jsonl",
-     "dev": _URL + "valid.csv",
- }
-
-
- class CosmosQa(datasets.GeneratorBasedBuilder):
-     """Cosmos QA dataset."""
-
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "answer0": datasets.Value("string"),
-                     "answer1": datasets.Value("string"),
-                     "answer2": datasets.Value("string"),
-                     "answer3": datasets.Value("string"),
-                     "label": datasets.Value("int32"),
-                 }
-             ),
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-             license=_LICENSE,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = _URLS
-         dl_dir = dl_manager.download_and_extract(urls_to_download)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"filepath": dl_dir["train"], "split": "train"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"filepath": dl_dir["test"], "split": "test"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"filepath": dl_dir["dev"], "split": "dev"},
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples."""
-         with open(filepath, encoding="utf-8") as f:
-             if split == "test":
-                 for id_, row in enumerate(f):
-                     data = json.loads(row)
-                     yield id_, {
-                         "id": data["id"],
-                         "context": data["context"],
-                         "question": data["question"],
-                         "answer0": data["answer0"],
-                         "answer1": data["answer1"],
-                         "answer2": data["answer2"],
-                         "answer3": data["answer3"],
-                         "label": int(data.get("label", -1)),
-                     }
-             else:
-                 data = csv.DictReader(f)
-                 for id_, row in enumerate(data):
-                     yield id_, {
-                         "id": row["id"],
-                         "context": row["context"],
-                         "question": row["question"],
-                         "answer0": row["answer0"],
-                         "answer1": row["answer1"],
-                         "answer2": row["answer2"],
-                         "answer3": row["answer3"],
-                         "label": int(row.get("label", -1)),
-                     }
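The split handling in `_generate_examples` above (JSON Lines for test, CSV for train/dev) can also be reproduced outside the builder; a minimal sketch, assuming local copies of the files listed in `_URLS`:

```python
# Sketch: parse the raw Cosmos QA files the same way _generate_examples does.
# Assumes train.csv / valid.csv / test.jsonl were fetched from the _URLS above.
import csv
import json

def read_split(filepath, split):
    with open(filepath, encoding="utf-8") as f:
        if split == "test":
            for line in f:                    # test split is JSON Lines
                data = json.loads(line)
                data["label"] = int(data.get("label", -1))
                yield data
        else:
            for row in csv.DictReader(f):     # train/dev are CSV with a header row
                row["label"] = int(row.get("label", -1))
                yield row
```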
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context\n", "citation": "@inproceedings{huang-etal-2019-cosmos,\n title = \"Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning\",\n author = \"Huang, Lifu and\n Le Bras, Ronan and\n Bhagavatula, Chandra and\n Choi, Yejin\",\n booktitle = \"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)\",\n month = nov,\n year = \"2019\",\n address = \"Hong Kong, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/D19-1243\",\n doi = \"10.18653/v1/D19-1243\",\n pages = \"2391--2401\",\n}\n", "homepage": "https://wilburone.github.io/cosmos/", "license": "CC BY 4.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer0": {"dtype": "string", "id": null, "_type": "Value"}, "answer1": {"dtype": "string", "id": null, "_type": "Value"}, "answer2": {"dtype": "string", "id": null, "_type": "Value"}, "answer3": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cosmos_qa", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 17159918, "num_examples": 25262, "dataset_name": "cosmos_qa"}, "test": {"name": "test", "num_bytes": 5121479, "num_examples": 6963, "dataset_name": "cosmos_qa"}, "validation": {"name": "validation", "num_bytes": 2186987, "num_examples": 2985, "dataset_name": "cosmos_qa"}}, "download_checksums": {"https://github.com/wilburOne/cosmosqa/raw/master/data/train.csv": {"num_bytes": 16660449, "checksum": "d8d5ca1f9f6534b6530550718591af89372d976a8fc419360fab4158dee4d0b2"}, "https://github.com/wilburOne/cosmosqa/raw/master/data/test.jsonl": {"num_bytes": 5610681, "checksum": "70005196dc2588b95de34f1657b25e2c1a4810cfe55b5bb0c0e15580c37b3ed0"}, "https://github.com/wilburOne/cosmosqa/raw/master/data/valid.csv": {"num_bytes": 2128345, "checksum": "a6a94fc1463ca82bb10f98ef68ed535405e6f5c36e044ff8e136b5c19dea63f3"}}, "download_size": 24399475, "post_processing_size": null, "dataset_size": 24468384, "size_in_bytes": 48867859}}
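The `download_checksums` block above pins each source file to a SHA-256 digest; a short sketch of verifying a downloaded copy against it (the local path is hypothetical):

```python
# Sketch: verify a downloaded source file against the SHA-256 recorded in
# dataset_infos.json above. The local path "data/train.csv" is hypothetical.
import hashlib

EXPECTED = "d8d5ca1f9f6534b6530550718591af89372d976a8fc419360fab4158dee4d0b2"  # train.csv

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("data/train.csv") == EXPECTED
```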
default/cosmos_qa-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82079c2d47a0b7c71e3f3c428e7a451b9a21886bcd4010a1e55c002177a86679
+ size 2873193
default/cosmos_qa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2eec429c4a2385f615b65343121ed88deeac385a698a5ead185097065b338da0
+ size 7923049
default/cosmos_qa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3591b6e76f927e8ebbac6282ceaa5b3e8b36ce7107a438de8fbe2fabf2371229
+ size 1233336
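Each hunk above adds a Git LFS pointer file (`version`, `oid`, `size`) rather than the Parquet bytes themselves; after `git lfs pull` in a local clone, the shards read like any other Parquet file. A sketch, assuming `pandas` with a Parquet engine installed:

```python
# Sketch: read an added Parquet shard from a local clone (after `git lfs pull`).
import pandas as pd

df = pd.read_parquet("default/cosmos_qa-validation.parquet")
print(df.shape)               # expected (2985, 8) per the splits table above
print(df.columns.tolist())    # id, context, question, answer0..answer3, label
```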