Tasks: Multiple Choice
Sub-tasks: multiple-choice-qa
Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: crowdsourced
Source Datasets: original
License: cc-by-4.0
Dataset Card for "cosmos_qa"
Dataset Summary
Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions about the likely causes or effects of events that require reasoning beyond the exact text spans in the context.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
default
- Size of downloaded dataset files: 23.27 MB
- Size of the generated dataset: 23.37 MB
- Total amount of disk used: 46.64 MB
An example from the 'validation' split looks as follows (cropped because it was too long):
{
  "answer0": "If he gets married in the church he wo nt have to get a divorce .",
  "answer1": "He wants to get married to a different person .",
  "answer2": "He wants to know if he does nt like this girl can he divorce her ?",
  "answer3": "None of the above choices .",
  "context": "\"Do i need to go for a legal divorce ? I wanted to marry a woman but she is not in the same religion , so i am not concern of th...",
  "id": "3BFF0DJK8XA7YNK4QYIGCOG1A95STE##3180JW2OT5AF02OISBX66RFOCTG5J7##A2LTOS0AZ3B28A##Blog_56156##q1_a1##378G7J1SJNCDAAIN46FM2P7T6KZEW2",
  "label": 1,
  "question": "Why is this person asking about divorce ?"
}
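The `label` field is the index of the correct choice among `answer0` through `answer3`. As a minimal sketch (the `pick_answer` helper is our own illustration, not part of any dataset API), the correct choice for the instance above can be recovered like this:

```python
# Illustrative helper: map a Cosmos QA row's integer label
# to the text of the correct answer choice.
def pick_answer(row: dict) -> str:
    return row[f"answer{row['label']}"]

example = {
    "answer0": "If he gets married in the church he wo nt have to get a divorce .",
    "answer1": "He wants to get married to a different person .",
    "answer2": "He wants to know if he does nt like this girl can he divorce her ?",
    "answer3": "None of the above choices .",
    "label": 1,
}

print(pick_answer(example))  # prints the choice at index 1
```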
Data Fields
The data fields are the same among all splits.
default
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answer0`: a `string` feature.
- `answer1`: a `string` feature.
- `answer2`: a `string` feature.
- `answer3`: a `string` feature.
- `label`: an `int32` feature.
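Putting these fields together, a row can be rendered as a lettered multiple-choice prompt. A minimal sketch (the `format_prompt` function and the A–D letter convention are our own, not part of the dataset):

```python
# Illustrative: render a Cosmos QA row as a lettered multiple-choice prompt.
def format_prompt(row: dict) -> str:
    letters = ["A", "B", "C", "D"]
    choices = "\n".join(
        f"{letters[i]}. {row[f'answer{i}']}" for i in range(4)
    )
    return f"{row['context']}\n\nQuestion: {row['question']}\n{choices}"

row = {
    "context": "Do i need to go for a legal divorce ?",
    "question": "Why is this person asking about divorce ?",
    "answer0": "first", "answer1": "second",
    "answer2": "third", "answer3": "fourth",
}
print(format_prompt(row))
```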
Data Splits
| name | train | validation | test |
|---|---|---|---|
| default | 25262 | 2985 | 6963 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
As reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0 license.
Citation Information
@inproceedings{huang-etal-2019-cosmos,
title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning",
author = "Huang, Lifu and
Le Bras, Ronan and
Bhagavatula, Chandra and
Choi, Yejin",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1243",
doi = "10.18653/v1/D19-1243",
pages = "2391--2401",
}
Contributions
Thanks to @patrickvonplaten, @lewtun, @albertvillanova, @thomwolf for adding this dataset.
Downloads last month: 38,087
Homepage: wilburone.github.io
Repository: github.com
Paper: Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning
Point of Contact: Lifu Huang