Dataset: cosmos_qa

Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: crowdsourced
Source Datasets: original
License: cc-by-4.0

Dataset Card for "cosmos_qa"

Dataset Summary

Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions about the likely causes or effects of events that require reasoning beyond the exact text spans in the context.
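
The dataset can be loaded with the Hugging Face datasets library. A minimal sketch, assuming datasets is installed:

from datasets import load_dataset

# Download and cache all three splits (train/validation/test).
cosmos = load_dataset("cosmos_qa")

# Each example pairs a context paragraph with a question and four answer choices.
example = cosmos["validation"][0]
print(example["question"])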

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

default

  • Size of downloaded dataset files: 23.27 MB
  • Size of the generated dataset: 23.37 MB
  • Total amount of disk used: 46.64 MB

An example of 'validation' looks as follows.

This example was too long and was cropped:

{
    "answer0": "If he gets married in the church he wo nt have to get a divorce .",
    "answer1": "He wants to get married to a different person .",
    "answer2": "He wants to know if he does nt like this girl can he divorce her ?",
    "answer3": "None of the above choices .",
    "context": "\"Do i need to go for a legal divorce ? I wanted to marry a woman but she is not in the same religion , so i am not concern of th...",
    "id": "3BFF0DJK8XA7YNK4QYIGCOG1A95STE##3180JW2OT5AF02OISBX66RFOCTG5J7##A2LTOS0AZ3B28A##Blog_56156##q1_a1##378G7J1SJNCDAAIN46FM2P7T6KZEW2",
    "label": 1,
    "question": "Why is this person asking about divorce ?"
}

Data Fields

The data fields are the same among all splits.

default

  • id: a string feature.
  • context: a string feature.
  • question: a string feature.
  • answer0: a string feature.
  • answer1: a string feature.
  • answer2: a string feature.
  • answer3: a string feature.
  • label: an int32 feature; the index (0–3) of the correct answer among the four answer fields.
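
Because label indexes into the four answer fields, the text of the correct answer can be recovered directly. A small sketch (the correct_answer helper is illustrative, not part of the dataset API):

from datasets import load_dataset

cosmos_val = load_dataset("cosmos_qa", split="validation")

def correct_answer(example):
    # `label` selects one of answer0, answer1, answer2, answer3.
    return example[f"answer{example['label']}"]

print(correct_answer(cosmos_val[0]))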

Data Splits

name    | train | validation | test
--------|-------|------------|-----
default | 25262 | 2985       | 6963
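
The split sizes above can be verified programmatically; a quick sketch using the same library:

from datasets import load_dataset

cosmos = load_dataset("cosmos_qa")
# Expected: {'train': 25262, 'validation': 2985, 'test': 6963}
print({split: cosmos[split].num_rows for split in cosmos})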

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

As reported via email by Yejin Choi, the dataset is licensed under the CC BY 4.0 license.

Citation Information

@inproceedings{huang-etal-2019-cosmos,
    title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning",
    author = "Huang, Lifu  and
      Le Bras, Ronan  and
      Bhagavatula, Chandra  and
      Choi, Yejin",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-1243",
    doi = "10.18653/v1/D19-1243",
    pages = "2391--2401",
}

Contributions

Thanks to @patrickvonplaten, @lewtun, @albertvillanova, @thomwolf for adding this dataset.
