Task Categories: question-answering
Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original
Licenses: cc-by-3.0

Dataset Card for WikiMovies

Dataset Summary

The WikiMovies dataset consists of roughly 100k (templated) questions over 75k entities, based on questions with answers in the Open Movie Database (OMDb). It is the QA part of the Movie Dialog dataset.

Supported Tasks and Leaderboards

  • Question Answering

Languages

The text in the dataset is written in English.

Dataset Structure

Data Instances

The raw data consists of question-answer pairs separated by a tab. Here are three examples:

1 what does Grégoire Colin appear in?	Before the Rain
1 Joe Thomas appears in which movies?	The Inbetweeners Movie, The Inbetweeners 2
1 what films did Michelle Trachtenberg star in?	Inspector Gadget, Black Christmas, Ice Princess, Harriet the Spy, The Scribbler

It is unclear what the 1 at the beginning of each line is for, but it has been removed in the Dataset object.
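
For illustration, here is a minimal sketch of how the raw format could be parsed, assuming a local copy of one of the wiki-entities_qa_*.txt files (the helper name and file path are hypothetical):

def parse_wiki_movies(path):
    """Parse raw WikiMovies lines of the form: '1 <question><TAB><answer>'."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            # Drop the leading "1 " marker, then split question from answer on the tab.
            # Multiple answers stay in one comma-separated string, as in the examples above.
            question, answer = line.split(" ", 1)[1].split("\t")
            examples.append({"question": question, "answer": answer})
    return examples

examples = parse_wiki_movies("wiki-entities_qa_train.txt")  # hypothetical local path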

Data Fields

Here is an example of the raw data ingested by Datasets:

{
  'answer': 'Before the Rain',
  'question': 'what does Grégoire Colin appear in?'
}

  • answer: a string containing the answer to the corresponding question.
  • question: a string containing the relevant question.
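
A minimal sketch of loading the dataset with the Hugging Face datasets library and reading these fields (the printed values are illustrative; the first row may differ):

from datasets import load_dataset

ds = load_dataset("wiki_movies", split="train")
print(ds[0]["question"])  # e.g. 'what does Grégoire Colin appear in?'
print(ds[0]["answer"])    # e.g. 'Before the Rain'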

Data Splits

The data is split into train, test, and dev sets. The split sizes are as follows:

wiki-entities_qa_*    n examples
train.txt             96185
dev.txt               10000
test.txt              9952
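
As a quick sanity check, one could load all splits and print their sizes; note this sketch assumes the Hub exposes the dev set under the split name "validation":

from datasets import load_dataset

ds = load_dataset("wiki_movies")
for name, split in ds.items():
    # Expected counts per the table above (the exact split naming is an assumption).
    print(name, split.num_rows)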

Dataset Creation

Curation Rationale

WikiMovies was built with the following goals in mind: (i) machine learning techniques should have ample training examples for learning; and (ii) one can easily analyze the performance of different representations of knowledge and break down the results by question type. The dataset can be downloaded from http://fb.ai/babi.

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@misc{miller2016keyvalue,
      title={Key-Value Memory Networks for Directly Reading Documents},
      author={Alexander Miller and Adam Fisch and Jesse Dodge and Amir-Hossein Karimi and Antoine Bordes and Jason Weston},
      year={2016},
      eprint={1606.03126},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Contributions

Thanks to @aclifton314 for adding this dataset.