Task Categories: question-answering
Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original
Licenses: cc-by-3.0

Dataset Card for WikiMovies

Dataset Summary

The WikiMovies dataset consists of roughly 100k templated questions over 75k entities, based on questions with answers in the Open Movie Database (OMDb). It is the QA part of the Movie Dialog dataset.

Supported Tasks and Leaderboards

  • Question Answering


Languages

The text in the dataset is written in English.

Dataset Structure

Data Instances

The raw data consists of question answer pairs separated by a tab. Here are 3 examples:

1 what does Grégoire Colin appear in?	Before the Rain
1 Joe Thomas appears in which movies?	The Inbetweeners Movie, The Inbetweeners 2
1 what films did Michelle Trachtenberg star in?	Inspector Gadget, Black Christmas, Ice Princess, Harriet the Spy, The Scribbler

The purpose of the 1 at the beginning of each line is unclear; it has been removed in the Dataset object.
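
The raw lines above can be parsed with a few lines of Python. This is a minimal sketch based only on the format shown (a leading "1 " marker, the question, a tab, then the answer string); the helper name is my own, not part of the dataset's tooling:

```python
def parse_line(line):
    """Parse one raw WikiMovies line into a question/answer dict."""
    text = line.rstrip("\n")
    # Drop the leading "1 " marker present on each line.
    if text.startswith("1 "):
        text = text[2:]
    # The question and answer are separated by a single tab.
    question, _, answer = text.partition("\t")
    return {"question": question, "answer": answer}

raw = "1 what does Grégoire Colin appear in?\tBefore the Rain"
print(parse_line(raw))
# {'question': 'what does Grégoire Colin appear in?', 'answer': 'Before the Rain'}
```

Note that multi-answer lines keep all answers in a single comma-separated string, as in the Joe Thomas example above.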

Data Fields

Here is an example of the raw data ingested by Datasets:

{'answer': 'Before the Rain',
 'question': 'what does Grégoire Colin appear in?'}

  • answer: a string containing the answer to the corresponding question.
  • question: a string containing the relevant question.

Data Splits

The data is split into train, test, and dev sets. The split sizes are as follows:

wiki-entities_qa_* n examples
train.txt 96185
dev.txt 10000
test.txt 9952
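
As a quick sanity check on the table above, the split sizes can be summed to get the total number of question-answer pairs (the dict below simply restates the table):

```python
# Split sizes as listed in the table above.
split_sizes = {"train.txt": 96185, "dev.txt": 10000, "test.txt": 9952}

# Total question-answer pairs across all splits.
total = sum(split_sizes.values())
print(total)  # 116137
```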

Dataset Creation

Curation Rationale

WikiMovies was built with the following goals in mind: (i) machine learning techniques should have ample training examples for learning; and (ii) one should be able to easily analyze the performance of different representations of knowledge and break down the results by question type.

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]


Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@inproceedings{miller2016keyvalue,
      title={Key-Value Memory Networks for Directly Reading Documents},
      author={Alexander Miller and Adam Fisch and Jesse Dodge and Amir-Hossein Karimi and Antoine Bordes and Jason Weston},
      booktitle={Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year={2016}
}


Thanks to @aclifton314 for adding this dataset.