
The Multitask Long Document Benchmark

MuLD (Multitask Long Document Benchmark) is a set of 6 NLP tasks where the inputs consist of at least 10,000 words. The benchmark covers a wide variety of task types including translation, summarization, question answering, and classification. Additionally, there is a range of output lengths, from a single-word classification label all the way up to an output longer than the input text.

Supported Tasks and Leaderboards

The 6 MuLD tasks consist of:

  • NarrativeQA - A question answering dataset requiring an understanding of the plot of books and films.
  • HotpotQA - An expanded version of HotpotQA requiring multihop reasoning between multiple Wikipedia pages. This expanded version includes the full Wikipedia pages.
  • OpenSubtitles - A translation dataset based on the OpenSubtitles 2018 dataset. The full subtitles for each TV show are provided, one subtitle per line, in both English and German.
  • VLSP (Very Long Scientific Papers) - An expanded version of the Scientific Papers summarization dataset. Instead of removing very long papers (e.g. theses), we explicitly include them and remove any short papers.
  • AO3 Style Change Detection - Consists of documents formed from the works of multiple Archive of Our Own authors, where the task is to predict the author of each paragraph.
  • Movie Character Types - Predicting whether a named character is the hero or villain given a movie script.

Dataset Structure

The data is presented in a text-to-text format where each instance contains an input string, an output string, and (optionally) JSON-encoded metadata:

{'input': 'Who was wearing the blue shirt? The beginning...', 'output': ['John'], 'metadata': ''}
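Since the metadata field is a JSON-encoded string (empty when absent), reading one instance might look like the following sketch. The instance dict here is a hypothetical example mirroring the schema above, not an actual record from the dataset:

```python
import json


def parse_instance(instance):
    """Decode an instance from MuLD's text-to-text format.

    Returns the input text, the list of reference outputs, and the
    metadata decoded from its JSON string (None when the string is empty).
    """
    metadata = json.loads(instance["metadata"]) if instance["metadata"] else None
    return instance["input"], instance["output"], metadata


# Hypothetical instance mirroring the schema shown above.
example = {
    "input": "Who was wearing the blue shirt? The beginning...",
    "output": ["John"],
    "metadata": "",
}
text, answers, meta = parse_instance(example)
```

The same helper works unchanged for tasks whose metadata field is populated, such as the OpenSubtitles instances carrying ContraPro annotations.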

Data Fields

  • input: a string whose structure differs per task but is presented in a unified format
  • output: a list of strings, each a possible answer. Most instances have only a single answer, but some tasks, such as NarrativeQA and VLSP, may have multiple.
  • metadata: additional metadata which may be helpful for evaluation. In this version, only the OpenSubtitles task contains metadata (for the ContraPro annotations).
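Because output is a list of acceptable answers, a natural way to score a prediction is to take the best score over all references. The sketch below uses a normalized exact match purely for illustration; it is an assumption, not the benchmark's official metric:

```python
def normalize(text):
    """Lowercase and collapse whitespace for a lenient comparison."""
    return " ".join(text.lower().split())


def best_exact_match(prediction, references):
    """Return 1.0 if the prediction matches any reference answer, else 0.0."""
    return max(
        1.0 if normalize(prediction) == normalize(ref) else 0.0
        for ref in references
    )


score = best_exact_match("john", ["John"])  # matches after normalization
```

The same max-over-references pattern applies to softer metrics (e.g. ROUGE for the summarization tasks), swapping the per-reference comparison accordingly.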

Data Splits

Each task contains different splits depending on what was available in the source datasets:

Task Name Train Validation Test
NarrativeQA ✔️ ✔️ ✔️
HotpotQA ✔️ ✔️
AO3 Style Change Detection ✔️ ✔️ ✔️
Movie Character Types ✔️ ✔️ ✔️
OpenSubtitles ✔️ ✔️

Citation Information

@misc{hudson2022muld,
      title={MuLD: The Multitask Long Document Benchmark},
      author={G Thomas Hudson and Noura Al Moubayed},
      year={2022},
      eprint={2202.07362},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Please also cite the papers directly used in this benchmark.
