
Dataset Card for Cryptonite

Dataset Summary

Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).



Dataset Structure

Data Instances

This is one example from the train set.

{
  'clue': 'make progress socially in stated region (5)',
  'answer': 'climb',
  'date': 971654400000,
  'enumeration': '(5)',
  'id': 'Times-31523-6across',
  'publisher': 'Times',
  'quick': False
}

Data Fields

  • clue: a string representing the clue provided for the crossword
  • answer: a string representing the answer to the clue
  • enumeration: a string representing the length of the answer, e.g. '(5)' for a five-letter answer
  • publisher: a string representing the publisher of the crossword
  • date: an int64 representing the UNIX timestamp, in milliseconds, of the crossword's publication date
  • quick: a bool representing whether the clue comes from a quick crossword (a variant aimed at beginners that is easier to solve)
  • id: a string to uniquely identify a given example in the dataset
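As a sketch of how these fields fit together, the snippet below inspects the train-set example shown above; the interpretation of enumeration (letter count of the answer) and date (millisecond timestamp) is inferred from that example, not from official documentation:

```python
import re
from datetime import datetime, timezone

# The example record from the train set, copied from above.
example = {
    "clue": "make progress socially in stated region (5)",
    "answer": "climb",
    "date": 971654400000,
    "enumeration": "(5)",
    "id": "Times-31523-6across",
    "publisher": "Times",
    "quick": False,
}

# The enumeration appears to give the letter count(s) of the answer,
# e.g. "(5)" for a five-letter answer.
lengths = [int(n) for n in re.findall(r"\d+", example["enumeration"])]
assert sum(lengths) == len(example["answer"])

# The date appears to be a UNIX timestamp in milliseconds.
published = datetime.fromtimestamp(example["date"] / 1000, tz=timezone.utc)
print(published.date())  # 2000-10-16
```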

Data Splits

Train (470,804 examples), validation (26,156 examples), test (26,157 examples).
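The split sizes above correspond to a roughly 90/5/5 partition, which a quick check confirms:

```python
# Split sizes as stated in the dataset card.
splits = {"train": 470_804, "validation": 26_156, "test": 26_157}

total = sum(splits.values())
for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
# train: 470804 (90.0%)
# validation: 26156 (5.0%)
# test: 26157 (5.0%)
```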

Dataset Creation

Curation Rationale

Cryptic crosswords collected from the Times and the Telegraph.

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]


Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy

Licensing Information

[More Information Needed]

Citation Information

@misc{cryptonite,
      title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
      author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
}

Thanks to @theo-m for adding this dataset.
