This dataset contains question-answer pairs that model the predicate-argument structure of deverbal nominalizations. Each question starts with a wh-word (Who, What, Where, etc.) and contains the verbal form of a nominalization from the sentence; the answers are phrases in the sentence.
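To make the format concrete, here is an illustrative sketch of a single QA pair. The field names and the example sentence are hypothetical, chosen only to mirror the description above; consult the dataset files for the actual schema.

```python
# Illustrative sketch of one QANom-style QA pair.
# Field names and the example sentence are hypothetical --
# see the dataset files for the actual schema.
qa_pair = {
    "sentence": "The construction of the new bridge took three years.",
    "nominalization": "construction",     # deverbal noun in the sentence
    "verb_form": "construct",             # verbal form of the nominalization
    "question": "What was constructed?",  # wh-question containing the verb form
    "answer": "the new bridge",           # answer is a phrase in the sentence
}

# The answer is a phrase taken from the original sentence.
assert qa_pair["answer"] in qa_pair["sentence"]
print(qa_pair["question"], "->", qa_pair["answer"])
```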

See the paper for details: QANom: Question-Answer driven SRL for Nominalizations (Klein et al., COLING 2020)

For previewing the QANom data along with the verbal annotations of QASRL, check out our data browser. Also check out our GitHub repository, which contains code for nominalization identification, QANom annotation, evaluation, and models.

The dataset was annotated by selected workers from Amazon Mechanical Turk.
