Task Categories: question-answering
Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: original
Licenses: cc-by-3.0

Dataset Card for SimpleQuestions

Dataset Summary

[More Information Needed]

Supported Tasks and Leaderboards

[More Information Needed]

Languages

[More Information Needed]

Dataset Structure

Data Instances

Here are some examples of questions and facts:

  • What American cartoonist is the creator of Andy Lippincott? Fact: (andy_lippincott, character_created_by, garry_trudeau)
  • Which forest is Fires Creek in? Fact: (fires_creek, containedby, nantahala_national_forest)
  • What does Jimmy Neutron do? Fact: (jimmy_neutron, fictional_character_occupation, inventor)
  • What dietary restriction is incompatible with kimchi? Fact: (kimchi, incompatible_with_dietary_restrictions, veganism)
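
Each example above pairs a natural-language question with a single (subject, relation, object) fact. As an illustration only, here is a minimal sketch of how such a record might be represented and parsed from a tab-separated line; the column order, separator, and field names are assumptions for this sketch, not the dataset's documented schema:

```python
from dataclasses import dataclass


@dataclass
class SimpleQuestionsExample:
    """One question paired with its supporting fact triple."""
    question: str
    subject: str
    relation: str
    obj: str


def parse_line(line: str) -> SimpleQuestionsExample:
    # Assumed layout: subject, relation, object, question, separated by tabs.
    subject, relation, obj, question = line.rstrip("\n").split("\t")
    return SimpleQuestionsExample(
        question=question, subject=subject, relation=relation, obj=obj
    )


example = parse_line(
    "fires_creek\tcontainedby\tnantahala_national_forest\t"
    "Which forest is Fires Creek in?"
)
```

A record type like this makes the single-fact structure of the task explicit: answering the question amounts to predicting the fact triple.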

Data Fields

[More Information Needed]

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

[More Information Needed]

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

[More Information Needed]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]

Contributions

Thanks to @abhishekkrthakur for adding this dataset.