dataset preview does not work

#6
by MoritzLaurer - opened

It would be great to inspect the dataset with the preview function, but currently it only shows an error message saying the dataset cannot be displayed.

BigScience Workshop org

cc @severo

cc @albertvillanova. Getting the list of split names seems to require downloading all the files (https://huggingface.co/datasets/bigscience/P3/blob/main/P3.py#L152).

For example, the following takes forever:

from datasets import get_dataset_split_names
get_dataset_split_names(path="bigscience/P3", config_name="adversarial_qa_dbert_answer_the_following_q")

Note that streaming also takes a while:

from datasets import load_dataset
load_dataset("bigscience/P3", name="adversarial_qa_dbert_answer_the_following_q", split="train", streaming=True)
BigScience Workshop org

@severo at first sight I would say it is not a bug in datasets, but rather caused by the way this loading script has been implemented...

I can have a look...

Sure, I didn't mean it was a bug in datasets. It would be awesome to have your opinion on the loading script.

BigScience Workshop org

Thanks for looking at these, @albertvillanova @severo!

BigScience Workshop org

I have addressed the TimeoutError with PR #8.

Thanks to everyone for looking into this! I currently still get the following error message in the preview window (at the top of this page):

Error code:   SplitsNamesError
Exception:    SplitsNotFoundError
Message:      The split names could not be parsed from the dataset config.
Traceback:    Traceback (most recent call last):
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 407, in _info
                  await _file_info(
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 792, in _file_info
                  r.raise_for_status()
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1005, in raise_for_status
                  raise ClientResponseError(
              aiohttp.client_exceptions.ClientResponseError: 503, message='Service Unavailable', url=URL('https://s3-proxy.huggingface.tech/lfs.huggingface.co/datasets/bigscience/P3/007901602060b6512c20c36927cc12caca6d756ff43511ad1b195b88738640bf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK/20230105/us-east-1/s3/aws4_request&X-Amz-Date=20230105T112349Z&X-Amz-Expires=259200&X-Amz-Signature=e020015fb020a6d577a8a44443d81eec793bc6d3874c378d5e9b15d8dfe4a752&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3D%22test.tfrecord-00000-of-00001%22&x-id=GetObject')
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 388, in get_dataset_config_info
                  for split_generator in builder._split_generators(
                File "/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/4f6db716ff19001c01ff0924cdfed96b155e28a3e1afb7a1e1d7f5efa7a49fce/P3.py", line 154, in _split_generators
                  data_dir = dl_manager.download_and_extract(_URLs)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 973, in download_and_extract
                  return self.extract(self.download(url_or_urls))
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 938, in extract
                  urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 444, in map_nested
                  mapped = [
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 445, in <listcomp>
                  _single_map_nested((function, obj, types, None, True, None))
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 362, in _single_map_nested
                  return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 362, in <dictcomp>
                  return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 364, in _single_map_nested
                  mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 364, in <listcomp>
                  mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
                  return function(data_struct)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 943, in _extract
                  protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 396, in _get_extraction_protocol
                  with fsspec.open(urlpath, **kwargs) as f:
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/core.py", line 103, in __enter__
                  f = self.fs.open(self.path, mode=mode)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 1106, in open
                  f = self._open(
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 346, in _open
                  size = size or self.info(path, **kwargs)["size"]
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/asyn.py", line 113, in wrapper
                  return sync(self.loop, func, *args, **kwargs)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/asyn.py", line 98, in sync
                  raise return_result
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/asyn.py", line 53, in _runner
                  result[0] = await coro
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 420, in _info
                  raise FileNotFoundError(url) from exc
              FileNotFoundError: https://huggingface.co/datasets/bigscience/P3/resolve/main/data/rotten_tomatoes_Movie_Expressed_Sentiment_2/test.tfrecord-00000-of-00001
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/workers/datasets_based/src/datasets_based/workers/splits.py", line 127, in compute_splits_response
                  split_full_names = get_dataset_split_full_names(dataset=dataset, use_auth_token=use_auth_token)
                File "/src/workers/datasets_based/src/datasets_based/workers/splits.py", line 87, in get_dataset_split_full_names
                  return [
                File "/src/workers/datasets_based/src/datasets_based/workers/splits.py", line 90, in <listcomp>
                  for split in get_dataset_split_names(path=dataset, config_name=config, use_auth_token=use_auth_token)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 442, in get_dataset_split_names
                  info = get_dataset_config_info(
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 393, in get_dataset_config_info
                  raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
              datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.
BigScience Workshop org

We have addressed the 503 ClientResponseError ('Service Unavailable') with PR #9.

BigScience Workshop org
edited Jan 6, 2023

The remaining issue to fix is related to tensorflow: an UnimplementedError.

We have addressed it with PR #10.

Thanks, @albertvillanova!

With https://huggingface.co/datasets/bigscience/P3/discussions/8 (and also a bugfix in the Hub itself), we have been able to get the list of splits (see https://datasets-server.huggingface.co/splits?dataset=bigscience/P3), which is why the dataset viewer now allows selecting a config and a split.

And with https://huggingface.co/datasets/bigscience/P3/discussions/9, the time to compute the list of splits has gone from 5 hours to 5 minutes! That changes everything.

Finally, we now have an error when fetching the rows:

Error code:   StreamingRowsError
Exception:    UnimplementedError
Message:      File system scheme 'https' not implemented (file: 'https://huggingface.co/datasets/bigscience/P3/resolve/main/data/adversarial_qa_dbert_answer_the_following_q/validation.tfrecord-00000-of-00001')
Traceback:    Traceback (most recent call last):
                File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 484, in compute_first_rows_response
                  rows = get_rows(
                File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 119, in decorator
                  return func(*args, **kwargs)
                File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 175, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 846, in __iter__
                  for key, example in self._iter():
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 788, in _iter
                  yield from ex_iterable
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
                  yield from self.generate_examples_fn(**self.kwargs)
                File "/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/37807f3c9a6fd1cbb13f63a7e56dc24ddf9553c683097b80d39bdc4ea82a52d7/P3.py", line 214, in _generate_examples
                  ds = load_cached_task(features_dict, tfrecord)
                File "/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/37807f3c9a6fd1cbb13f63a7e56dc24ddf9553c683097b80d39bdc4ea82a52d7/P3.py", line 67, in load_cached_task
                  ds = tf.data.TFRecordDataset(tf.io.gfile.glob([tfrecord])) # TODO -> handle multiple shards
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/lib/io/file_io.py", line 443, in get_matching_files_v2
                  return [
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/lib/io/file_io.py", line 447, in <listcomp>
                  for matching_filename in _pywrap_file_io.GetMatchingFiles(
              tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme 'https' not implemented (file: 'https://huggingface.co/datasets/bigscience/P3/resolve/main/data/adversarial_qa_dbert_answer_the_following_q/validation.tfrecord-00000-of-00001')
BigScience Workshop org

I think the backtrace above is outdated. The current one is:

Error code:   StreamingRowsError
Exception:    UnimplementedError
Message:      {{function_node __wrapped__IteratorGetNext_output_types_4_device_/job:localhost/replica:0/task:0/device:CPU:0}} File system scheme 'https' not implemented (file: 'https://huggingface.co/datasets/bigscience/P3/resolve/main/data/adversarial_qa_dbert_answer_the_following_q/train.tfrecord-00000-of-00001') [Op:IteratorGetNext]
Traceback:    Traceback (most recent call last):
                File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 484, in compute_first_rows_response
                  rows = get_rows(
                File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 119, in decorator
                  return func(*args, **kwargs)
                File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 175, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 846, in __iter__
                  for key, example in self._iter():
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 788, in _iter
                  yield from ex_iterable
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
                  yield from self.generate_examples_fn(**self.kwargs)
                File "/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/8552511bfe0ad20ea5cc051eddc55d07e3e6502745d6fb2209c12d3d68245098/P3.py", line 216, in _generate_examples
                  for ex in ds.as_numpy_iterator():
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4635, in __next__
                  return nest.map_structure(to_numpy, next(self._iterator))
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 766, in __next__
                  return self._next_internal()
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 749, in _next_internal
                  ret = gen_dataset_ops.iterator_get_next(
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 3017, in iterator_get_next
                  _ops.raise_from_not_ok_status(e, name)
                File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 7209, in raise_from_not_ok_status
                  raise core._status_to_exception(e) from None  # pylint: disable=protected-access
              tensorflow.python.framework.errors_impl.UnimplementedError: {{function_node __wrapped__IteratorGetNext_output_types_4_device_/job:localhost/replica:0/task:0/device:CPU:0}} File system scheme 'https' not implemented (file: 'https://huggingface.co/datasets/bigscience/P3/resolve/main/data/adversarial_qa_dbert_answer_the_following_q/train.tfrecord-00000-of-00001') [Op:IteratorGetNext]

I'm investigating it...

BigScience Workshop org

@severo This is weird... I can't reproduce the error:

In [1]: from datasets import load_dataset; ds = load_dataset("/path/to/my/local/bigscience/P3", "adversarial_qa_dbert_answer_the_following_q", split="train", streaming=True); item = next(iter(ds)); item
Out[1]: 
{'inputs': [9246,
  8,
  826,
  5454,
...
 'targets': [24242, 800, 26939, 1],
 'targets_pretokenized': '\nsubjective idealism\n'}

I have checked with tensorflow versions 2.7.0 (my local version) and 2.10.1 (the same as in datasets-server): the error is not raised...

Any hint?

@albertvillanova Your code works because the data files are available locally, but that's not the case with datasets-server. Instead, it relies on remote files, which tf.data.TFRecordDataset does not support, hence the error.
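For the record, one possible way around it (a rough sketch, not necessarily what the script should end up doing) is to drop tf.data entirely and read the TFRecord framing directly over HTTPS with fsspec; each payload could then be decoded with a protoc-generated SequenceExample message (hypothetical module name in the comment below):

import struct
import fsspec
# from ._tfrecord_example_pb2 import SequenceExample  # hypothetical protoc-generated module

def iter_tfrecord_payloads(url):
    # TFRecord framing: 8-byte little-endian payload length, 4-byte masked CRC of the
    # length, `length` bytes of payload, 4-byte masked CRC of the payload.
    with fsspec.open(url, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file
            (length,) = struct.unpack("<Q", header)
            f.read(4)                 # skip the length CRC
            payload = f.read(length)  # serialized tf.train.SequenceExample bytes
            f.read(4)                 # skip the payload CRC
            yield payload

# for raw in iter_tfrecord_payloads("https://huggingface.co/datasets/bigscience/P3/resolve/main/data/adversarial_qa_dbert_answer_the_following_q/train.tfrecord-00000-of-00001"):
#     example = SequenceExample.FromString(raw)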

BigScience Workshop org
edited Jan 20, 2023

Yes, @severo and I discussed this on Slack last Friday... :(

BigScience Workshop org

The viewer is working now.

I'm closing this issue.

albertvillanova changed discussion status to closed

The viewer is not working anymore; see https://github.com/huggingface/datasets-server/issues/1365.

The trace is now:

Error code:   ConfigNamesError
Exception:    TypeError
Message:      Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 56, in compute_config_names_response
                  for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 334, in get_dataset_config_names
                  builder_cls = import_main_class(dataset_module.module_path)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 116, in import_main_class
                  module = importlib.import_module(module_path)
                File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
                  return _bootstrap._gcd_import(name[level:], package, level)
                File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
                File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
                File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
                File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
                File "<frozen importlib._bootstrap_external>", line 850, in exec_module
                File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
                File "/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/204f22caf7f0cbaf01a8631ec396c1cab69f8d71f276fb8619fae696536874ab/P3.py", line 23, in <module>
                  from ._tfrecord_example_pb2 import SequenceExample
                File "/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/204f22caf7f0cbaf01a8631ec396c1cab69f8d71f276fb8619fae696536874ab/_tfrecord_example_pb2.py", line 39, in <module>
                  _descriptor.FieldDescriptor(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/google/protobuf/descriptor.py", line 561, in __new__
                  _message.Message._CheckCalledFromGeneratedFile()
              TypeError: Descriptors cannot not be created directly.
              If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
              If you cannot immediately regenerate your protos, some other possible workarounds are:
               1. Downgrade the protobuf package to 3.20.x or lower.
               2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
              
              More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

cc @albertvillanova @mariosasko @lhoestq

severo changed discussion status to open
BigScience Workshop org

It looks like an issue with datasets-server not being able to create a google.protobuf.descriptor.FieldDescriptor. Maybe we should check whether there was a breaking change on this in the latest protobuf releases?

@lhoestq Both solutions from the error message fix the issue. However, I think the best solution is to generate a proto file compatible with the newer protobuf, add it to the repo, and then choose which proto file to import based on the version of the protobuf package installed (inside the script).
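Something along these lines inside the loading script, with a hypothetical name for the regenerated module (which would still need to be added to the repo):

import google.protobuf
from packaging import version

if version.parse(google.protobuf.__version__) < version.parse("4.0.0"):
    # the existing generated file works with protobuf 3.20.x and lower
    from ._tfrecord_example_pb2 import SequenceExample
else:
    # file regenerated with a recent protoc, for protobuf 4.x (hypothetical module)
    from ._tfrecord_example_pb2_v4 import SequenceExample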

As we won't support dataset scripts anymore, maybe it's time to convert to Parquet or another supported file format?

@severo Indeed! I've started the process of converting the dataset to Parquet at https://huggingface.co/datasets/bigscience/P3/discussions/18.
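For reference, the general shape of such a conversion (a sketch, not the exact commands used there) is to load each config with the legacy script and push it back to the Hub, which re-uploads the data as Parquet and lets the viewer work without any loading script:

from datasets import load_dataset

config = "adversarial_qa_dbert_answer_the_following_q"
ds = load_dataset("bigscience/P3", config)            # runs the legacy TFRecord-based script
ds.push_to_hub("bigscience/P3", config_name=config)   # re-uploads this config as Parquet shards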

Closing the discussion since @mariosasko converted the dataset to a data-only (Parquet) format 🎉

severo changed discussion status to closed
