Dataset Viewer issue: StreamingRowsError

#1
by tbone5563 - opened

The dataset viewer is not working.

Error details:

Error code:   StreamingRowsError
Exception:    KeyError
Message:      'png'
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 320, in compute
                  compute_first_rows_from_parquet_response(
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 88, in compute_first_rows_from_parquet_response
                  rows_index = indexer.get_rows_index(
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 631, in get_rows_index
                  return RowsIndex(
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 512, in __init__
                  self.parquet_index = self._init_parquet_index(
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 529, in _init_parquet_index
                  response = get_previous_step_or_raise(
                File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 566, in get_previous_step_or_raise
                  raise CachedArtifactError(
              libcommon.simple_cache.CachedArtifactError: The previous step failed.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 91, in get_rows_or_raise
                  return get_rows(
                File "/src/libs/libcommon/src/libcommon/utils.py", line 183, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 68, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1389, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 234, in __iter__
                  yield from self.generate_examples_fn(**self.kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
                  example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
              KeyError: 'png'
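For context, here is a minimal sketch (dummy samples and simplified logic, not the actual datasets code) of why this KeyError appears: the loader infers the field names from a well-formed sample and then looks them up in every sample, so a sample whose files were grouped under the wrong key is missing the expected field.

```python
field_names = ["png", "json"]  # inferred from the first, well-formed sample

# a well-formed sample and one whose files were mis-grouped (dummy bytes)
good = {"__key__": "14_Jun_coronacases_case2_128", "png": b"...", "json": b"{}"}
bad = {"__key__": "15_Cohen_1-s2", "0-S0929664620300449-gr3_lrg-b.png": b"..."}

try:
    for example in (good, bad):
        for field_name in field_names:
            # mirrors the line from the traceback above
            example[field_name] = {
                "path": example["__key__"] + "." + field_name,
                "bytes": example[field_name],
            }
except KeyError as err:
    print("KeyError:", err)  # same KeyError: 'png' as in the traceback
```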

cc @albertvillanova @lhoestq @severo.

I'm not sure what the reason is, but it's not the first occurrence of this error.

cc @lhoestq : do you know why we get this kind of issue on webdatasets?

Each sample should have a png and a json. Here you can check that the first 5 samples indeed have one png and one json each:

>>> from datasets.download import StreamingDownloadManager
>>> [(i, filename) for i, (filename, _) in zip(range(10), StreamingDownloadManager().iter_archive("hf://datasets/tbone5563/tar_images/train_data.tar"))]
[(0, '14_Jun_coronacases_case2_128.png'),
 (1, '14_Jun_coronacases_case2_128.json'),
 (2, '16_Morozov_study_0276_17.png'),
 (3, '16_Morozov_study_0276_17.json'),
 (4, '14_Jun_radiopaedia_40_86625_0_case18_15.png'),
 (5, '14_Jun_radiopaedia_40_86625_0_case18_15.json'),
 (6, '6_Rahimzadeh_137covid_patient20_SR_4_IM00021.png'),
 (7, '6_Rahimzadeh_137covid_patient20_SR_4_IM00021.json'),
 (8, '6_Rahimzadeh_137covid_patient24_SR_4_IM00007.png'),
 (9, '6_Rahimzadeh_137covid_patient24_SR_4_IM00007.json')]

You can run these checks and verify that it's the case for all samples:

>>> a = [filename for filename, _ in StreamingDownloadManager().iter_archive("hf://datasets/tbone5563/tar_images/train_data.tar")]
>>> all(x.endswith(".png") for x in a[::2]) and all(x.endswith(".json") for x in a[1::2]) and all(a[i][:-4] == a[i+1][:-5] for i in range(0, len(a), 2))
True

Same for the test file:

>>> a = [filename for filename, _ in StreamingDownloadManager().iter_archive("hf://datasets/tbone5563/tar_images/test_data.tar")]
>>> all(x.endswith(".png") for x in a[::2]) and all(x.endswith(".json") for x in a[1::2]) and all(a[i][:-4] == a[i+1][:-5] for i in range(0, len(a), 2))
True

So I guess this is a bug from the datasets library. I opened https://github.com/huggingface/datasets/issues/6880 but I won't have the bandwidth to investigate this week.

The webdataset code is here if someone wants to investigate: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py

lhoestq changed discussion status to closed
lhoestq changed discussion status to open

I have investigated the issue: it is caused by a bug that appears when a file basename in the TAR contains an extra dot (besides the one before the extension):

  • Example 970: 15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png

I have opened a pull request to fix it.

After investigation by @lhoestq, we realized that the webdataset loader splits basenames at the first dot, so that 15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png becomes 15_Cohen_1-s2 as the grouping __key__, and 0-S0929664620300449-gr3_lrg-b.png as the additional field key to be added to the example.
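The difference is easy to see in a small standard-library sketch (the exact split call inside the library may differ):

```python
import os

name = "15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png"

# Splitting at the FIRST dot, as the webdataset key-grouping does:
key, field = name.split(".", 1)
print(key)    # 15_Cohen_1-s2
print(field)  # 0-S0929664620300449-gr3_lrg-b.png

# Splitting at the LAST dot, which is what these filenames assume:
stem, ext = os.path.splitext(name)
print(stem)   # 15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b
print(ext)    # .png
```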

So @tbone5563, to make this dataset load as expected, you should rename the files within the TARs so that each basename contains a single dot, the one before the file extension.
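One way to do that without rebuilding the dataset by hand is a small script like the following (sanitize_tar is a hypothetical helper name; it assumes a flat archive with no directory components, as in this dataset):

```python
import tarfile

def sanitize_tar(src_path: str, dst_path: str) -> None:
    """Copy a tar, replacing extra dots in member names with underscores.

    Hypothetical helper: keeps only the final dot (the extension
    separator) and assumes a flat archive with no directory parts.
    """
    with tarfile.open(src_path) as src, tarfile.open(dst_path, "w") as dst:
        for member in src:
            stem, dot, ext = member.name.rpartition(".")
            member.name = stem.replace(".", "_") + dot + ext
            fileobj = src.extractfile(member) if member.isfile() else None
            dst.addfile(member, fileobj)

# sanitize_tar("train_data.tar", "train_data_fixed.tar")
```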
