Dataset Viewer issue
Reported by @mweiss: see https://github.com/huggingface/datasets-server/issues/942
The dataset viewer is not working.
Error details:
Error code: SplitsNamesError
Exception: SplitsNotFoundError
Message: The split names could not be parsed from the dataset config.
Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 388, in get_dataset_config_info
    for split_generator in builder._split_generators(
  File "/tmp/modules-cache/datasets_modules/datasets/mweiss--fashion_mnist_corrupted/85a1803777f06711b46d39e368002ec284bd9aa39d6c058d8221ebe7e8f212dc/fashion_mnist_corrupted.py", line 92, in _split_generators
    downloaded_files = dl_manager.download_and_extract(urls_to_download)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1074, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1026, in extract
    urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 443, in map_nested
    mapped = [
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
    return function(data_struct)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1031, in _extract
    protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 434, in _get_extraction_protocol
    return _get_extraction_protocol_with_magic_number(f)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 405, in _get_extraction_protocol_with_magic_number
    f.seek(0)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 747, in seek
    raise ValueError("Cannot seek streaming HTTP file")
ValueError: Cannot seek streaming HTTP file

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/splits.py", line 119, in compute_splits_response
    split_items = get_dataset_split_full_names(dataset=dataset, use_auth_token=use_auth_token)
  File "/src/services/worker/src/worker/job_runners/splits.py", line 76, in get_dataset_split_full_names
    return [
  File "/src/services/worker/src/worker/job_runners/splits.py", line 79, in <listcomp>
    for split in get_dataset_split_names(path=dataset, config_name=config, use_auth_token=use_auth_token)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 442, in get_dataset_split_names
    info = get_dataset_config_info(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 393, in get_dataset_config_info
    raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.
```
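For context on where the inner traceback fails: `_get_extraction_protocol_with_magic_number` reads the leading bytes of the remote file to detect a compression format, then calls `seek(0)` to rewind so downstream readers start at byte 0. A simplified illustration of that idea (not the actual `datasets` source; the function name and magic-number table here are trimmed for the sketch):

```python
import io

# Illustrative subset of compression magic numbers (assumption: trimmed
# from the larger table datasets actually uses).
MAGIC_NUMBERS = {
    b"\x1f\x8b": "gzip",    # gzip stream
    b"PK\x03\x04": "zip",   # zip archive
}

def sniff_protocol(f):
    """Detect a compression format from the file's leading bytes."""
    magic = f.read(4)
    # Rewind so the real reader sees the file from the start.
    # This is the seek(0) that a streaming HTTP file cannot perform.
    f.seek(0)
    for prefix, protocol in MAGIC_NUMBERS.items():
        if magic.startswith(prefix):
            return protocol
    return None

print(sniff_protocol(io.BytesIO(b"\x1f\x8b\x08\x00...")))  # gzip
print(sniff_protocol(io.BytesIO(b"\x93NUMPY...")))         # None (plain .npy)
```

On a seekable local file the rewind is harmless; on fsspec's streaming HTTP file it raises the `ValueError` in the traceback.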
@mweiss please note that GitHub servers do not allow HTTP range requests, and these are required to stream when using `np.load` (it calls the `seek` method on the file object). I have tested locally and I get this error:
```
~/.cache/huggingface/modules/datasets_modules/datasets/fashion_mnist_corrupted/9ca3df645be34f53b48d8c0806fd59a12b2751fbe22075cfda62be632b31bbef/fashion_mnist_corrupted.py in _generate_examples(self, filepath, split)
    119         # Images
    120         with open(filepath[0], "rb") as f:
--> 121             images = np.load(f)
    122         with open(filepath[1], "rb") as f:
    123             labels = np.load(f)

.../venv/lib/python3.9/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
    425         # If the file size is less than N, we need to make sure not
    426         # to seek past the beginning of the file
--> 427         fid.seek(-min(N, len(magic)), 1)  # back-up
    428         if magic.startswith(_ZIP_PREFIX) or magic.startswith(_ZIP_SUFFIX):
    429             # zip-file (assume .npz)

.../venv/lib/python3.9/site-packages/fsspec/implementations/http.py in seek(self, loc, whence)
    745         if loc == self.loc and whence == 0:
    746             return
--> 747         raise ValueError("Cannot seek streaming HTTP file")
    748
    749     async def _read(self, num=-1):

ValueError: Cannot seek streaming HTTP file
```
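This failure can be reproduced offline. A minimal sketch, assuming only that the remote file behaves like fsspec's streaming HTTP file (reads succeed, any `seek` raises): since `np.load` reads the magic bytes and then seeks backwards, a non-seekable stream is enough to trigger the error.

```python
import io
import numpy as np

class NonSeekableFile:
    """Mimics fsspec's streaming HTTP file: reading works, seeking raises."""
    def __init__(self, data: bytes):
        self._buf = io.BytesIO(data)

    def read(self, n=-1):
        return self._buf.read(n)

    def seek(self, loc, whence=0):
        raise ValueError("Cannot seek streaming HTTP file")

# Serialize a small array to .npy bytes, then load it back through the
# non-seekable wrapper, which is effectively what streaming mode does.
buf = io.BytesIO()
np.save(buf, np.arange(5))

try:
    np.load(NonSeekableFile(buf.getvalue()))
except ValueError as err:
    print(err)  # Cannot seek streaming HTTP file
```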
One alternative would be to host the data files (as a ZIP data archive: `data.zip`) on the Hugging Face Hub as well. Hugging Face Hub servers do support HTTP range requests.
Let me know if you find this interesting. I can open a PR...
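For readers wondering why range-request support matters: a server that honors the `Range` header lets a client fetch arbitrary byte windows, which is how a remote file can be made seekable. A toy illustration with an in-memory stand-in for the server (all names here are invented for the sketch; this is not fsspec's implementation):

```python
class ToyRangeServer:
    """Stands in for an HTTP server that honors Range requests."""
    def __init__(self, data: bytes):
        self._data = data

    def get_range(self, start: int, length: int) -> bytes:
        # Equivalent to: GET <url> with header "Range: bytes=start-(start+length-1)"
        return self._data[start:start + length]

class RangeBackedFile:
    """A seekable remote file: each read fetches only the needed byte window."""
    def __init__(self, server: ToyRangeServer):
        self._server = server
        self._pos = 0

    def seek(self, offset: int, whence: int = 0):
        # whence=0: absolute; whence=1: relative (what np.load uses to back up)
        self._pos = offset if whence == 0 else self._pos + offset

    def read(self, n: int) -> bytes:
        chunk = self._server.get_range(self._pos, n)
        self._pos += len(chunk)
        return chunk

f = RangeBackedFile(ToyRangeServer(b"0123456789"))
print(f.read(4))   # b'0123'
f.seek(-2, 1)      # back up two bytes, like np.load's magic-number rewind
print(f.read(4))   # b'2345'
```

Without range support (as with GitHub raw URLs), a client can only read the stream forward, so `seek` has to raise, as in the tracebacks above.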
Hey @albertvillanova
Thanks a lot for looking at this. That's awesome - I don't think I've ever seen such active and helpful support on any open-source repo.
I am a bit confused by your message. At the point where `np.load(f)` is called, the file should already be downloaded and `f` should point to a local directory, or is this not the case? Also, when regularly using the library to download the dataset, everything seems to work. But maybe I am underestimating the amount of logic that is triggered in the dataset preview implementation ;-)
Anyways, of course, if you could open a PR that would be highly appreciated, and I am fine with hosting a copy of the files here :-)
Thanks for the kind words, @mweiss.
Currently, the dataset viewer uses streaming mode under the hood: `load_dataset("mweiss/fashion_mnist_corrupted", split="train", streaming=True)`.
Therefore, when `np.load` is called in streaming mode, no data has been downloaded: the value of `filepath[0]` is `'https://raw.githubusercontent.com/testingautomated-usi/fashion-mnist-c/v1.0.0/generated/npy/fmnist-c-train.npy'`.
Also note that this is only an issue in streaming mode: your dataset works perfectly without streaming, which is the default behavior.
I'm opening a PR to show you an alternative with hosted data files.
Ahh, now I get it. That's cool, and makes a lot of sense :-)
See PR: #3
The viewer is working now.
Indeed. Again, thanks a lot! This was amazing service, certainly on a level I did not expect on an open-source platform :-)