Dataset card metadata — Task: Token Classification (sub-task: named-entity-recognition) · Language: English · Size: 1K<n<10K
Dataset Viewer issue: list index out of range (#3) — opened by albertvillanova (HF staff)
The dataset viewer is not working.
Error details:
Error code: StreamingRowsError
Exception: IndexError
Message: list index out of range
Traceback: Traceback (most recent call last):
  File "/src/services/worker/src/worker/utils.py", line 363, in get_rows_or_raise
    return get_rows(
  File "/src/services/worker/src/worker/utils.py", line 305, in decorator
    return func(*args, **kwargs)
  File "/src/services/worker/src/worker/utils.py", line 326, in get_rows
    ds = load_dataset(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1790, in load_dataset
    return builder_instance.as_streaming_dataset(split=split)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1260, in as_streaming_dataset
    splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
  File "/tmp/modules-cache/datasets_modules/datasets/species_800/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976/species_800.py", line 104, in _split_generators
    downloaded_files = dl_manager.download_and_extract(urls_to_download)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract
    urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 443, in map_nested
    mapped = [
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
    return function(data_struct)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract
    protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol
    with fsspec.open(urlpath, **kwargs) as f:
  File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
    return open_files(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
    out = super().__getitem__(item)
IndexError: list index out of range
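The last frames explain why this surfaces as an IndexError rather than a network error: fsspec expands the URL into a list of candidate files and takes the first element, so when the Google Drive link cannot be resolved during streaming the expansion is empty and indexing it raises "list index out of range". A minimal sketch of that mechanism — `open_files` and `fsspec_open` below are simplified stand-ins, not the real fsspec API:

```python
def open_files(urlpath):
    """Simplified stand-in for fsspec.core.open_files: it expands a URL
    into a list of file handles. An unresolvable Google Drive link is
    modeled here as an empty expansion."""
    return []

def fsspec_open(urlpath):
    # fsspec.open effectively returns open_files(urlpath)[0]; indexing
    # an empty list raises IndexError, which the viewer reports as
    # "list index out of range".
    return open_files(urlpath)[0]

try:
    fsspec_open("https://drive.google.com/u/0/uc?id=...&export=download/")
except IndexError as err:
    print(err)  # list index out of range
```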
Note that we get this error for source data files hosted on Google Drive.
See: https://github.com/huggingface/datasets/issues/5862#issuecomment-1591248797
Note also that the current source data URL is no longer valid: it returns a 404 Not Found error.
_URL = "https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/"
This is the same source URL that was used in the linnaeus dataset.
After investigating, I have found that this source data file corresponds to: http://nlp.dmis.korea.edu/projects/biobert-2020-checkpoints/NERdata.zip
From:
- Repo: https://github.com/dmis-lab/biobert
- Paper: "BioBERT: a pre-trained biomedical language representation model for biomedical text mining"
Fixed by #4.

albertvillanova changed discussion status to closed.