Dataset Viewer issue
The dataset viewer is not working, although the loading script works properly: the dataset can be downloaded with load_dataset, and the test of the script completes successfully (datasets-cli test).
Here are the error details:
Error code: ConfigNamesError
Exception: DatasetWithScriptNotSupportedError
Message: The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1481, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/src/worker/utils.py", line 400, in raise_unsupported_dataset_with_script_or_init
raise DatasetWithScriptNotSupportedError(
libcommon.exceptions.DatasetWithScriptNotSupportedError: The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.
Hi. The reason is that we no longer support dataset scripts in the dataset viewer. See https://discuss.huggingface.co/t/dataset-repo-requires-arbitrary-python-code-execution/59346/5
We had to disable the viewer for datasets with a script for now, because some people were abusing it. Sorry for the inconvenience.
In the meantime, if you want the dataset viewer to work, you need to remove the dataset script and use a supported data format (CSV, Parquet, etc.). Personally, I'd recommend uploading the dataset with the datasets library and push_to_hub().
Hi, thank you for the reply.
One question: the dataset has a split structure that does not match the standard ones (train/val/test) automatically detected by the dataset viewer, since the dataset is split into 5 folds plus a silver fold. The division into folds is necessary to preserve the k-fold integrity. If I push the dataset to the Hub, will it preserve such a structure?
The structure of your files is:
I think you can ensure the matching between split name and file name through the YAML header of the README file, as explained in https://huggingface.co/docs/datasets/repository_structure#splits
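For a layout like the one described (5 folds plus a silver fold), the YAML header in README.md might look roughly like this; the file names are assumptions based on the question, not the actual repository:

```yaml
configs:
- config_name: default
  data_files:
  - split: fold_1
    path: fold_1.parquet
  - split: fold_2
    path: fold_2.parquet
  - split: silver
    path: silver.parquet
```

Each entry maps a custom split name to the file (or glob pattern) holding its data.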
Thank you for the support. I've solved it via push_to_hub.