Dataset Viewer issue
Hi @albertvillanova @lhoestq @severo, can you please help us enable the dataset viewer for this dataset?
Error details:
Error code: ConfigNamesError
Exception: DatasetWithScriptNotSupportedError
Message: The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1481, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/src/worker/utils.py", line 400, in raise_unsupported_dataset_with_script_or_init
raise DatasetWithScriptNotSupportedError(
libcommon.exceptions.DatasetWithScriptNotSupportedError: The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.
Hi. The reason is that we no longer support dataset scripts in the dataset viewer. See https://discuss.huggingface.co/t/dataset-repo-requires-arbitrary-python-code-execution/59346/5
We had to disable the viewer for datasets with a script for now, because some people were abusing it. Sorry for the inconvenience.
In the meantime, if you want the dataset viewer to work, you need to remove the dataset script and use a supported data format (CSV, Parquet, etc.). Personally, I'd recommend uploading the dataset with the datasets library and push_to_hub().
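For example, a minimal sketch of that workflow (the file name, column name, and repo id below are illustrative, assuming a CSV metadata file with a column of relative audio paths):

from datasets import load_dataset, Audio

# Load the CSV metadata; "metadata.csv" and the "audio" column are assumptions.
ds = load_dataset("csv", data_files="metadata.csv", split="train")
# Cast the path column to an Audio feature so the files are embedded on upload.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
# Upload as Parquet shards; the dataset viewer can then render the result.
ds.push_to_hub("your-username/your-dataset")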
Hi @severo , thanks for your answer!
Our data is stored as plain text + audio + CSV metadata, so I guess the only way to get rid of the loading script would be to store the data as Parquet. We wanted to avoid that, as we think there is value in having the data in a format that's human-readable/editable (we can review potential PRs easily) and that can be easily loaded without necessarily using the datasets library.
Another reason is that we provide options to control audio loading: sampling_rate, mono and decode_audio, which are passed to datasets.Audio, and with_audio to allow skipping the audio column altogether. We think this is useful, and although we could probably emulate this by creating named configs ("subsets") for selected combinations of options, this would greatly inflate the repository size (unnecessarily, as these options are irrelevant for the dataset viewer) while being less user-friendly.
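(For concreteness, a hedged sketch of how these options are used; the repo id is hypothetical, and the keyword arguments are our script's own config options described above, forwarded by the loading script to datasets.Audio rather than standard load_dataset parameters:)

from datasets import load_dataset

ds = load_dataset(
    "org/asr-dataset",     # hypothetical repo id
    sampling_rate=16_000,  # forwarded to datasets.Audio
    mono=True,             # forwarded to datasets.Audio
    decode_audio=True,     # forwarded to datasets.Audio
    with_audio=True,       # set to False to skip the audio column entirely
)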
We hope you will consider enabling the dataset viewer for this dataset (as it is for other ASR datasets like Common Voice and VIVOS).
Thank you!
Hi!
Our data is stored as plain text + audio + CSV metadata, so I guess the only way to get rid of the loading script would be to store the data as Parquet. We wanted to avoid that, as we think there is value in having the data in a format that's human-readable/editable (we can review potential PRs easily) and that can be easily loaded without necessarily using the datasets library.
Storing these files alongside the Parquet version should be fine. You can also mention in the README that the Parquet version was generated from them (and have extra instructions on how to use them without datasets).
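For instance, one possible way to upload the raw files next to the generated Parquet data (the repo id and paths are illustrative, using huggingface_hub):

from huggingface_hub import HfApi

api = HfApi()
# Upload the raw text/audio/CSV files into their own directory so they sit
# alongside the Parquet files (you may need to declare the Parquet files as
# the data files in the README's configs section so the viewer uses them).
api.upload_folder(
    repo_id="org/asr-dataset",  # hypothetical repo id
    repo_type="dataset",
    folder_path="raw/",         # local folder with text + audio + CSV metadata
    path_in_repo="raw/",
)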
Another reason is that we provide options to control audio loading: sampling_rate, mono and decode_audio, which are passed to datasets.Audio, and with_audio to allow skipping the audio column altogether. We think this is useful, and although we could probably emulate this by creating named configs ("subsets") for selected combinations of options, this would greatly inflate the repository size (unnecessarily, as these options are irrelevant for the dataset viewer) while being less user-friendly.
There are no advantages to exposing sampling_rate, mono and decode_audio as config parameters over calling .cast_column("audio", datasets.Audio(...)) on the generated dataset. Also, one can drop the audio column by specifying columns (without audio) in load_dataset, so 4 configs (one for each language; all is the concatenation of them) would cover all the use cases.
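For illustration, a minimal sketch of both suggestions (the repo id, config name, and "text" column are assumptions; the columns parameter applies to Parquet-backed datasets):

from datasets import load_dataset, Audio

ds = load_dataset("org/asr-dataset", "all", split="train")  # hypothetical repo/config
# Equivalent of the sampling_rate/mono/decode_audio script options:
ds = ds.cast_column("audio", Audio(sampling_rate=16_000, mono=True, decode=True))

# Equivalent of with_audio=False: load only the listed columns.
ds_text_only = load_dataset("org/asr-dataset", "all", split="train", columns=["text"])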
@mariosasko Thanks, that's useful to know! I'll try this.