New dataset update causes bugs when downloading

#17 opened by OfekGlick

Hi,
I see that someone has pushed an update so that all files are now in Parquet form. However, this causes a problem when trying to download the dataset. The issue is that running the following code (using mnli as an example):

datasets.load_dataset('glue', 'mnli')

raises the following error:
Traceback (most recent call last):
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\IPython\core\interactiveshell.py", line 3526, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in <module>
    datasets.load_dataset('glue','stsb')
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\load.py", line 1797, in load_dataset
    builder_instance.download_and_prepare(
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\builder.py", line 890, in download_and_prepare
    self._download_and_prepare(
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\builder.py", line 985, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\builder.py", line 1706, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\splits.py", line 530, in __getitem__
    instructions = make_file_instructions(
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\arrow_reader.py", line 112, in make_file_instructions
    name2filenames = {
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\arrow_reader.py", line 113, in <dictcomp>
    info.name: filenames_for_dataset_split(
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\naming.py", line 70, in filenames_for_dataset_split
    prefix = filename_prefix_for_split(dataset_name, split)
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\site-packages\datasets\naming.py", line 54, in filename_prefix_for_split
    if os.path.basename(name) != name:
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\ntpath.py", line 242, in basename
    return split(p)[1]
  File "C:\Users\ofekg\anaconda3\envs\ml_training\lib\ntpath.py", line 211, in split
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

I see that the .py files (i.e. glue.py) have been deleted. Was that done on purpose or by accident?

Hi,
I've run into exactly the same problem. Any updates on how to solve it?

Hi! We removed the dataset script on purpose, to enable the security features of the datasets library on this dataset.

Please make sure you have a recent version of datasets to load glue:

pip install -U datasets

If you can't update datasets, you can still use a git revision of this dataset where the script is still present (e.g. pass revision="fd8e86499fa5c264fcaad392a8f49ddf58bf4037" to load_dataset).
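For example, a minimal sketch (mnli here is just an illustration; any GLUE config works the same way):

import datasets

# Pin the dataset repository to a commit that still contains glue.py,
# for environments where upgrading `datasets` is not possible.
ds = datasets.load_dataset(
    "glue",
    "mnli",
    revision="fd8e86499fa5c264fcaad392a8f49ddf58bf4037",
)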

Hi,

I've updated datasets to the most recent version, but the same error occurs.

If I use revision="fd8e86499fa5c264fcaad392a8f49ddf58bf4037" in load_dataset, the following error is thrown:
Found cached dataset glue (file:///home/yc538/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
Traceback (most recent call last):
  File "/local/scratch/yc538/project/test.py", line 3, in <module>
    dataset = load_dataset("glue", "sst2", revision="fd8e86499fa5c264fcaad392a8f49ddf58bf4037")
  File "/local/scratch/yc538/myenv/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset
    ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
  File "/local/scratch/yc538/myenv/lib/python3.11/site-packages/datasets/builder.py", line 1107, in as_dataset
    raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
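The "Found cached dataset" line above suggests that the old cached copy, built by an earlier datasets version, may be what's getting in the way. A possible workaround (a sketch only, not verified in this thread) is to force a re-download so the cache is rebuilt:

from datasets import load_dataset

# download_mode="force_redownload" ignores the locally cached copy
# and rebuilds the dataset from scratch.
ds = load_dataset(
    "glue",
    "sst2",
    revision="fd8e86499fa5c264fcaad392a8f49ddf58bf4037",
    download_mode="force_redownload",
)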

Hi!

revision="cb2099c76426ff97da7aa591cbd317d91fb5fcb7" works for me.

I think the issue is something with the config names or the conversion of the data to Parquet.

E.g. if you try (1), you obtain only ["ax"]. But if you pin the revision, as in (2), the list grows to all the configs ["cola", ..., "ax"] (a combined runnable sketch follows the two calls below).

(1) datasets.get_dataset_config_names("glue")
(2) datasets.get_dataset_config_names("glue", revision="cb2099c76426ff97da7aa591cbd317d91fb5fcb7")
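Put together as a runnable sketch:

import datasets

# (1) Without pinning a revision: only "ax" is returned (the reported bug).
print(datasets.get_dataset_config_names("glue"))

# (2) Pinning the pre-conversion revision: all GLUE configs are returned.
print(datasets.get_dataset_config_names(
    "glue", revision="cb2099c76426ff97da7aa591cbd317d91fb5fcb7"
))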

Best!

Hi @furrutiav ,

Note that, for security reasons, this dataset has been converted to the no-script Parquet format, and no-script Parquet datasets are only supported by recent versions of the datasets library.

We recommend that you update your installed datasets library, because newer versions include security/bug fixes and new features:

pip install -U datasets
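To check which version you ended up with after upgrading (a quick sanity check):

import datasets

# No-script Parquet datasets require a recent release of `datasets`.
print(datasets.__version__)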

Only in the exceptional case where it is not possible for you to update your datasets library, you can circumvent the issue by loading an older version of this dataset via the revision parameter. We have created a "script" branch as a convenience revision:

ds = load_dataset("glue", config_name, revision="script")
