Getting Dataset Viewer working with Custom Loading Script

#2 · opened by cwchen-cm
Crossing Minds Inc org

Hello,

I've been trying to host this dataset on Huggingface. This dataset has 4 distinct files because we think different people might want to use different parts of it and ignore the rest. For example, the "product_image_urls" are usable by themselves, and many users may not want to download "product_features" because it's almost 1 GB.

I've been trying unsuccessfully to get 2 things working at the same time:

  1. The Dataset Viewer to allow quick browsing/previewing of the dataset
  2. To allow loading of the dataset using datasets.load_dataset()

Initially I was just trying to get the Dataset Viewer working, so I did a bunch of editing of the Dataset Card (README.md) and moving/renaming the dataset files, and finally got it working where I could get a nice preview of each of the data files. But then when I tried to load_dataset() from a Python script, it failed. It seemed that even if I specified a config name in the second param, it still tried to download all the files, and it was trying to use the schema from the default config on all other configs too.

I did more reading, and came to understand that in order to get a multi-config dataset working with datasets.load_dataset() from a Python script, you have to write a custom dataset loading script. I looked at the Template loading script and found a couple of example datasets using a loading script with multiple configs (tyqiangz/multilingual-sentiments and Hello-SimpleAI/HC3).

I copied their loading scripts (based closely on the HF Template), modified it to run with my parquet files, and uploaded it to my dataset. Now I can do load_dataset() on one of the configs and it works in my Python script!

BUT now the Dataset Viewer doesn't work anymore; it says it can't allow "arbitrary Python code execution". I get that, but then I wonder why those two datasets (tyqiangz/multilingual-sentiments and Hello-SimpleAI/HC3) have a working Dataset Viewer. Is this by special approval? If so, can I get it too?

As an aside, I've found several other datasets that don't use a loading script; they just have all the configs specified in the Dataset Card/README.md. The Dataset Viewer works fine, BUT datasets.load_dataset() fails, I presume for reasons similar to the ones I ran into.

To make everything more confusing, I found ONE dataset that 1) doesn't use a loading script (it just specifies all the configs in the README.md), 2) Dataset Viewer works, and 3) datasets.load_dataset() works! It does download all the configs, though, even if you specify one.

All this is to say, it's been quite confusing trying to get my multi-config dataset (which is a very common thing in ML, I have to say) to be able to take full advantage of the HF Hub and toolchain. Thank you in advance for any pointers, tips, or approvals you can provide. Thank you!


👋 Before opening the discussion, have you considered removing the loading script and relying on automated data support?

You can use convert_to_parquet from the datasets library.


cc @albertvillanova @lhoestq @severo.

Hi @cwchen-cm ,

Due to security issues, we no longer recommend using a loading script, and the viewer is not supported on script-based datasets: https://huggingface.co/docs/datasets/dataset_script

A dataset loading script is likely not needed if your dataset is in one of the supported formats: CSV, JSON, JSON Lines, text, images, audio, or Parquet. With those formats, you should be able to load your dataset automatically with load_dataset(), as long as your dataset repository has the required structure.

Instead, we recommend uploading your data files directly in one of the supported formats; Parquet is among them.

In order to implement multiple configs, you just need to specify them in the README file, using the configs YAML tag. This is explained in our docs.
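For a multi-config Parquet dataset like this one, the configs section of the README.md YAML front matter might look roughly like the sketch below. The config names come from this thread; the file paths are illustrative and would need to match the actual repository layout:

```yaml
configs:
  - config_name: product_image_urls
    data_files:
      - split: train
        path: product_image_urls/*.parquet
  - config_name: product_features
    data_files:
      - split: train
        path: product_features/*.parquet
```

With something like this in place, load_dataset("crossingminds/shopping-queries-image-dataset", "product_image_urls") should download and parse only the files listed under that config.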

Crossing Minds Inc org
edited May 27

Thank you @albertvillanova for the quick response!

My files are in Parquet format. Now I've removed the dataset loader script, and configured the README.md according to the guides and followed the example of this dataset too. https://huggingface.co/datasets/abisee/cnn_dailymail

When I call

dataset = load_dataset("crossingminds/shopping-queries-image-dataset", "product_image_urls")

I get an error

Failed to read file '/Users/chingwei/.cache/huggingface/datasets/downloads/0ede10cdf37d35bddc644cff569d4b4b6db52e5d887c78135c98824e7fc282c1' with error <class 'ValueError'>: Couldn't cast
product_id: string
image_url: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 502
to
{'product_id': Value(dtype='string', id=None), 'clip_text_features': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'clip_image_features': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}
because column names don't match

It looks like it is trying to cast the data from the selected config ("product_image_urls") into the schema of a different config ("product_features").

Full traceback:


ValueError Traceback (most recent call last)
File ~/.pyenv/versions/3.10.12/envs/py3.10/lib/python3.10/site-packages/datasets/builder.py:1879, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1878 _time = time.time()
-> 1879 for _, table in generator:
1880 if max_shard_size is not None and writer._num_bytes > max_shard_size:

File ~/.pyenv/versions/3.10.12/envs/py3.10/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:82, in Parquet.generate_tables(self, files)
79 # Uncomment for debugging (will print the Arrow table size and elements)
80 # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
81 # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
---> 82 yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
83 except ValueError as e:

File ~/.pyenv/versions/3.10.12/envs/py3.10/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:61, in Parquet._cast_table(self, pa_table)
58 if self.info.features is not None:
59 # more expensive cast to support nested features with keys in a different order
60 # allows str <-> int/float or str to Audio for example
---> 61 pa_table = table_cast(pa_table, self.info.features.arrow_schema)
62 return pa_table

File ~/.pyenv/versions/3.10.12/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:2324, in table_cast(table, schema)
2323 if table.schema != schema:
-> 2324 return cast_table_to_schema(table, schema)
2325 elif table.schema.metadata != schema.metadata:
...
1911 e = e.context
-> 1912 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1914 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset

Could you please help me understand how to fix this? I suspect it's because the different configs have different schemas/columns? I have tried in the past to specify them with dataset_info in the README.md, but I had the same problem.

Also: the Dataset Viewer shows an error on one of the configs.

Thank you!

Crossing Minds Inc org
edited May 28

Now I've added the dataset_info fields to the README.md to specify the schema for each config. The error I get now when I call datasets.load_dataset() is:

Failed to read file '/Users/chingwei/.cache/huggingface/datasets/downloads/7d7a3d99e9747437d330610f65781c6b378a9b0b8db854e70a27e2625e3f829c' with error <class 'ValueError'>: Couldn't cast
product_id: string
clip_text_features: list<item: float>
  child 0, item: float
clip_image_features: list<item: float>
  child 0, item: float
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 480
to
{'product_id': Value(dtype='string', id=None), 'image_url': Value(dtype='string', id=None)}
because column names don't match

It's the same exact error when loading any of the configs, which makes me suspect that by default it's trying to load the first file, which is product_features, and is trying to cast it into the schema of the default config, which is product_image_urls. Specifying the config in the second param of load_dataset() doesn't seem to have any effect.

The Dataset Viewer looks great now, though. Please let me know how I can fix my Dataset so it can be loaded without a custom loading script. Thanks!

cc @lhoestq @severo @albertvillanova

Crossing Minds Inc org

@lhoestq @severo @albertvillanova Sorry to bother you again, but do you have any suggestions for me? For now, I will revert to my custom loading script so that the data can be loaded through datasets. But it would be really great if I could get the dataset viewer working as well.

Crossing Minds Inc org

Ah, interesting, once I reverted to the custom loading script, the problem seems to be fixed! The dataset viewer still works (perhaps it never updated since I didn't update any of the data files), while now loading with load_dataset() also works! This might explain how other datasets are able to use a custom loader and still have the dataset viewer.

Thank you for your great platform!

cwchen-cm changed discussion status to closed

Hi! This is a bug from a recent change; I'm fixing it right now (the loading script was being ignored). This means the Viewer will not work if you use a script.

Crossing Minds Inc org

Oh no! Well, I guess it was too good to be true. I would still love any advice you have to make my multi-config dataset work with both dataset viewer and load_dataset(). See above for the errors I get from load_dataset() when I have everything configured correctly for the dataset viewer.

Thanks for the update.

cwchen-cm changed discussion status to open

I think removing the script and the dataset_info should do the job actually. The configs YAML should be enough

Crossing Minds Inc org

I did try that before, and while the Dataset Viewer worked great, I was not able to load the dataset with datasets.load_dataset(). Has there been a change made to fix the problems I was seeing with load_dataset() on a multi-config dataset?

When I call

dataset = load_dataset("crossingminds/shopping-queries-image-dataset", "product_image_urls")

I get an error

Failed to read file '/Users/chingwei/.cache/huggingface/datasets/downloads/0ede10cdf37d35bddc644cff569d4b4b6db52e5d887c78135c98824e7fc282c1' with error <class 'ValueError'>: Couldn't cast
product_id: string
image_url: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 502
to
{'product_id': Value(dtype='string', id=None), 'clip_text_features': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'clip_image_features': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}
because column names don't match

It looks like it is trying to cast the data from the selected config ("product_image_urls") into the schema of a different config ("product_features").

It must have come from dataset_info somehow, since the Parquet file alone does have ['product_id', 'clip_text_features', 'clip_image_features']. Maybe a cache issue? What version of datasets are you using?

Crossing Minds Inc org

I'm using 2.13.1. Could this be the issue?

That's likely why it fails, yes. We fixed lots of bugs in the cache in 2.15 and added more improvements in recent versions.
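For anyone hitting the same thing, the fix looks roughly like this sketch: upgrade datasets past the cache fixes, then clear the stale cache entry. The cache directory name below follows the usual namespace___name pattern but is illustrative; check what is actually on disk before deleting anything:

```shell
# Upgrade to a datasets release with the cache fixes (run this yourself):
#   pip install -U "datasets>=2.15"

# datasets caches under this directory unless HF_DATASETS_CACHE overrides it
CACHE_DIR="${HF_DATASETS_CACHE:-$HOME/.cache/huggingface/datasets}"
echo "cache dir: $CACHE_DIR"

# Dropping the stale entry for one dataset (directory name is illustrative):
#   rm -rf "$CACHE_DIR/crossingminds___shopping-queries-image-dataset"
```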

Crossing Minds Inc org
edited Jun 27

Thank you, that fixed it! I've removed the custom loading script from this repo.

cwchen-cm changed discussion status to closed
