Review RixVox audio dataset

#1
by Lauler - opened
National Library of Sweden / KBLab org
•
edited Mar 6, 2023

Dataset review request for RixVox

I would very much appreciate it if you could take a look at my dataset script and test that it works as expected. It's my first time writing a custom dataset loading script. I followed your "Creating an audio dataset" guide, specifically the (Advanced) extract TAR archives locally section.

Description

RixVox is a newly created Swedish speech dataset comprising speeches from the Swedish Parliament (the Riksdag). Audio from the speeches has been aligned with the official transcripts at the sentence level using aeneas. Speaker metadata is available for each observation, including the speaker's name, gender, party, birth year, and electoral district. The dataset contains a total of 5493 hours of speech. An observation may consist of one or several sentences (up to 30 seconds in duration).

Here's a blog post covering some of the preliminary work; we'll write a separate post on how RixVox was created once we review the dataset here and make sure it's ready to announce.

Files to review

  • Review whether it's possible to load the individual splits and the "all" configuration (is there a way to test this with only a partial download of the train set?).
  • Dataset script: rixvox.py. Most ASR datasets on the Hugging Face Hub that I looked at for inspiration have more complex setups, with multiple languages and train/valid/test splits for each language. This one comprises only a single language and a single train/dev/test split. Can the script be written more simply? For example, I'm not really sure about the appropriate choice of name and split. Currently I load the dataset with dataset = load_dataset("KBLab/rixvox", name="dev", cache_dir="data"). The split argument seems superfluous.

Another question:

In your example for creating an audio dataset, you fill the values of missing fields with "":

for field in data_fields:
    if field not in row:
        row[field] = ""

Since I load a parquet file, there are no fields missing, but there are missing values in some of the fields. Is there any special significance to using "" for empty values, as opposed to, for example, None? I kept the missing values as None rather than converting them to "".
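
To illustrate with a minimal, hypothetical example (the rows and feature names below are made up): a string-typed feature in datasets accepts None, which is simply stored as a null in the underlying Arrow table.

from datasets import Dataset, Features, Value

# Hypothetical rows: the second speaker has no electoral district recorded.
rows = [
    {"name": "Anna Andersson", "district": "Stockholm"},
    {"name": "Bo Berg", "district": None},  # kept as None instead of ""
]

features = Features({"name": Value("string"), "district": Value("string")})
ds = Dataset.from_list(rows, features=features)
print(ds[1]["district"])  # None (a null value in the Arrow table)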

  • What should I name my folders/tarfiles so your dataset preview can successfully parse the split names and generate a preview?

  • Some feedback on your "Create an audio dataset" guide: The guide was very helpful and clear. It would be very nice, however, to also have a section on the proper way to debug an audio dataset locally. Do I use load_dataset() or load_dataset_builder()? How do I specify the arguments for local folders? I felt stupid and got confused trying to debug locally, so I ended up just uploading the dataset to the Hugging Face Hub and debugging through a bunch of commits instead.

cc @lhoestq @polinaeterna @mariosasko @albertvillanova

National Library of Sweden / KBLab org
•
edited Mar 8, 2023

I realize now I should probably only have 1 BuilderConfig for this dataset.

How can I allow users to choose which of the train, dev, or test sets they want to download and load with a single BuilderConfig? Is there a self.config.split, analogous to self.config.name, that I can use for this purpose?

When someone loads a configuration, it downloads and prepares all the associated splits. We don't support loading a specific split alone (except in streaming mode).
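
For example, a rough sketch of loading a single split in streaming mode:

from datasets import load_dataset

# Streaming: no full download or preparation; examples are fetched lazily
# as you iterate over the split.
ds = load_dataset("KBLab/rixvox", split="train", streaming=True)
print(next(iter(ds)))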

Some feedback on your "Create an audio dataset" guide: The guide was very helpful and clear. It would be very nice, however, to also have a section on the proper way to debug an audio dataset locally. Do I use load_dataset() or load_dataset_builder()? How do I specify the arguments for local folders? I felt stupid and got confused trying to debug locally, so I ended up just uploading the dataset to the Hugging Face Hub and debugging through a bunch of commits instead.

You can pass the path to your local dataset directory to load_dataset()
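
For example (a sketch; the path is a placeholder for wherever your local copy lives):

from datasets import load_dataset

# Point load_dataset() at the local directory containing rixvox.py
# to debug the loading script without uploading it to the Hub.
ds = load_dataset("path/to/rixvox", name="dev", cache_dir="data")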

What should I name my folders/tarfiles so your dataset preview can successfully parse the split names and generate a preview?

The error shown in the preview is:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 388, in get_dataset_config_info
    for split_generator in builder._split_generators(
  File "/tmp/modules-cache/datasets_modules/datasets/KBLab--rixvox/4d82872c01c87424971bd1013bc7c6e9c0820742b89b6af108267225eed87099/rixvox.py", line 150, in _split_generators
    "local_extracted_archive_paths": local_extracted_archives.get(self.config.name),
AttributeError: 'NoneType' object has no attribute 'get'

It happens because local_extracted_archives can be None; maybe use an empty dict instead when the dataset is streaming.
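
i.e. something like this (a sketch; archive_paths is assumed to come from an earlier dl_manager.download() call in _split_generators):

# In streaming mode nothing is extracted locally; fall back to an
# empty dict so that .get() is never called on None.
local_extracted_archives = (
    dl_manager.extract(archive_paths) if not dl_manager.is_streaming else {}
)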

Most ASR datasets on the Hugging Face Hub that I looked at for inspiration have more complex setups, with multiple languages and train/valid/test splits for each language. This one comprises only a single language and a single train/dev/test split. Can the script be written more simply? For example, I'm not really sure about the appropriate choice of name and split. Currently I load the dataset with dataset = load_dataset("KBLab/rixvox", name="dev", cache_dir="data"). The split argument seems superfluous.

In that case you don't even need to define configurations at all ;)
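
A single-config version of the split generators could then look roughly like this (a sketch under the same TAR-archive assumptions as the guide; _ARCHIVE_URLS is a hypothetical mapping of split names to archive URLs, and _info() / _generate_examples() are omitted):

import datasets

# Hypothetical: split name -> list of TAR archive URLs.
_ARCHIVE_URLS = {"train": [...], "dev": [...], "test": [...]}

class RixVox(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        archive_paths = dl_manager.download(_ARCHIVE_URLS)
        # Only extract locally outside of streaming mode.
        local_extracted = (
            dl_manager.extract(archive_paths) if not dl_manager.is_streaming else {}
        )
        return [
            datasets.SplitGenerator(
                name=split,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted.get(key),
                    "archives": [dl_manager.iter_archive(p) for p in archive_paths[key]],
                },
            )
            for split, key in [
                (datasets.Split.TRAIN, "train"),
                (datasets.Split.VALIDATION, "dev"),
                (datasets.Split.TEST, "test"),
            ]
        ]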

National Library of Sweden / KBLab org

@lhoestq Thanks for the answers and the feedback.

I simplified the script according to your suggestions, removing the BuilderConfigs altogether. It works perfectly now, with the preview and all.

I think I had my mind really set on wanting to allow users to download train, validation, and test independently (because the train set is so big), and for this reason set things up in a way that isn't standard for the datasets library.

Anyone who wants to explore the splits without downloading the whole thing can just use streaming mode instead.

Thanks for your help!

Lauler changed discussion status to closed
