Fix streaming mode bug

#4
by sanchit-gandhi - opened

Currently, streaming mode is broken for People's Speech. For example, when streaming the "train" split of the "clean" subset, we expect to iterate over ~12k hours of audio data, corresponding to 4481 audio shards. However, streaming terminates after just 1005 audio samples. Since the number of audio samples returned is less than the number of shards, we're clearly missing data:

from datasets import load_dataset

peoples_speech = load_dataset("MLCommons/peoples_speech", "clean", split="train", streaming=True)

# iterate over the entire 'train' split of the 'clean' subset
for i, sample in enumerate(peoples_speech):
    continue

# how many samples do we have?
print(i)

Print output:

1005

The problem is that we set local_extracted_archive = [None] * len(audio_archive_paths), where audio_archive_paths is a dictionary mapping each split to its list of audio shard paths. Calling len() on this dictionary counts its keys, so audio_archive_paths has length 3 (train, validation, test), and we always set local_extracted_archive = [None] * 3. Instead, the list should be as long as the number of shards in the given split, i.e. for the train split: local_extracted_archive = [None] * 4481, or equivalently local_extracted_archive = [None] * len(audio_archive_paths["train"]).
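To make the length mismatch concrete, here is a minimal sketch (the shard paths are made-up stand-ins, not the real filenames built by the loading script) contrasting the buggy and fixed expressions:

# hypothetical stand-in for the dict the loading script builds
audio_archive_paths = {
    "train": [f"train/shard_{i:04d}.tar" for i in range(4481)],
    "validation": ["validation/shard_0000.tar"],
    "test": ["test/shard_0000.tar"],
}

# buggy: len() of a dict counts its keys (the splits), not the shards
local_extracted_archive = [None] * len(audio_archive_paths)
print(len(local_extracted_archive))  # 3 -- one slot per split

# fixed: size the list to the number of shards in the requested split
local_extracted_archive = [None] * len(audio_archive_paths["train"])
print(len(local_extracted_archive))  # 4481 -- one slot per train shard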

This resulted in us iterating over only 3 archives in our streaming loop: https://huggingface.co/datasets/MLCommons/peoples_speech/blob/99bede64f2b5ba34e8d618c11ee53477f9277a29/peoples_speech.py#L214
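The truncation is silent because zip stops at the shorter of its iterables. Assuming the streaming loop pairs each archive with its local extraction path roughly like the linked line does, a sketch of the failure mode:

audio_archives = [f"shard_{i}" for i in range(4481)]  # hypothetical stand-ins
local_extracted_archive = [None] * 3                  # the buggy length

# zip exhausts the shorter iterable first, so only 3 of the
# 4481 archives are ever visited -- no error is raised
for archive, extracted_path in zip(audio_archives, local_extracted_archive):
    ...  # yield examples from this archive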

With the fix, we should be able to iterate over all 4481 shards.

Good catch! I think @polinaeterna can merge.


thank you!

polinaeterna changed pull request status to merged
