
Dataset Card for the VoxConverse dataset

VoxConverse is an audio-visual diarisation dataset consisting of multispeaker clips of human speech, extracted from YouTube videos. Updates and additional information about the dataset can be found on the dataset website.

Note: This dataset has been preprocessed with the diarizers library, which makes it directly compatible with diarizers for fine-tuning pyannote segmentation models.

Example Usage

from datasets import load_dataset
ds = load_dataset("diarizers-community/voxconverse")

print(ds)

gives:

DatasetDict({
    train: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 136
    })
    validation: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 18
    })
    test: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 16
    })
})
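Each row pairs an audio recording with parallel lists of segment annotations: `timestamps_start[i]`, `timestamps_end[i]`, and `speakers[i]` together describe one speech segment. As a minimal sketch of how to work with this schema (using a hand-made row with the same fields, since inspecting the real data requires downloading the audio), the three lists can be zipped into `(speaker, start, end)` segments:

```python
# Hypothetical row mirroring the dataset's schema (not real VoxConverse data).
row = {
    "timestamps_start": [0.0, 3.2, 5.1],
    "timestamps_end": [3.0, 5.0, 9.4],
    "speakers": ["spk00", "spk01", "spk00"],
}

# Pair each speaker label with its segment boundaries.
segments = list(zip(row["speakers"], row["timestamps_start"], row["timestamps_end"]))
for spk, start, end in segments:
    print(f"{spk}: {start:.1f}s -> {end:.1f}s")
```

The same pattern applies to any row of the loaded dataset, e.g. `ds["train"][0]`.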

Dataset source

Citation

@inproceedings{chung2020spot,
  title={Spot the conversation: speaker diarisation in the wild},
  author={Chung, Joon Son and Huh, Jaesung and Nagrani, Arsha and Afouras, Triantafyllos and Zisserman, Andrew},
  booktitle={Interspeech},
  year={2020}
}

Contribution

Thanks to @kamilakesbi and @sanchit-gandhi for adding this dataset.
