# tgrhn/speaker-segmentation-fine-tuned-voxconverse-en
VoxConverse is an audio-visual diarisation dataset consisting of multispeaker clips of human speech, extracted from YouTube videos. Updates and additional information about the dataset can be found on the dataset website.
Note: This dataset has been preprocessed using diarizers, which makes it directly usable with the diarizers library for fine-tuning pyannote segmentation models.
```python
from datasets import load_dataset

ds = load_dataset("diarizers-community/voxconverse")
print(ds)
```
gives:
```
DatasetDict({
    train: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 136
    })
    validation: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 18
    })
    test: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 16
    })
})
```
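Each row pairs an audio recording with three parallel lists: segment start times, end times, and the corresponding speaker labels. As a quick sanity check (a minimal sketch, assuming the timestamps are given in seconds), you can zip these lists to print the annotated segments of a single recording:

```python
from datasets import load_dataset

ds = load_dataset("diarizers-community/voxconverse")

# timestamps_start, timestamps_end, and speakers are parallel lists,
# one entry per annotated speech segment in the recording.
example = ds["train"][0]
for start, end, speaker in zip(
    example["timestamps_start"], example["timestamps_end"], example["speakers"]
):
    print(f"{speaker}: {start:.2f}s -> {end:.2f}s")
```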
If you use this dataset, please cite the original VoxConverse paper:

```bibtex
@inproceedings{chung2020spot,
  title={Spot the conversation: speaker diarisation in the wild},
  author={Chung, Joon Son and Huh, Jaesung and Nagrani, Arsha and Afouras, Triantafyllos and Zisserman, Andrew},
  booktitle={Interspeech},
  year={2020}
}
```
Thanks to @kamilakesbi and @sanchit-gandhi for adding this dataset.