---
license: cc-by-sa-4.0
task_categories:
  - audio-classification
size_categories:
  - 1M<n<10M
configs:
  - config_name: default
    data_files: '*.tar'
    default: true
---

# VoxCeleb2 - dev set

This is a copy of the VoxCeleb2 dev set in WebDataset format. The audio is kept as the original AAC-encoded files, without any transcoding. Refer to https://arxiv.org/abs/1806.05622 for more details about the dataset.

There are 1,092,009 samples covering 5,994 unique speakers. The dataset is split into 779 shards of ~100 MB each.

## Usage

```python
import io

import torchaudio
import webdataset as wds
from datasets import load_dataset


def decode_audio(sample):
    # wrap the raw AAC bytes in a file-like object for torchaudio.
    # decoding AAC requires the FFmpeg backend; refer to the torchaudio docs
    audio, fs = torchaudio.load(io.BytesIO(sample.pop("m4a")))
    # optionally resample the audio or apply other pre-processing here
    sample["audio"] = audio
    return sample


# using the webdataset library
ds = wds.WebDataset("https://huggingface.co/datasets/gaunernst/voxceleb2-dev-wds/resolve/main/voxceleb2-dev-{0000..0778}.tar")
ds = ds.map(decode_audio)
next(iter(ds))

# using the HF datasets library
ds = load_dataset("gaunernst/voxceleb2-dev-wds", split="train", streaming=True)
ds = ds.map(decode_audio)
next(iter(ds))
```
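The decode step above mentions optional resampling. A minimal sketch of folding resampling into the decode function, assuming your model expects a fixed rate (16 kHz is used here purely as an example):

```python
import io

import torchaudio
import torchaudio.functional as AF
import webdataset as wds

TARGET_SR = 16_000  # example target rate; use whatever your model expects


def decode_and_resample(sample):
    audio, fs = torchaudio.load(io.BytesIO(sample.pop("m4a")))
    if fs != TARGET_SR:
        # resample only when the source rate differs from the target rate
        audio = AF.resample(audio, orig_freq=fs, new_freq=TARGET_SR)
    sample["audio"] = audio
    sample["sampling_rate"] = TARGET_SR
    return sample


ds = wds.WebDataset("https://huggingface.co/datasets/gaunernst/voxceleb2-dev-wds/resolve/main/voxceleb2-dev-{0000..0778}.tar")
ds = ds.map(decode_and_resample)
```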

The original filenames are kept. In other words, if you download all shards and untar them, you will get exactly the same folder structure as the original dataset (plus extra `.cls` files containing a pre-defined speaker_id-to-integer mapping). You can also recover the original speaker ID and YouTube video ID from the `__key__` field.
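Since the keys mirror the original folder layout, a small helper along these lines can split `__key__` back into its parts (a sketch; the example key in the comment is illustrative, not taken from the dataset):

```python
def parse_key(key: str) -> tuple[str, str, str]:
    # keys follow the original <speaker_id>/<youtube_video_id>/<clip> layout,
    # e.g. "id00012/21Uxsk56VDQ/00001" (illustrative example)
    speaker_id, video_id, clip_id = key.split("/")
    return speaker_id, video_id, clip_id


# continuing from the `ds` defined in the usage snippet above
sample = next(iter(ds))
speaker_id, video_id, clip_id = parse_key(sample["__key__"])
```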

## Citation

```bibtex
@InProceedings{Chung18b,
  author    = "Chung, J.~S. and Nagrani, A. and Zisserman, A.",
  title     = "VoxCeleb2: Deep Speaker Recognition",
  booktitle = "INTERSPEECH",
  year      = "2018",
}
```