---
license: cc0-1.0
language:
- en
task_categories:
- text-to-speech
- text-generation
pretty_name: Common Voice 11 (en) Cleaned and Tokenized
size_categories:
- 10K<n<100K
---
A cleaned and tokenized version of the English subset of the Mozilla Common Voice 11 dataset.
Cleaning steps:
- Filtered to samples with more than 2 upvotes and fewer than 1 downvote (i.e., no downvotes)
- Trimmed non-voice audio from the start and end of each clip using a PyTorch VAD
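The cleaning steps above can be sketched as follows. `keep_sample` mirrors the vote filter directly (field names follow the Common Voice schema), while `trim_nonvoice` is only an illustrative stand-in: the actual pipeline used a PyTorch VAD, whereas this sketch uses a simple RMS energy threshold, and its `frame_ms` and `threshold` parameters are assumptions.

```python
import numpy as np

def keep_sample(example: dict) -> bool:
    # Keep samples with more than 2 upvotes and fewer than 1 downvote.
    return example["up_votes"] > 2 and example["down_votes"] < 1

def trim_nonvoice(waveform: np.ndarray, sample_rate: int,
                  frame_ms: int = 30, threshold: float = 0.01) -> np.ndarray:
    """Trim low-energy frames from both ends of a mono waveform.

    Energy-threshold stand-in for the PyTorch VAD used in the actual
    pipeline; frame size and threshold are illustrative values.
    """
    frame_len = max(1, int(sample_rate * frame_ms / 1000))
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))  # RMS per frame
    voiced = np.nonzero(energy > threshold)[0]
    if len(voiced) == 0:
        return waveform[:0]  # no voiced frames found
    start = voiced[0] * frame_len
    end = (voiced[-1] + 1) * frame_len
    return waveform[start:end]
```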
Tokenization steps:
- Audio tokenized with Meta's EnCodec neural codec
- Used the 24 kHz pre-trained model with a target bandwidth of 1.5 kbps
- Codes represented in text as the tokens `audio_token_0` through `audio_token_1023`
- Prompts constructed as `"text: <Common Voice transcript>\naudio: <audio tokens>"`
- Prompts tokenized with a GPT tokenizer whose vocabulary was extended with the audio tokens
- Tokenized prompts padded to length 1024 with the `eos_token`
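The prompt construction and padding steps can be sketched as below. The helper names, the separator-free join of audio tokens, and the placeholder eos id used in the test are illustrative assumptions; the real pipeline used the GPT tokenizer's own `eos_token` id.

```python
def build_prompt(transcript: str, codes: list) -> str:
    """Render EnCodec code indices (0-1023) as text tokens and build the prompt.

    Joining the audio tokens without separators is an assumption here;
    since each audio_token_N is a single added vocabulary item, the
    tokenizer can split them regardless.
    """
    audio_text = "".join(f"audio_token_{c}" for c in codes)
    return f"text: {transcript}\naudio: {audio_text}"

def pad_to_length(token_ids: list, eos_id: int, length: int = 1024) -> list:
    # Right-pad the tokenized prompt with eos_token ids to a fixed length,
    # truncating if it is already longer.
    return (list(token_ids) + [eos_id] * length)[:length]
```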
Each sample has three fields: `input_ids`, `attention_mask`, and `labels`. `input_ids` and `labels` both contain the tokenized prompt, and `attention_mask` is the corresponding attention mask.
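A sample can thus be assembled as in this sketch; the `make_sample` helper and the convention of masking out the eos padding are assumptions for illustration, not part of the dataset's documented pipeline.

```python
def make_sample(prompt_ids: list, eos_id: int, length: int = 1024) -> dict:
    """Build the three fields of one dataset sample from tokenized prompt ids.

    input_ids and labels hold the same padded token sequence; putting
    zeros over the padding in attention_mask is an assumption about the
    masking convention.
    """
    n = min(len(prompt_ids), length)
    input_ids = (list(prompt_ids) + [eos_id] * length)[:length]
    return {
        "input_ids": input_ids,
        "attention_mask": [1] * n + [0] * (length - n),
        "labels": list(input_ids),
    }
```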