---
license: mit
task_categories:
  - text-to-speech
language:
  - en
pretty_name: WhisperSpeech
---

# The WhisperSpeech Dataset

This dataset contains the data needed to train SPEAR-TTS-like text-to-speech models that use semantic tokens derived from the OpenAI Whisper speech recognition model.

We currently provide semantic and acoustic tokens for the LibriLight and LibriTTS datasets (English only).

## Acoustic tokens

- 24 kHz EnCodec at 6 kbps (8 quantizers)
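
As a quick sanity check, the 8-quantizer figure follows directly from the published EnCodec 24 kHz model parameters (a 320-sample hop and 1024-entry codebooks; treat these constants as assumptions, not part of this dataset's spec):

```python
# Sketch: why a 6 kbps EnCodec stream corresponds to 8 quantizers.
import math

SAMPLE_RATE = 24_000    # Hz (EnCodec 24 kHz model)
HOP_LENGTH = 320        # samples per codec frame (assumed model constant)
CODEBOOK_SIZE = 1024    # entries per quantizer codebook (assumed)
TARGET_BITRATE = 6_000  # bits per second (6 kbps)

frames_per_second = SAMPLE_RATE / HOP_LENGTH             # 75 frames/s
bits_per_token = math.log2(CODEBOOK_SIZE)                # 10 bits per token
bits_per_quantizer = frames_per_second * bits_per_token  # 750 bps per quantizer

num_quantizers = TARGET_BITRATE / bits_per_quantizer
print(int(num_quantizers))  # 8
```

So each codec frame carries 8 tokens, one per residual quantizer level, at 75 frames per second.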

## Semantic tokens

- Whisper tiny VQ bottleneck trained on a subset of LibriLight

## Available LibriLight subsets

- small/medium/large (following the original dataset division, but with large excluding speaker 6454)
- a separate ≈1300-hour single-speaker subset, based on speaker 6454 from the large subset, for training single-speaker TTS models

We plan to add more acoustic tokens from other codecs in the future.