---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: LibriSpeech ASR
---
# Distil Whisper: LibriSpeech ASR With Timestamps

This is a variant of the LibriSpeech ASR dataset, augmented to return pseudo-labelled Whisper transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by transcribing the input audio with the Whisper large-v2 model using greedy decoding and timestamp prediction. For information on how the original dataset was curated, refer to the original dataset card.
## Standalone Usage

First, install the latest version of the 🤗 Datasets package:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the `load_dataset` function:

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/librispeech_asr", "all")

# take the first sample of the validation set
sample = dataset["validation.clean"][0]
```
It can also be streamed directly from the Hub using Datasets' streaming mode. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk:

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/librispeech_asr", "all", streaming=True)

# take the first sample of the validation set
sample = next(iter(dataset["validation.clean"]))
```
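Because a streamed split is an iterable rather than an indexable dataset, standard iterator tools such as `itertools.islice` are the idiomatic way to pull a handful of samples without advancing further through the stream. A minimal sketch of the pattern, using a stand-in generator in place of the real streamed split so it runs offline (the field names here are illustrative, not the dataset's actual columns):

```python
from itertools import islice

# Stand-in for a streamed split: any iterable of sample dicts behaves the same way.
def fake_streamed_split():
    for i in range(1000):
        yield {"id": i, "text": f"utterance {i}"}

# Take the first three samples; the generator is only advanced three times,
# mirroring how streaming mode avoids downloading the whole dataset.
first_three = list(islice(fake_streamed_split(), 3))
print([s["id"] for s in first_three])  # [0, 1, 2]
```

The same `islice` call works unchanged on `dataset["validation.clean"]` when loaded with `streaming=True`.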
## Distil Whisper Usage

To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the Distil Whisper repository.
## License

This dataset is licensed under CC BY 4.0.