---
language:
- bn
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: whisper-small-bn
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: bn
      split: test
      args:
        language: bn
    metrics:
    - name: Test WER
      type: wer
      value: 35.14
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
## Usage
To transcribe audio samples, the model has to be used alongside a `WhisperProcessor`.

The `WhisperProcessor` is used to:
- Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
- Post-process the model outputs (converting them from tokens to text)
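
A minimal sketch of this pipeline is shown below. The repository id `"whisper-small-bn"` is a placeholder for this checkpoint's full Hub id, and the Common Voice sample is used purely for illustration (the dataset is gated on the Hub and requires accepting its terms):

```python
from datasets import Audio, load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Placeholder id: replace with this checkpoint's full Hub id
processor = WhisperProcessor.from_pretrained("whisper-small-bn")
model = WhisperForConditionalGeneration.from_pretrained("whisper-small-bn")

# Load one Bengali sample and resample to the 16 kHz Whisper expects
ds = load_dataset("mozilla-foundation/common_voice_11_0", "bn", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]

# Pre-process: raw waveform -> log-Mel spectrogram input features
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# Generate token ids, then post-process: token ids -> text
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```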
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
- The transcription always starts with the `<|startoftranscript|>` token
- The second token is the language token (e.g. `<|en|>` for English)
- The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
- In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction

Thus, a typical sequence of context tokens might look as follows:

```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```

Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
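
For illustration, the processor can report the context tokens it would force for Bengali transcription. A minimal sketch, using `openai/whisper-small` as a stand-in checkpoint (all Whisper processors share the same special-token set):

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# (position, token_id) pairs for the forced context tokens; position 0,
# <|startoftranscript|>, is supplied automatically as the decoder start token.
prompt_ids = processor.get_decoder_prompt_ids(language="bengali", task="transcribe")
print(processor.tokenizer.decode([tid for _, tid in prompt_ids]))
# -> <|bn|><|transcribe|><|notimestamps|>
```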
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly (note that `get_decoder_prompt_ids` is called on a `WhisperProcessor` instance, such as the `processor` loaded above):

```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```

Which forces the model to predict in English under the task of speech recognition.
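
Conversely, setting the attribute to `None` leaves the tokens un-forced, so the model predicts the language and task itself:

```python
# Un-force the context tokens: the model detects language and task on its own.
model.config.forced_decoder_ids = None
```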
## Training Data
- Common Voice 11.0 Bengali train split
- OpenSLR 53 Bengali train split
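
The WER reported in the metadata above (35.14 on the Common Voice 11.0 Bengali test split) can be reproduced with the `evaluate` library. A minimal sketch with toy strings; in practice `predictions` come from the model and `references` from the test split:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy example standing in for real model outputs and ground-truth transcripts
predictions = ["আমি ভাত খাই"]
references = ["আমি ভাত খাই"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")  # 0.00 for identical strings
```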
## BibTeX entry and citation info

```bibtex
@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```