---
language: 
- bn
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: whisper-small-bn
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: bn
      split: test
      args:
        language: bn
    metrics:
    - name: Test WER
      type: wer
      value: 35.14
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---

# Whisper

Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours 
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need 
for fine-tuning.

Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) 
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).


# Usage

To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).

The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)

The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens 
are a sequence of tokens given to the decoder at the start of the decoding process, in the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction

Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, to perform speech recognition, and not to predict timestamps.
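
If helpful, these context tokens can be inspected directly with the tokenizer. The snippet below is a minimal sketch and assumes the `openai/whisper-small` tokenizer (presumably the base checkpoint for this fine-tune); the four special tokens are ordinary vocabulary items with fixed ids:

```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small")

# Convert the context tokens to their vocabulary ids and back again
ids = tokenizer.convert_tokens_to_ids(
    ["<|startoftranscript|>", "<|en|>", "<|transcribe|>", "<|notimestamps|>"]
)
print(ids)
print(tokenizer.decode(ids))  # the four context tokens, concatenated
```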

These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at 
each position, which allows one to control the output language and task of the Whisper model. If they are un-forced, 
the Whisper model will automatically predict the output language and task itself.

The context tokens can be set accordingly, where `processor` is a loaded `WhisperProcessor` instance (`get_decoder_prompt_ids` is an instance method, so it is called on the processor rather than on the `WhisperProcessor` class):

```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```

This forces the model to predict in English for the task of speech recognition. To leave the language and task un-forced, set `model.config.forced_decoder_ids = None`.
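
Putting these pieces together, here is a minimal end-to-end transcription sketch, not a definitive recipe: the hub id `whisper-small-bn` is a placeholder for this model's actual repository path, the language is set to Bengali to match the fine-tune, and a test sample is drawn from Common Voice 11.0 (a gated dataset, so you must accept its terms on the Hugging Face Hub and be logged in):

```python
from datasets import Audio, load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Placeholder hub id -- substitute the actual repository path of whisper-small-bn
model_id = "whisper-small-bn"

processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Force Bengali transcription; set forced_decoder_ids to None to let the model
# predict the language and task itself
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="bengali", task="transcribe")

# Stream one sample from the Common Voice 11.0 Bengali test split and resample
# it to the 16 kHz rate Whisper's feature extractor expects
ds = load_dataset("mozilla-foundation/common_voice_11_0", "bn", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = next(iter(ds))["audio"]

# Pre-process: raw waveform -> log-Mel spectrogram features
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# Generate token ids, then post-process them back to text
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```

Streaming avoids downloading the full dataset, and the resampling step matters because Common Voice audio is stored at 48 kHz while Whisper's feature extractor expects 16 kHz input.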



## Training Data

The model was fine-tuned on the following Bengali corpora:

- Common Voice 11.0 Bengali (train split)
- OpenSLR 53 Bengali (train split)
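
For reference, here is a hedged sketch of loading these corpora with the `datasets` library. The exact preprocessing, filtering, and mixing used for this fine-tune are not specified in this card, the `SLR53` config name is an assumption, and depending on your `datasets` version the script-based `openslr` loader may require `trust_remote_code=True`:

```python
from datasets import load_dataset

# Train splits named above; Common Voice 11.0 is gated on the Hugging Face Hub
cv_bn = load_dataset("mozilla-foundation/common_voice_11_0", "bn", split="train")

# SLR53 is the Large Bengali ASR corpus on OpenSLR (config name assumed)
slr53 = load_dataset("openslr", "SLR53", split="train")
```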


### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```