---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- iw
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-large
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.0
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 5.4
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
      args:
        language: hi
    metrics:
    - name: Test WER
      type: wer
      value: 54.8
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---

# Whisper

[OpenAI's Whisper](https://openai.com/blog/whisper/)

The Whisper model was proposed in [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.

**Disclaimer**: The content of **this** model card was written by the Hugging Face team, and parts of it were copied from the original model card.


## Intro

The first paragraphs of the abstract read as follows:

> We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning.
> When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.

The original code repository can be found [here](https://github.com/openai/whisper).

## Model details

The Whisper models are trained for speech recognition and translation tasks, capable of transcribing speech audio into text in the language in which it is spoken (ASR) as well as translating it into English (speech translation). Researchers at OpenAI developed the models to study the robustness of speech processing systems trained under large-scale weak supervision. There are 9 models of different sizes and capabilities, summarised in the following table.

| Size | Parameters | English-only model | Multilingual model |
|:------:|:----------:|:------------------:|:------------------:|
| tiny | 39 M | ✓ | ✓ |
| base | 74 M | ✓ | ✓ |
| small | 244 M | ✓ | ✓ |
| medium | 769 M | ✓ | ✓ |
| large | 1550 M | | ✓ |
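
All of the checkpoints in the table are published on the Hugging Face Hub under the `openai` namespace, with English-only checkpoints carrying a `.en` suffix (e.g. `openai/whisper-tiny.en`). As a minimal sketch, any of them can be loaded in place of the large checkpoint used in the examples below:

```python
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor

>>> # substitute any size from the table, e.g. "openai/whisper-tiny",
>>> # "openai/whisper-base.en" (English-only) or "openai/whisper-large" (multilingual only)
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
```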


## Model description

Whisper is an auto-regressive, encoder-decoder automatic speech recognition model that was trained on 680,000 hours of 16 kHz sampled multilingual audio. It was fully trained in a supervised manner, with multiple tasks:

- English transcription
- Any-to-English speech translation
- Non-English transcription
- No speech prediction

Each task corresponds to a sequence of tokens that is given to the decoder as *context tokens*. A transcription always starts with `<|startoftranscript|>`, which is why the `decoder_start_token` is always set to `tokenizer.encode("<|startoftranscript|>")`. The following token should be the language token, which is automatically detected in the original code. Finally, the task is defined using either `<|transcribe|>` or `<|translate|>`. In addition, a `<|notimestamps|>` token is added if the task does not include timestamp prediction.
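
As a small illustrative sketch, these special context tokens can be inspected directly with the tokenizer of this checkpoint (only the id 50258 for `<|startoftranscript|>`, also used in the transcription example below, is shown explicitly; the other ids are checkpoint-specific):

```python
>>> from transformers import WhisperTokenizer

>>> tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large")

>>> # the decoder always starts from <|startoftranscript|>
>>> tokenizer.convert_tokens_to_ids("<|startoftranscript|>")
50258

>>> # language, task and (optional) <|notimestamps|> tokens follow, e.g. for a French transcription
>>> tokenizer.convert_tokens_to_ids(["<|fr|>", "<|transcribe|>", "<|notimestamps|>"])  # returns the three corresponding ids
```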


# Usage

To transcribe or translate audio files, the model has to be used together with a `WhisperProcessor`. The `WhisperProcessor.get_decoder_prompt_ids` function is used to get a list of `(idx, token)` tuples, which can either be set in the config or passed directly to the `generate()` function as `forced_decoder_ids`.
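
For example, the following minimal sketch shows the `(idx, token)` pairs produced for a French transcription prompt (the numeric ids are checkpoint-specific, so only the decoded token strings are shown):

```python
>>> from transformers import WhisperProcessor

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")

>>> # (position, token_id) pairs forcing the language, task and no-timestamps
>>> # context tokens at the first decoder steps after <|startoftranscript|>
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="fr", task="transcribe")
>>> processor.tokenizer.convert_ids_to_tokens([token_id for _, token_id in forced_decoder_ids])
['<|fr|>', '<|transcribe|>', '<|notimestamps|>']
```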


## Transcription

In the following example, English speech is transcribed with the multilingual `openai/whisper-large` checkpoint; the `decoder_input_ids` are set accordingly.


### English to English

The "<|en|>" token is used to specify that the speech is in English and should be transcribed to English.

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> import torch

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

>>> # load dummy dataset and read soundfiles
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_features = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt").input_features

>>> # generate logits for the first decoder step, starting from <|startoftranscript|> (id 50258)
>>> logits = model(input_features, decoder_input_ids=torch.tensor([[50258]])).logits
>>> # take argmax and decode: the model predicts the language token
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
['<|en|>']
```

### French to French

In order to obtain the full transcription, the `generate()` function is used. The following example demonstrates a French-to-French transcription.

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

>>> # load streaming dataset and read the first audio sample, resampled to 16 kHz
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]["array"]
>>> model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="fr", task="transcribe")
>>> input_features = processor(input_speech, sampling_rate=16_000, return_tensors="pt").input_features
>>> predicted_ids = model.generate(input_features)
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']

>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```

## Translation

The "<|translate|>" token is used as the task token in the decoder prompt to specify the translation task.

### French to English

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

>>> # load streaming dataset and read the first audio sample, resampled to 16 kHz
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]["array"]
>>> # extract log-Mel input features and build the translation prompt
>>> input_features = processor(input_speech, sampling_rate=16_000, return_tensors="pt").input_features
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="fr", task="translate")

>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A real interesting work will be done on this subject.']
```

## Evaluation

This code snippet shows how to evaluate **openai/whisper-large** on LibriSpeech's "clean" test data (the "other" split can be evaluated the same way by changing the dataset config).

```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from jiwer import wer


>>> librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large").to("cuda")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")

>>> def map_to_pred(batch):
...     input_features = processor(batch["audio"]["array"], sampling_rate=batch["audio"]["sampling_rate"], return_tensors="pt").input_features
...     with torch.no_grad():
...         predicted_ids = model.generate(input_features.to("cuda"))
...     transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[0]
...     batch["text"] = processor.tokenizer._normalize(batch["text"])
...     batch["transcription"] = transcription
...     return batch

>>> result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])

>>> print("WER:", wer(result["text"], result["transcription"]))
0.030003583080317572
```


### Evaluated Use

The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.

The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization, but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.

In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.


## Training Data

The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.

As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.


## Performance and Limitations

Our studies show that, compared to many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English, and that accuracy on speech recognition and translation is near the state-of-the-art level.

However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.

Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).

In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.


## Broader Implications

We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.


### BibTeX entry and citation info

*Since no official citation was provided, we use the following in the meantime*
```bibtex
@misc{radford2022whisper,
  title={Robust Speech Recognition via Large-Scale Weak Supervision},
  author={Alec Radford and Jong Wook Kim and Tao Xu and Greg Brockman and Christine McLeavey and Ilya Sutskever},
  year={2022},
  url={https://cdn.openai.com/papers/whisper.pdf},
}
```