# Distil-Whisper: distil-large-v3-fr

Distil-Whisper for English Automatic Speech Recognition (ASR) was proposed in the paper [Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430).

This is a knowledge-distilled version of OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) for French ASR.

The result is a distilled model that performs within **2% WER of whisper-large-v3** on out-of-distribution evaluation sets for both short-form and long-form transcription. Moreover, it is **5.9x** faster than whisper-large-v3 and **1.3x** faster than the tiniest version of Whisper, while being far more accurate.

| Model | Params (M) | Rel. Latency | Short-Form WER | Long-Form WER |
| :--------------------- | :--------: | :----------: | :------------: | :-----------: |
| whisper-tiny | 37.8 | 4.7 | 43.24 | 28.28 |
| whisper-base | 72.6 | 3.7 | 30.48 | 19.23 |
| whisper-small | 242 | 2.3 | 16.36 | 12.47 |
| whisper-medium | 764 | 1.3 | 11.53 | 10.77 |
| whisper-large-v3 | 1540 | 1.0 | 7.84 | 9.07 |
| **distil-large-v3-fr** | **756** | **5.9** | **9.36** | **11.47** |

*Latencies were benchmarked generating 128 tokens on an A100 40GB with a batch size of 1. See the [inference speed](#inference-speed) section for more details.
*WERs are averaged over the test sets. See the [short-form](#short-form) and [long-form](#long-form) results sections for more details.

## Transformers Usage

distil-large-v3-fr is supported in the Hugging Face 🤗 Transformers library from version 4.41 onwards. To run the model, first install the latest version of Transformers. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub:

```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate datasets[audio]
```

### Short-Form Transcription

The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class to transcribe short-form audio files (< 30 seconds) as follows:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "eustlb/distil-large-v3-fr"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("google/fleurs", "fr_fr", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

result = pipe(sample)
print(result["text"])
```

To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:

```diff
- result = pipe(sample)
+ result = pipe("audio.mp3")
```

For segment-level timestamps, pass the argument `return_timestamps=True` and return the `"chunks"` output:

```python
result = pipe(sample, return_timestamps=True)
print(result["chunks"])
```

<details>

<summary> For more control over the generation parameters, use the model + processor API directly: </summary>

Ad-hoc generation arguments can be passed to `model.generate`, including `num_beams` for beam-search, `return_timestamps` for segment-level timestamps, and `prompt_ids` for prompting. See the [docstrings](https://huggingface.co/docs/transformers/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate) for more details.

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "eustlb/distil-large-v3-fr"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

dataset = load_dataset("google/fleurs", "fr_fr", split="train", streaming=True)
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = next(iter(dataset))["audio"]

input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

input_features = input_features.to(device, dtype=torch_dtype)

gen_kwargs = {
    "max_new_tokens": 128,
    "num_beams": 1,
    "return_timestamps": False,
}

pred_ids = model.generate(input_features, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=gen_kwargs["return_timestamps"])

print(pred_text)
```

</details>

### Sequential Long-Form

distil-large-v3-fr is compatible with OpenAI's sequential long-form transcription algorithm. This algorithm uses a sliding window for buffered inference of long audio files (> 30 seconds), and returns more accurate transcriptions compared to the [chunked long-form algorithm](#chunked-long-form).

The sequential long-form algorithm should be used in either of the following scenarios:
1. Transcription accuracy is the most important factor, and latency is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate

If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm described [below](#chunked-long-form). For a detailed explanation of the different algorithms, refer to Section 5 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf).

The [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class can be used to transcribe long audio files with the sequential algorithm as follows:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "eustlb/distil-large-v3-fr"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("eustlb/french-long-form-test", split="test", streaming=True)
sample = next(iter(dataset))["audio"]

result = pipe(sample)
print(result["text"])
```

<details>

<summary> For more control over the generation parameters, use the model + processor API directly: </summary>

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "eustlb/distil-large-v3-fr"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

dataset = load_dataset("eustlb/french-long-form-test", split="test", streaming=True)
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = next(iter(dataset))["audio"]

inputs = processor(
    sample["array"],
    sampling_rate=sample["sampling_rate"],
    return_tensors="pt",
    truncation=False,
    padding="longest",
    return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)

gen_kwargs = {
    "max_new_tokens": 448,
    "num_beams": 1,
    "condition_on_prev_tokens": False,
    "compression_ratio_threshold": 1.35,  # zlib compression ratio threshold (in token space)
    "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
    "logprob_threshold": -1.0,
    "no_speech_threshold": 0.6,
    "return_timestamps": True,
}

pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)

print(pred_text)
```

</details>

### Chunked Long-Form

distil-large-v3-fr remains compatible with the Transformers chunked long-form algorithm. This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances, the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf)).

To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For distil-large-v3-fr, a chunk length of 25 seconds is optimal. To activate batching over long audio files, pass the argument `batch_size`:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset


device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "eustlb/distil-large-v3-fr"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=25,
    batch_size=16,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("eustlb/french-long-form-test", split="test", streaming=True)
sample = next(iter(dataset))["audio"]

result = pipe(sample)
print(result["text"])
```

### Speculative Decoding

distil-large-v3-fr can be used as an assistant model to Whisper large-v3 for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding). Speculative decoding mathematically ensures that exactly the same outputs as Whisper are obtained, while being 2 times faster. This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.

In the following code snippet, we load the assistant Distil-Whisper model standalone, alongside the main Whisper model for the pipeline. We then specify it as the "assistant model" for generation:

```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

assistant_model_id = "eustlb/distil-large-v3-fr"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("google/fleurs", "fr_fr", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

result = pipe(sample)
print(result["text"])
```

For more details on speculative decoding, refer to the blog post [Speculative Decoding for 2x Faster Whisper Inference](https://huggingface.co/blog/whisper-speculative-decoding).

### Additional Speed & Memory Improvements

You can apply additional speed and memory improvements to Distil-Whisper to further reduce inference time and VRAM requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a more efficient flash attention version.

#### Flash Attention 2

We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):

```bash
pip install flash-attn --no-build-isolation
```

Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="flash_attention_2")
```

#### Torch Scaled Dot-Product Attention (SDPA)

If your GPU does not support Flash Attention, we recommend making use of PyTorch [scaled dot-product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html). This attention implementation is activated **by default** for PyTorch versions 2.1.1 or greater. To check whether you have a compatible PyTorch version, run the following Python code snippet:

```python
from transformers.utils import is_torch_sdpa_available

print(is_torch_sdpa_available())
```

If the above returns `True`, you have a valid version of PyTorch installed and SDPA is activated by default. If it returns `False`, you need to upgrade your PyTorch version according to the [official instructions](https://pytorch.org/get-started/locally/).

Once a valid PyTorch version is installed, SDPA is activated by default. It can also be set explicitly by specifying `attn_implementation="sdpa"` as follows:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="sdpa")
```

For more information about how to use SDPA, refer to the [Transformers SDPA documentation](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention).

#### Torch compile

Coming soon...

#### 4-bit and 8-bit Inference

Coming soon...

## Library Integrations

### Whisper.cpp

distil-large-v3-fr can be run with the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) package using the original sequential long-form transcription algorithm. In a provisional benchmark on Mac M1, the English distil-large-v3 runs over 5x faster than Whisper large-v3, while performing to within 0.8% WER over long-form audio.

Steps for getting started:

1. Clone the Whisper.cpp repository:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
```
2. Install the Hugging Face Hub Python package:
```bash
pip install --upgrade huggingface_hub
```
And download the GGML weights for distil-large-v3-fr using the following Python snippet:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id='eustlb/distil-large-v3-fr-ggml', filename='ggml-distil-large-v3-fr.bin', local_dir='./models')
```

Note that if you do not have a Python environment set up, you can also download the weights directly with `wget`:

```bash
wget https://huggingface.co/eustlb/distil-large-v3-fr-ggml/resolve/main/ggml-distil-large-v3-fr.bin -P ./models
```

### Transformers.js

Distil-Whisper can be run completely in your web browser with [Transformers.js](http://github.com/xenova/transformers.js):

1. Install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers):

```bash
npm i @xenova/transformers
```

2. Import the library and perform inference with the pipeline API.

```js
import { pipeline } from '@xenova/transformers';

const transcriber = await pipeline('automatic-speech-recognition', 'eustlb/distil-large-v3-fr');

const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// { text: " And so, my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
```

Refer to the Transformers.js [docs](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline) for further information.

## Data

distil-large-v3-fr is trained on 4,515 hours of audio data from three open-source, permissively licensed speech datasets on the Hugging Face Hub:

| Dataset | Size / h | Speakers | Domain | Licence |
| --------------------------------------------------------------------------------------------- | -------- | -------- | ------------------ | ----------------------------------------------------------- |
| [Common Voice 17](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) | 1,014 | unknown | Narrated Wikipedia | [CC0-1.0](https://choosealicense.com/licenses/cc0-1.0/) |
| [MultiLingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) | 1,077 | 142 | Audiobook | [CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/) |
| [YODAS fr000 split](https://huggingface.co/datasets/espnet/yodas) | 2,424 | unknown | YouTube | [CC-BY-3.0](https://creativecommons.org/licenses/by/3.0/) |
| **Total** | 4,515 | 142+ | | |

The audio data is then pseudo-labelled using the Whisper large-v3 model: we use Whisper to generate predictions for all the audio in our training set and use these as the target labels during training. Using pseudo-labels ensures that the transcriptions are consistently formatted across datasets and provides a sequence-level distillation signal during training.

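The snippet below is an illustrative sketch of this pseudo-labelling step, not the exact training code (which lives in the [distil-whisper training repository](https://github.com/huggingface/distil-whisper/tree/main/training)): it runs Whisper large-v3 over a few training examples and keeps the prediction as the target label. The dataset choice and generation arguments here are assumptions for illustration only.

```python
import torch
from transformers import pipeline
from datasets import load_dataset, Audio

# teacher model used to generate the pseudo-labels
pseudo_labeller = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Common Voice is gated: you need to accept the dataset terms on the Hub first
dataset = load_dataset("mozilla-foundation/common_voice_17_0", "fr", split="train", streaming=True)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

for example in dataset.take(5):
    # the Whisper prediction becomes the training target in place of the human transcript
    pseudo_label = pseudo_labeller(
        example["audio"], generate_kwargs={"language": "fr", "task": "transcribe"}
    )["text"]
    print(pseudo_label)
```
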
## WER Filter

The Whisper pseudo-label predictions are subject to mis-transcriptions and hallucinations. To ensure we only train on accurate pseudo-labels, we employ a simple WER heuristic during training. First, we normalise the Whisper pseudo-labels and the ground truth labels provided by each dataset. We then compute the WER between these labels. If the WER exceeds a specified threshold, we discard the training example. Otherwise, we keep it for training.

For this training, we chose a WER threshold of 20%, resulting in an effective training set of 2,110 hours (750 for [Common Voice 17](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0), 1,040 for [MultiLingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and 320 for [YODAS fr000 split](https://huggingface.co/datasets/espnet/yodas)).

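As a minimal sketch of this heuristic (assuming the `jiwer` package for WER computation and the language-agnostic `BasicTextNormalizer` shipped with 🤗 Transformers; the exact filtering code lives in the distil-whisper training scripts):

```python
from jiwer import wer
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()  # lower-cases text and strips punctuation
WER_THRESHOLD = 0.20  # 20%, as used for this model

def keep_example(ground_truth: str, pseudo_label: str) -> bool:
    """Return True if the Whisper pseudo-label is close enough to the dataset transcript."""
    norm_ref = normalizer(ground_truth)
    norm_hyp = normalizer(pseudo_label)
    if not norm_ref.strip():
        return False  # nothing to compare against, discard the example
    return wer(norm_ref, norm_hyp) <= WER_THRESHOLD
```
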
Section 9.2 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430) demonstrates the effectiveness of this filter for improving downstream performance of the distilled model. We also partially attribute Distil-Whisper's robustness to hallucinations to this filter.

## Training

The model was trained for 18,000 optimisation steps (or 14 epochs) with a batch size of 256. We saved the best model, based on the global WER score on the validation splits, reached after 14,000 optimisation steps (or 11 epochs). The two decoder layers were initialised from distil-large-v3 to leverage language transfer from English to French (more details [here](https://github.com/huggingface/distil-whisper/tree/main/training#22-language-transfer)).

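As a rough, hypothetical sketch of what this initialisation amounts to (the actual setup is in the training repository linked above, and the frozen encoder is an assumption based on the standard Distil-Whisper recipe): the student starts from distil-large-v3, which already pairs the whisper-large-v3 encoder with two decoder layers, and only the decoder is then distilled on French data.

```python
from transformers import WhisperForConditionalGeneration

# student initialised from the English distil-large-v3 checkpoint:
# full whisper-large-v3 encoder + 2 decoder layers
student = WhisperForConditionalGeneration.from_pretrained("distil-whisper/distil-large-v3")
print(student.config.decoder_layers)  # 2

# keep the encoder frozen so that only the 2 decoder layers are trained on French data
for param in student.model.encoder.parameters():
    param.requires_grad = False
```
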
## Results

The distilled model performs to within 1% WER of Whisper large-v3 on out-of-distribution (Voxpopuli, Fleurs) short-form audio, and within 2.5% WER on out-of-distribution sequential long-form decoding.

### Short-Form

| Model Name | RTF | Common Voice 17 | Multilingual Librispeech | Voxpopuli | Fleurs |
| :----------------: | :-----: | :-------------: | :----------------------: | :-------: | :----: |
| distil-large-v3-fr | 319.543 | 12.726 | 5.823 | 10.808 | 8.067 |
| whisper-tiny | 280.576 | 56.757 | 37.512 | 32.505 | 46.173 |
| whisper-base | 261.235 | 42.447 | 25.2 | 26.434 | 27.851 |
| whisper-small | 249.676 | 22.469 | 14.097 | 14.61 | 14.283 |
| whisper-medium | 170.9 | 15.432 | 9.602 | 11.92 | 9.155 |
| whisper-large-v3 | 150.719 | 11.024 | 4.783 | 9.948 | 5.624 |

*The above WERs are computed on the test splits of each dataset; RTF denotes the real-time factor.

### Long-Form

| Model Name | RTF | WER on [long-form test set](https://huggingface.co/datasets/speech-recognition-community-v2/dev_data) |
| :----------------: | :-----: | :--------------------------------------------------------------------------------------------: |
| distil-large-v3-fr | 176.626 | 11.467 |
| whisper-tiny | 125.367 | 28.277 |
| whisper-base | 110.139 | 19.228 |
| whisper-small | 83.417 | 12.467 |
| whisper-medium | 56.677 | 10.772 |
| whisper-large-v3 | 41.805 | 9.073 |

### Inference speed

Reported latencies were benchmarked on an NVIDIA A100 40GB, generating 128 tokens with SDPA attention, bfloat16 precision, 3 warmup steps, 5 measurement runs, and a single beam. The benchmarking script can be found here. The time measured is the time to run one forward pass of the encoder and 128 autoregressive forward passes of the decoder.

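A simplified sketch of such a measurement (an assumed setup for illustration, not the exact benchmarking script): force exactly 128 decoder steps with `min_new_tokens`/`max_new_tokens`, then time generation after warmup.

```python
import time
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

device = "cuda:0"
model_id = "eustlb/distil-large-v3-fr"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, attn_implementation="sdpa"
).to(device)
processor = AutoProcessor.from_pretrained(model_id)

# 30 s of silence at 16 kHz stands in for a real sample (batch size 1)
dummy_audio = torch.zeros(16_000 * 30).numpy()
input_features = processor(
    dummy_audio, sampling_rate=16_000, return_tensors="pt"
).input_features.to(device, dtype=torch.bfloat16)

gen_kwargs = {"min_new_tokens": 128, "max_new_tokens": 128, "num_beams": 1}

for _ in range(3):  # warmup steps
    model.generate(input_features, **gen_kwargs)

torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(5):  # measurement runs
    model.generate(input_features, **gen_kwargs)
torch.cuda.synchronize()

print(f"{(time.perf_counter() - start) / 5:.3f} s to generate 128 tokens")
```
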
<p align="center">
  <img src="figures/relative_latencies.png" alt="latencies" width="80%">
</p>

## Reproducing Distil-Whisper

Training and evaluation code to reproduce Distil-Whisper is available under the Distil-Whisper repository: https://github.com/huggingface/distil-whisper/tree/main/training

## License

distil-large-v3-fr inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model.

## Citation

If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430):

```
@misc{gandhi2023distilwhisper,
      title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
      author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
      year={2023},
      eprint={2311.00430},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Acknowledgements

* OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v3), in particular Jong Wook Kim for the [original codebase](https://github.com/openai/whisper) and training discussions
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration
* [Georgi Gerganov](https://huggingface.co/ggerganov) for the whisper.cpp integration
* [Joshua Lochner](https://huggingface.co/xenova) for the Transformers.js integration
* [Laurent Mazare](https://huggingface.co/lmz) for the Candle integration
* [Vaibhav Srivastav](https://huggingface.co/reach-vb) for Distil-Whisper distribution
* [Raghav Sonavane](https://huggingface.co/rsonavane/distil-whisper-large-v2-8-ls) for an early iteration of Distil-Whisper on the LibriSpeech dataset