#!/usr/bin/env python
# coding: utf-8

# # Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers

# In this Colab, we present a step-by-step guide on how to fine-tune Whisper 
# for any multilingual ASR dataset using Hugging Face 🤗 Transformers. This is a 
# more "hands-on" version of the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). 
# For a more in-depth explanation of Whisper, the Common Voice dataset and the theory behind fine-tuning, the reader is advised to refer to the blog post.

# ## Introduction

# Whisper is a pre-trained model for automatic speech recognition (ASR) 
# published in [September 2022](https://openai.com/blog/whisper/) by the authors 
# Alec Radford et al. from OpenAI. Unlike many of its predecessors, such as 
# [Wav2Vec 2.0](https://arxiv.org/abs/2006.11477), which are pre-trained 
# on un-labelled audio data, Whisper is pre-trained on a vast quantity of 
# **labelled** audio-transcription data, 680,000 hours to be precise. 
# This is an order of magnitude more data than the un-labelled audio data used 
# to train Wav2Vec 2.0 (60,000 hours). What is more, 117,000 hours of this 
# pre-training data is multilingual ASR data. This results in checkpoints 
# that can be applied to over 96 languages, many of which are considered 
# _low-resource_.
# 
# When scaled to 680,000 hours of labelled pre-training data, Whisper models 
# demonstrate a strong ability to generalise to many datasets and domains.
# The pre-trained checkpoints achieve competitive results to state-of-the-art 
# ASR systems, with near 3% word error rate (WER) on the test-clean subset of 
# LibriSpeech ASR and a new state-of-the-art on TED-LIUM with 4.7% WER (_c.f._ 
# Table 8 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).
# The extensive multilingual ASR knowledge acquired by Whisper during pre-training 
# can be leveraged for other low-resource languages; through fine-tuning, the 
# pre-trained checkpoints can be adapted for specific datasets and languages 
# to further improve upon these results. We'll show just how Whisper can be fine-tuned 
# for low-resource languages in this Colab.

# <figure>
# <img src="https://raw.githubusercontent.com/sanchit-gandhi/notebooks/main/whisper_architecture.svg" alt="Whisper architecture" style="width:100%">
# <figcaption align = "center"><b>Figure 1:</b> Whisper model. The architecture 
# follows the standard Transformer-based encoder-decoder model. A 
# log-Mel spectrogram is input to the encoder. The last encoder 
# hidden states are input to the decoder via cross-attention mechanisms. The 
# decoder autoregressively predicts text tokens, jointly conditional on the 
# encoder hidden states and previously predicted tokens. Figure source: 
# <a href="https://openai.com/blog/whisper/">OpenAI Whisper Blog</a>.</figcaption>
# </figure>

# The Whisper checkpoints come in five configurations of varying model sizes.
# The smallest four are trained on either English-only or multilingual data.
# The largest checkpoint is multilingual only. All nine of the pre-trained checkpoints 
# are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The 
# checkpoints are summarised in the following table with links to the models on the Hub:
# 
# | Size   | Layers | Width | Heads | Parameters | English-only                                         | Multilingual                                      |
# |--------|--------|-------|-------|------------|------------------------------------------------------|---------------------------------------------------|
# | tiny   | 4      | 384   | 6     | 39 M       | [✓](https://huggingface.co/openai/whisper-tiny.en)   | [✓](https://huggingface.co/openai/whisper-tiny)   |
# | base   | 6      | 512   | 8     | 74 M       | [✓](https://huggingface.co/openai/whisper-base.en)   | [✓](https://huggingface.co/openai/whisper-base)   |
# | small  | 12     | 768   | 12    | 244 M      | [✓](https://huggingface.co/openai/whisper-small.en)  | [✓](https://huggingface.co/openai/whisper-small)  |
# | medium | 24     | 1024  | 16    | 769 M      | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
# | large  | 32     | 1280  | 20    | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large)  |
# 
# For demonstration purposes, we'll fine-tune the multilingual version of the 
# [`"small"`](https://huggingface.co/openai/whisper-small) checkpoint with 244M params (~= 1GB). 
# As for our data, we'll train and evaluate our system on a low-resource language 
# taken from the [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)
# dataset. We'll show that with as little as 8 hours of fine-tuning data, we can achieve 
# strong performance in this language.

# ------------------------------------------------------------------------
# 
# \\({}^1\\) The name Whisper follows from the acronym “WSPSR”, which stands for “Web-scale Supervised Pre-training for Speech Recognition”.

# ## Load Dataset

# Using 🤗 Datasets, downloading and preparing data is extremely simple. 
# We can download and prepare the Common Voice splits in just one line of code. 
# 
# First, ensure you have accepted the terms of use on the Hugging Face Hub: [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0). Once you have accepted the terms, you will have full access to the dataset and be able to download the data locally.
# 
# Since Hindi is very low-resource, we'll combine the `train` and `validation` 
# splits to give approximately 8 hours of training data. We'll use the 4 hours 
# of `test` data as our held-out test set:

# In[1]:


from datasets import load_dataset, DatasetDict

common_voice = DatasetDict()

common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train+validation", token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="test", token=True)

print(common_voice)


# Most ASR datasets only provide input audio samples (`audio`) and the 
# corresponding transcribed text (`sentence`). Common Voice contains additional 
# metadata information, such as `accent` and `locale`, which we can disregard for ASR.
# Keeping the notebook as general as possible, we only consider the input audio and
# transcribed text for fine-tuning, discarding the additional metadata information:

# In[2]:


common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])

print(common_voice)


# ## Prepare Feature Extractor, Tokenizer and Data

# The ASR pipeline can be decomposed into three stages: 
# 1) A feature extractor which pre-processes the raw audio-inputs
# 2) The model which performs the sequence-to-sequence mapping 
# 3) A tokenizer which post-processes the model outputs to text format
# 
# In 🤗 Transformers, the Whisper model has an associated feature extractor and tokenizer, 
# called [WhisperFeatureExtractor](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperFeatureExtractor)
# and [WhisperTokenizer](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperTokenizer) 
# respectively.
# 
# We'll go through the details of setting up the feature extractor and tokenizer one by one!

# ### Load WhisperFeatureExtractor

# The Whisper feature extractor performs two operations:
# 1. Pads / truncates the audio inputs to 30s: any audio inputs shorter than 30s are padded to 30s with silence (zeros), and those longer than 30s are truncated to 30s
# 2. Converts the audio inputs to _log-Mel spectrogram_ input features, a visual representation of the audio and the form of the input expected by the Whisper model

# <figure>
# <img src="https://raw.githubusercontent.com/sanchit-gandhi/notebooks/main/spectrogram.jpg" alt="Log-Mel spectrogram" style="width:100%">
# <figcaption align = "center"><b>Figure 2:</b> Conversion of sampled audio array to log-Mel spectrogram.
# Left: sampled 1-dimensional audio signal. Right: corresponding log-Mel spectrogram. Figure source:
# <a href="https://ai.googleblog.com/2019/04/specaugment-new-data-augmentation.html">Google SpecAugment Blog</a>.
# </figcaption>
# </figure>

# We'll load the feature extractor from the pre-trained checkpoint with the default values:

# In[3]:


from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
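
# As a quick sanity check (our own addition, not part of the original
# notebook), we can pass one second of silence through the feature extractor:
# the input is padded to 30s and converted to a log-Mel spectrogram with
# 80 mel bins and 3000 frames:

import numpy as np

dummy_audio = np.zeros(16000, dtype=np.float32)  # 1s of silence at 16kHz
dummy_features = feature_extractor(dummy_audio, sampling_rate=16000).input_features
print(np.array(dummy_features).shape)  # expected: (1, 80, 3000)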


# ### Load WhisperTokenizer

# The Whisper model outputs a sequence of _token ids_. The tokenizer maps each of these token ids to their corresponding text string. For Hindi, we can load the pre-trained tokenizer and use it for fine-tuning without any further modifications. We simply have to 
# specify the target language and the task. These arguments inform the 
# tokenizer to prefix the language and task tokens to the start of encoded 
# label sequences:

# In[4]:


from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="Hindi", task="transcribe")
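
# We can verify that the tokenizer correctly encodes Hindi characters by 
# encoding and decoding the first sample of the Common Voice dataset. When 
# encoding the transcriptions, the tokenizer prepends 'special tokens' to the 
# start and end of the sequence, including the start/end of transcript, 
# language and task tokens:

input_str = common_voice["train"][0]["sentence"]
labels = tokenizer(input_str).input_ids
decoded_with_special = tokenizer.decode(labels, skip_special_tokens=False)
decoded_str = tokenizer.decode(labels, skip_special_tokens=True)

print(f"Input:                 {input_str}")
print(f"Decoded w/ special:    {decoded_with_special}")
print(f"Decoded w/out special: {decoded_str}")
print(f"Are equal:             {input_str == decoded_str}")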


# ### Combine To Create A WhisperProcessor

# To simplify using the feature extractor and tokenizer, we can _wrap_ 
# both into a single `WhisperProcessor` class. This processor object 
# combines the `WhisperFeatureExtractor` and `WhisperTokenizer`, 
# and can be used on the audio inputs and model predictions as required. 
# In doing so, we only need to keep track of two objects during training: 
# the `processor` and the `model`:

# In[5]:


from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="Hindi", task="transcribe")


# ### Prepare Data

# Let's print the first example of the Common Voice dataset to see 
# what form the data is in:

# In[6]:


print(common_voice["train"][0])


# Since our input audio is sampled at 48kHz, we need to _downsample_ it to 
# 16kHz, the sampling rate expected by the Whisper model, prior to passing it 
# to the Whisper feature extractor.
# 
# We'll set the audio inputs to the correct sampling rate using the dataset's 
# [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=cast_column#datasets.DatasetDict.cast_column)
# method. This operation does not change the audio in-place, 
# but rather signals to `datasets` to resample audio samples _on the fly_ the 
# first time that they are loaded:

# In[7]:


from datasets import Audio

common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))


# Re-loading the first audio sample in the Common Voice dataset will resample 
# it to the desired sampling rate:

# In[8]:


print(common_voice["train"][0])


# We'll define our pre-processing strategy. We advise that you **do not** lower-case the transcriptions or remove punctuation unless mixing different datasets. This will enable you to fine-tune Whisper models that can predict punctuation and casing. Later, you will see how we can evaluate the predictions without punctuation or casing, so that the models benefit from the WER improvement obtained by normalising the transcriptions while still predicting fully formatted transcriptions.

# In[9]:


from transformers.models.whisper.english_normalizer import BasicTextNormalizer

do_lower_case = False
do_remove_punctuation = False

normalizer = BasicTextNormalizer()
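
# As a brief illustration (the example string is our own), the normaliser 
# lower-cases the text and strips punctuation, which is why we call `.strip()` 
# on its output below:

print(normalizer("Hello, World!").strip())  # -> hello world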


# Now we can write a function to prepare our data ready for the model:
# 1. We load and resample the audio data by calling `batch["audio"]`. As explained above, 🤗 Datasets performs any necessary resampling operations on the fly.
# 2. We use the feature extractor to compute the log-Mel spectrogram input features from our 1-dimensional audio array.
# 3. We perform any optional pre-processing (lower-case or remove punctuation).
# 4. We encode the transcriptions to label ids through the use of the tokenizer.

# In[10]:


def prepare_dataset(batch):
    # load and (possibly) resample audio data to 16kHz
    audio = batch["audio"]

    # compute log-Mel input features from input audio array 
    batch["input_features"] = processor.feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    # compute input length of audio sample in seconds
    batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]
    
    # optional pre-processing steps
    transcription = batch["sentence"]
    if do_lower_case:
        transcription = transcription.lower()
    if do_remove_punctuation:
        transcription = normalizer(transcription).strip()
    
    # encode target text to label ids
    batch["labels"] = processor.tokenizer(transcription).input_ids
    return batch


# We can apply the data preparation function to all of our training examples using the dataset's `.map` method. The argument `num_proc` specifies how many CPU cores to use. Setting `num_proc` > 1 will enable multiprocessing. If the `.map` method hangs with multiprocessing, set `num_proc=1` and process the dataset sequentially.

# In[11]:


common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=2)


# Finally, we filter any training data with audio samples longer than 30s. These samples would otherwise be truncated by the Whisper feature-extractor which could affect the stability of training. We define a function that returns `True` for samples that are less than 30s, and `False` for those that are longer:

# In[12]:


max_input_length = 30.0

def is_audio_in_length_range(length):
    return length < max_input_length


# We apply our filter function to all samples of our training dataset through 🤗 Datasets' `.filter` method:

# In[13]:


common_voice["train"] = common_voice["train"].filter(
    is_audio_in_length_range,
    input_columns=["input_length"],
)


# ## Training and Evaluation

# Now that we've prepared our data, we're ready to dive into the training pipeline. 
# The [🤗 Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer)
# will do much of the heavy lifting for us. All we have to do is:
# 
# - Define a data collator: the data collator takes our pre-processed data and prepares PyTorch tensors ready for the model.
# 
# - Evaluation metrics: during evaluation, we want to evaluate the model using the [word error rate (WER)](https://huggingface.co/metrics/wer) metric. We need to define a `compute_metrics` function that handles this computation.
# 
# - Load a pre-trained checkpoint: we need to load a pre-trained checkpoint and configure it correctly for training.
# 
# - Define the training configuration: this will be used by the 🤗 Trainer to define the training schedule.
# 
# Once we've fine-tuned the model, we will evaluate it on the test data to verify that we have correctly trained it 
# to transcribe speech in Hindi.

# ### Define a Data Collator

# The data collator for a sequence-to-sequence speech model is unique in the sense that it 
# treats the `input_features` and `labels` independently: the  `input_features` must be 
# handled by the feature extractor and the `labels` by the tokenizer.
# 
# The `input_features` are already padded to 30s and converted to a log-Mel spectrogram 
# of fixed dimension by action of the feature extractor, so all we have to do is convert the `input_features`
# to batched PyTorch tensors. We do this using the feature extractor's `.pad` method with `return_tensors="pt"`.
# 
# The `labels` on the other hand are un-padded. We first pad the sequences
# to the maximum length in the batch using the tokenizer's `.pad` method. The padding tokens 
# are then replaced by `-100` so that these tokens are **not** taken into account when 
# computing the loss. We then cut the BOS token from the start of the label sequence, since 
# it is added again when the labels are shifted right during training.
# 
# We can leverage the `WhisperProcessor` we defined earlier to perform both the 
# feature extractor and the tokenizer operations:

# In[14]:


import torch

from dataclasses import dataclass
from typing import Any, Dict, List, Union

@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need different padding methods
        # first treat the audio inputs by simply returning torch tensors
        input_features = [{"input_features": feature["input_features"]} for feature in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        # get the tokenized label sequences
        label_features = [{"input_ids": feature["labels"]} for feature in features]
        # pad the labels to max length
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        # if a bos token was prepended in the previous tokenization step,
        # cut it here, as it is added again later during training
        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels

        return batch


# Let's initialise the data collator we've just defined:

# In[15]:


data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
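
# As a quick sanity check (our own addition, not part of the original 
# notebook), we can collate the first two training examples and inspect the 
# shapes: the input features are batched to a fixed size, while the labels are 
# padded to the longest label sequence in the batch:

sample_batch = data_collator([common_voice["train"][i] for i in range(2)])
print(sample_batch["input_features"].shape)  # expected: torch.Size([2, 80, 3000])
print(sample_batch["labels"].shape)          # torch.Size([2, <longest label length>])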


# ### Evaluation Metrics

# We'll use the word error rate (WER) metric, the de facto metric for assessing 
# ASR systems. For more information, refer to the WER [docs](https://huggingface.co/metrics/wer). We'll load the WER metric from 🤗 Evaluate:

# In[16]:


import evaluate

metric = evaluate.load("wer")
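
# The WER is the number of substitutions, deletions and insertions needed to 
# turn the prediction into the reference, divided by the number of words in the 
# reference. As a toy example (our own), a prediction that drops one word from 
# a four-word reference incurs one deletion, giving a WER of 1/4 = 0.25:

toy_wer = metric.compute(predictions=["the cat sat"], references=["the cat sat down"])
print(toy_wer)  # 0.25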


# We then simply have to define a function that takes our model 
# predictions and returns the WER metric. This function, called
# `compute_metrics`, first replaces `-100` with the `pad_token_id`
# in the `label_ids` (undoing the step we applied in the 
# data collator to ignore padded tokens correctly in the loss).
# It then decodes the predicted and label ids to strings. Finally,
# it computes the WER between the predictions and reference labels. 
# Here, we have the option of evaluating with the 'normalised' transcriptions 
# and predictions. We recommend you set this to `True` to benefit from the WER 
# improvement obtained by normalising the transcriptions.

# In[17]:


# evaluate with the 'normalised' WER
do_normalize_eval = True

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids

    # replace -100 with the pad_token_id
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id

    # we do not want to group tokens when computing the metrics
    pred_str = processor.tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.tokenizer.batch_decode(label_ids, skip_special_tokens=True)

    if do_normalize_eval:
        pred_str = [normalizer(pred) for pred in pred_str]
        label_str = [normalizer(label) for label in label_str]

    wer = 100 * metric.compute(predictions=pred_str, references=label_str)

    return {"wer": wer}


# ### Load a Pre-Trained Checkpoint

# Now let's load the pre-trained Whisper `small` checkpoint. Again, this 
# is trivial through use of 🤗 Transformers!

# In[18]:


from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.generation_config.language = "hi"  # define your language of choice here


# Override generation arguments - no tokens are forced as decoder outputs (see [`forced_decoder_ids`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.forced_decoder_ids)), no tokens are suppressed during generation (see [`suppress_tokens`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.suppress_tokens)). Set `use_cache` to False since we're using gradient checkpointing, and the two are incompatible:

# In[19]:


model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
model.config.use_cache = False


# ### Define the Training Configuration

# In the final step, we define all the parameters related to training. For more detail on the training arguments, refer to the Seq2SeqTrainingArguments [docs](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments).

# In[20]:


from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    gradient_checkpointing=True,
    fp16=True,
    evaluation_strategy="steps",
    per_device_eval_batch_size=4,
    predict_with_generate=True,
    generation_max_length=225,
    save_steps=1000,
    eval_steps=1000,
    logging_steps=25,
    report_to=["tensorboard"],
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    push_to_hub=True,
)


# **Note**: if one does not want to upload the model checkpoints to the Hub, 
# set `push_to_hub=False`.

# We can forward the training arguments to the 🤗 Trainer along with our model,
# dataset, data collator and `compute_metrics` function:

# In[21]:


from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    tokenizer=processor.feature_extractor,
)


# We'll save the processor object once before starting training. Since the processor is not trainable, it won't change over the course of training:

# In[22]:


processor.save_pretrained(training_args.output_dir)


# ### Training

# Training will take approximately 5-10 hours depending on your GPU. The peak GPU memory for the given training configuration is approximately 36GB. 
# Depending on your GPU, it is possible that you will encounter a CUDA `"out-of-memory"` error when you launch training. 
# In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 
# and employ [`gradient_accumulation_steps`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.gradient_accumulation_steps)
# to compensate.
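# 
# For example (a sketch, not part of the original configuration), halving the 
# per-device batch size while doubling the gradient accumulation steps keeps 
# the effective batch size at 8 * 8 = 64:
# 
#     per_device_train_batch_size=4,   # halved from 8
#     gradient_accumulation_steps=16,  # doubled from 8 to compensate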
# 
# To launch training, simply execute:

# In[ ]:


trainer.train()


# We can label our checkpoint with the `whisper-event` tag on push by setting the appropriate key-word arguments (kwargs):

# In[ ]:


kwargs = {
    "dataset_tags": "mozilla-foundation/common_voice_11_0",
    "dataset": "Common Voice 11.0",  # a 'pretty' name for the training dataset
    "language": "hi",
    "model_name": "Whisper Small Hi - Sanchit Gandhi",  # a 'pretty' name for your model
    "finetuned_from": "openai/whisper-small",
    "tasks": "automatic-speech-recognition",
    "tags": "whisper-event",
}


# The training results can now be uploaded to the Hub. To do so, execute the `push_to_hub` command; this uploads the model weights, training logs and the processor we saved earlier:

# In[ ]:


trainer.push_to_hub(**kwargs)


# ## Closing Remarks

# In this notebook, we covered a step-by-step guide on fine-tuning Whisper for multilingual ASR 
# using 🤗 Datasets, Transformers and the Hugging Face Hub. For more details on the Whisper model, the Common Voice dataset and the theory behind fine-tuning, refer to the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). If you're interested in fine-tuning other 
# Transformers models, both for English and multilingual ASR, be sure to check out the 
# examples scripts at [examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).