ashraf-ali committed on
Commit
d14c2ac
1 Parent(s): dc85497

Add notebook

fine_tune_whisper.ipynb ADDED
@@ -0,0 +1,1363 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6",
6
+ "metadata": {
7
+ "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6"
8
+ },
9
+ "source": [
10
+ "# Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers"
11
+ ]
12
+ },
13
+ {
14
+ "cell_type": "markdown",
15
+ "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a",
16
+ "metadata": {
17
+ "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a"
18
+ },
19
+ "source": [
20
+ "In this Colab, we present a step-by-step guide on how to fine-tune Whisper \n",
21
+ "for any multilingual ASR dataset using Hugging Face 🤗 Transformers. This is a \n",
22
+ "more \"hands-on\" version of the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). \n",
23
+ "For a more in-depth explanation of Whisper, the Common Voice dataset and the theory behind fine-tuning, the reader is advised to refer to the blog post."
24
+ ]
25
+ },
26
+ {
27
+ "cell_type": "markdown",
28
+ "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e",
29
+ "metadata": {
30
+ "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e"
31
+ },
32
+ "source": [
33
+ "## Introduction"
34
+ ]
35
+ },
36
+ {
37
+ "cell_type": "markdown",
38
+ "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0",
39
+ "metadata": {
40
+ "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0"
41
+ },
42
+ "source": [
43
+ "Whisper is a pre-trained model for automatic speech recognition (ASR) \n",
44
+ "published in [September 2022](https://openai.com/blog/whisper/) by the authors \n",
45
+ "Alec Radford et al. from OpenAI. Unlike many of its predecessors, such as \n",
46
+ "[Wav2Vec 2.0](https://arxiv.org/abs/2006.11477), which are pre-trained \n",
47
+ "on un-labelled audio data, Whisper is pre-trained on a vast quantity of \n",
48
+ "**labelled** audio-transcription data, 680,000 hours to be precise. \n",
49
+ "This is an order of magnitude more data than the un-labelled audio data used \n",
50
+ "to train Wav2Vec 2.0 (60,000 hours). What is more, 117,000 hours of this \n",
51
+ "pre-training data is multilingual ASR data. This results in checkpoints \n",
52
+ "that can be applied to over 96 languages, many of which are considered \n",
53
+ "_low-resource_.\n",
54
+ "\n",
55
+ "When scaled to 680,000 hours of labelled pre-training data, Whisper models \n",
56
+ "demonstrate a strong ability to generalise to many datasets and domains.\n",
57
+ "The pre-trained checkpoints achieve competitive results to state-of-the-art \n",
58
+ "ASR systems, with near 3% word error rate (WER) on the test-clean subset of \n",
59
+ "LibriSpeech ASR and a new state-of-the-art on TED-LIUM with 4.7% WER (_c.f._ \n",
60
+ "Table 8 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).\n",
61
+ "The extensive multilingual ASR knowledge acquired by Whisper during pre-training \n",
62
+ "can be leveraged for other low-resource languages; through fine-tuning, the \n",
63
+ "pre-trained checkpoints can be adapted for specific datasets and languages \n",
64
+ "to further improve upon these results. We'll show just how Whisper can be fine-tuned \n",
65
+ "for low-resource languages in this Colab."
66
+ ]
67
+ },
68
+ {
69
+ "cell_type": "markdown",
70
+ "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72",
71
+ "metadata": {
72
+ "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72"
73
+ },
74
+ "source": [
75
+ "<figure>\n",
76
+ "<img src=\"https://raw.githubusercontent.com/sanchit-gandhi/notebooks/main/whisper_architecture.svg\" alt=\"Trulli\" style=\"width:100%\">\n",
77
+ "<figcaption align = \"center\"><b>Figure 1:</b> Whisper model. The architecture \n",
78
+ "follows the standard Transformer-based encoder-decoder model. A \n",
79
+ "log-Mel spectrogram is input to the encoder. The last encoder \n",
80
+ "hidden states are input to the decoder via cross-attention mechanisms. The \n",
81
+ "decoder autoregressively predicts text tokens, jointly conditional on the \n",
82
+ "encoder hidden states and previously predicted tokens. Figure source: \n",
83
+ "<a href=\"https://openai.com/blog/whisper/\">OpenAI Whisper Blog</a>.</figcaption>\n",
84
+ "</figure>"
85
+ ]
86
+ },
87
+ {
88
+ "cell_type": "markdown",
89
+ "id": "21b6316e-8a55-4549-a154-66d3da2ab74a",
90
+ "metadata": {
91
+ "id": "21b6316e-8a55-4549-a154-66d3da2ab74a"
92
+ },
93
+ "source": [
94
+ "The Whisper checkpoints come in five configurations of varying model sizes.\n",
95
+ "The smallest four are trained on either English-only or multilingual data.\n",
96
+ "The largest checkpoint is multilingual only. All nine of the pre-trained checkpoints \n",
97
+ "are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The \n",
98
+ "checkpoints are summarised in the following table with links to the models on the Hub:\n",
99
+ "\n",
100
+ "| Size | Layers | Width | Heads | Parameters | English-only | Multilingual |\n",
101
+ "|--------|--------|-------|-------|------------|------------------------------------------------------|---------------------------------------------------|\n",
102
+ "| tiny | 4 | 384 | 6 | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny.) |\n",
103
+ "| base | 6 | 512 | 8 | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |\n",
104
+ "| small | 12 | 768 | 12 | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |\n",
105
+ "| medium | 24 | 1024 | 16 | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |\n",
106
+ "| large | 32 | 1280 | 20 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |\n",
107
+ "\n",
108
+ "For demonstration purposes, we'll fine-tune the multilingual version of the \n",
109
+ "[`\"small\"`](https://huggingface.co/openai/whisper-small) checkpoint with 244M params (~= 1GB). \n",
110
+ "As for our data, we'll train and evaluate our system on a low-resource language \n",
111
+ "taken from the [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)\n",
112
+ "dataset. We'll show that with as little as 8 hours of fine-tuning data, we can achieve \n",
113
+ "strong performance in this language."
114
+ ]
115
+ },
116
+ {
117
+ "cell_type": "markdown",
118
+ "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a",
119
+ "metadata": {
120
+ "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a"
121
+ },
122
+ "source": [
123
+ "------------------------------------------------------------------------\n",
124
+ "\n",
125
+ "\\\\({}^1\\\\) The name Whisper follows from the acronym “WSPSR”, which stands for “Web-scale Supervised Pre-training for Speech Recognition”."
126
+ ]
127
+ },
128
+ {
129
+ "cell_type": "markdown",
130
+ "id": "55fb8d21-df06-472a-99dd-b59567be6dad",
131
+ "metadata": {
132
+ "id": "55fb8d21-df06-472a-99dd-b59567be6dad"
133
+ },
134
+ "source": [
135
+ "## Prepare Environment"
136
+ ]
137
+ },
138
+ {
139
+ "cell_type": "markdown",
140
+ "id": "844a4861-929c-4762-b29b-80b1e95aba4b",
141
+ "metadata": {
142
+ "id": "844a4861-929c-4762-b29b-80b1e95aba4b"
143
+ },
144
+ "source": [
145
+ "First of all, let's try to secure a decent GPU for our Colab! Unfortunately, it's becoming much harder to get access to a good GPU with the free version of Google Colab. However, with Google Colab Pro one should have no issues in being allocated a V100 or P100 GPU.\n",
146
+ "\n",
147
+ "To get a GPU, click _Runtime_ -> _Change runtime type_, then change _Hardware accelerator_ from _None_ to _GPU_."
148
+ ]
149
+ },
150
+ {
151
+ "cell_type": "markdown",
152
+ "id": "9abea5d7-9d54-434b-a6bd-399d1b3c6c1a",
153
+ "metadata": {
154
+ "id": "9abea5d7-9d54-434b-a6bd-399d1b3c6c1a"
155
+ },
156
+ "source": [
157
+ "We can verify that we've been assigned a GPU and view its specifications:"
158
+ ]
159
+ },
160
+ {
161
+ "cell_type": "code",
162
+ "execution_count": 1,
163
+ "id": "95048026-a3b7-43f0-a274-1bad65e407b4",
164
+ "metadata": {
165
+ "id": "95048026-a3b7-43f0-a274-1bad65e407b4"
166
+ },
167
+ "outputs": [
168
+ {
169
+ "name": "stdout",
170
+ "output_type": "stream",
171
+ "text": [
172
+ "zsh:1: command not found: nvidia-smi\n"
173
+ ]
174
+ }
175
+ ],
176
+ "source": [
177
+ "gpu_info = !nvidia-smi\n",
178
+ "gpu_info = '\\n'.join(gpu_info)\n",
179
+ "if gpu_info.find('failed') >= 0:\n",
180
+ " print('Not connected to a GPU')\n",
181
+ "else:\n",
182
+ " print(gpu_info)"
183
+ ]
184
+ },
185
+ {
186
+ "cell_type": "markdown",
187
+ "id": "9cd52dc1-ade1-44bb-a2d7-2ed98f110fed",
188
+ "metadata": {
189
+ "id": "9cd52dc1-ade1-44bb-a2d7-2ed98f110fed"
190
+ },
191
+ "source": [
192
+ "Next, we need to update the Unix package `ffmpeg` to version 4:"
193
+ ]
194
+ },
195
+ {
196
+ "cell_type": "code",
197
+ "execution_count": null,
198
+ "id": "69ee227d-60c5-44bf-b04d-c2092f997454",
199
+ "metadata": {
200
+ "id": "69ee227d-60c5-44bf-b04d-c2092f997454"
201
+ },
202
+ "outputs": [],
203
+ "source": [
204
+ "!add-apt-repository -y ppa:jonathonf/ffmpeg-4\n",
205
+ "!apt update\n",
206
+ "!apt install -y ffmpeg"
207
+ ]
208
+ },
209
+ {
210
+ "cell_type": "markdown",
211
+ "id": "1d85d613-1c7e-46ac-9134-660bbe7ebc9d",
212
+ "metadata": {
213
+ "id": "1d85d613-1c7e-46ac-9134-660bbe7ebc9d"
214
+ },
215
+ "source": [
216
+ "We'll employ several popular Python packages to fine-tune the Whisper model.\n",
217
+ "We'll use `datasets` to download and prepare our training data and \n",
218
+ "`transformers` to load and train our Whisper model. We'll also require\n",
219
+ "the `soundfile` package to pre-process audio files, `evaluate` and `jiwer` to\n",
220
+ "assess the performance of our model. Finally, we'll\n",
221
+ "use `gradio` to build a flashy demo of our fine-tuned model."
222
+ ]
223
+ },
224
+ {
225
+ "cell_type": "code",
226
+ "execution_count": null,
227
+ "id": "e68ea9f8-9b61-414e-8885-3033b67c2850",
228
+ "metadata": {
229
+ "id": "e68ea9f8-9b61-414e-8885-3033b67c2850"
230
+ },
231
+ "outputs": [],
232
+ "source": [
233
+ "!pip install datasets>=2.6.1\n",
234
+ "!pip install git+https://github.com/huggingface/transformers\n",
235
+ "!pip install librosa\n",
236
+ "!pip install evaluate>=0.30\n",
237
+ "!pip install jiwer\n",
238
+ "!pip install gradio"
239
+ ]
240
+ },
241
+ {
242
+ "cell_type": "markdown",
243
+ "id": "1f60d173-8de1-4ed7-bc9a-d281cf237203",
244
+ "metadata": {
245
+ "id": "1f60d173-8de1-4ed7-bc9a-d281cf237203"
246
+ },
247
+ "source": [
248
+ "We strongly advise you to upload model checkpoints directly the [Hugging Face Hub](https://huggingface.co/) \n",
249
+ "whilst training. The Hub provides:\n",
250
+ "- Integrated version control: you can be sure that no model checkpoint is lost during training.\n",
251
+ "- Tensorboard logs: track important metrics over the course of training.\n",
252
+ "- Model cards: document what a model does and its intended use cases.\n",
253
+ "- Community: an easy way to share and collaborate with the community!\n",
254
+ "\n",
255
+ "Linking the notebook to the Hub is straightforward - it simply requires entering your \n",
256
+ "Hub authentication token when prompted. Find your Hub authentication token [here](https://huggingface.co/settings/tokens):"
257
+ ]
258
+ },
259
+ {
260
+ "cell_type": "code",
261
+ "execution_count": null,
262
+ "id": "b045a39e-2a3e-4153-bdb5-281500bcd348",
263
+ "metadata": {
264
+ "id": "b045a39e-2a3e-4153-bdb5-281500bcd348"
265
+ },
266
+ "outputs": [],
267
+ "source": [
268
+ "from huggingface_hub import notebook_login\n",
269
+ "\n",
270
+ "notebook_login()"
271
+ ]
272
+ },
273
+ {
274
+ "cell_type": "markdown",
275
+ "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0",
276
+ "metadata": {
277
+ "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0"
278
+ },
279
+ "source": [
280
+ "## Load Dataset"
281
+ ]
282
+ },
283
+ {
284
+ "cell_type": "markdown",
285
+ "id": "674429c5-0ab4-4adf-975b-621bb69eca38",
286
+ "metadata": {
287
+ "id": "674429c5-0ab4-4adf-975b-621bb69eca38"
288
+ },
289
+ "source": [
290
+ "Using 🤗 Datasets, downloading and preparing data is extremely simple. \n",
291
+ "We can download and prepare the Common Voice splits in just one line of code. \n",
292
+ "\n",
293
+ "First, ensure you have accepted the terms of use on the Hugging Face Hub: [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0). Once you have accepted the terms, you will have full access to the dataset and be able to download the data locally.\n",
294
+ "\n",
295
+ "Since Hindi is very low-resource, we'll combine the `train` and `validation` \n",
296
+ "splits to give approximately 8 hours of training data. We'll use the 4 hours \n",
297
+ "of `test` data as our held-out test set:"
298
+ ]
299
+ },
300
+ {
301
+ "cell_type": "code",
302
+ "execution_count": null,
303
+ "id": "a2787582-554f-44ce-9f38-4180a5ed6b44",
304
+ "metadata": {
305
+ "id": "a2787582-554f-44ce-9f38-4180a5ed6b44"
306
+ },
307
+ "outputs": [],
308
+ "source": [
309
+ "from datasets import load_dataset, DatasetDict\n",
310
+ "\n",
311
+ "common_voice = DatasetDict()\n",
312
+ "\n",
313
+ "common_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"hi\", split=\"train+validation\", use_auth_token=True)\n",
314
+ "common_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"hi\", split=\"test\", use_auth_token=True)\n",
315
+ "\n",
316
+ "print(common_voice)"
317
+ ]
318
+ },
319
+ {
320
+ "cell_type": "markdown",
321
+ "id": "d5c7c3d6-7197-41e7-a088-49b753c1681f",
322
+ "metadata": {
323
+ "id": "d5c7c3d6-7197-41e7-a088-49b753c1681f"
324
+ },
325
+ "source": [
326
+ "Most ASR datasets only provide input audio samples (`audio`) and the \n",
327
+ "corresponding transcribed text (`sentence`). Common Voice contains additional \n",
328
+ "metadata information, such as `accent` and `locale`, which we can disregard for ASR.\n",
329
+ "Keeping the notebook as general as possible, we only consider the input audio and\n",
330
+ "transcribed text for fine-tuning, discarding the additional metadata information:"
331
+ ]
332
+ },
333
+ {
334
+ "cell_type": "code",
335
+ "execution_count": null,
336
+ "id": "20ba635d-518c-47ac-97ee-3cad25f1e0ce",
337
+ "metadata": {
338
+ "id": "20ba635d-518c-47ac-97ee-3cad25f1e0ce"
339
+ },
340
+ "outputs": [],
341
+ "source": [
342
+ "common_voice = common_voice.remove_columns([\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"path\", \"segment\", \"up_votes\"])\n",
343
+ "\n",
344
+ "print(common_voice)"
345
+ ]
346
+ },
347
+ {
348
+ "cell_type": "markdown",
349
+ "id": "2d63b2d2-f68a-4d74-b7f1-5127f6d16605",
350
+ "metadata": {
351
+ "id": "2d63b2d2-f68a-4d74-b7f1-5127f6d16605"
352
+ },
353
+ "source": [
354
+ "## Prepare Feature Extractor, Tokenizer and Data"
355
+ ]
356
+ },
357
+ {
358
+ "cell_type": "markdown",
359
+ "id": "601c3099-1026-439e-93e2-5635b3ba5a73",
360
+ "metadata": {
361
+ "id": "601c3099-1026-439e-93e2-5635b3ba5a73"
362
+ },
363
+ "source": [
364
+ "The ASR pipeline can be de-composed into three stages: \n",
365
+ "1) A feature extractor which pre-processes the raw audio-inputs\n",
366
+ "2) The model which performs the sequence-to-sequence mapping \n",
367
+ "3) A tokenizer which post-processes the model outputs to text format\n",
368
+ "\n",
369
+ "In 🤗 Transformers, the Whisper model has an associated feature extractor and tokenizer, \n",
370
+ "called [WhisperFeatureExtractor](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperFeatureExtractor)\n",
371
+ "and [WhisperTokenizer](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperTokenizer) \n",
372
+ "respectively.\n",
373
+ "\n",
374
+ "We'll go through details for setting-up the feature extractor and tokenizer one-by-one!"
375
+ ]
376
+ },
377
+ {
378
+ "cell_type": "markdown",
379
+ "id": "560332eb-3558-41a1-b500-e83a9f695f84",
380
+ "metadata": {
381
+ "id": "560332eb-3558-41a1-b500-e83a9f695f84"
382
+ },
383
+ "source": [
384
+ "### Load WhisperFeatureExtractor"
385
+ ]
386
+ },
387
+ {
388
+ "cell_type": "markdown",
389
+ "id": "32ec8068-0bd7-412d-b662-0edb9d1e7365",
390
+ "metadata": {
391
+ "id": "32ec8068-0bd7-412d-b662-0edb9d1e7365"
392
+ },
393
+ "source": [
394
+ "The Whisper feature extractor performs two operations:\n",
395
+ "1. Pads / truncates the audio inputs to 30s: any audio inputs shorter than 30s are padded to 30s with silence (zeros), and those longer that 30s are truncated to 30s\n",
396
+ "2. Converts the audio inputs to _log-Mel spectrogram_ input features, a visual representation of the audio and the form of the input expected by the Whisper model"
397
+ ]
398
+ },
399
+ {
400
+ "cell_type": "markdown",
401
+ "id": "589d9ec1-d12b-4b64-93f7-04c63997da19",
402
+ "metadata": {
403
+ "id": "589d9ec1-d12b-4b64-93f7-04c63997da19"
404
+ },
405
+ "source": [
406
+ "<figure>\n",
407
+ "<img src=\"https://raw.githubusercontent.com/sanchit-gandhi/notebooks/main/spectrogram.jpg\" alt=\"Trulli\" style=\"width:100%\">\n",
408
+ "<figcaption align = \"center\"><b>Figure 2:</b> Conversion of sampled audio array to log-Mel spectrogram.\n",
409
+ "Left: sampled 1-dimensional audio signal. Right: corresponding log-Mel spectrogram. Figure source:\n",
410
+ "<a href=\"https://ai.googleblog.com/2019/04/specaugment-new-data-augmentation.html\">Google SpecAugment Blog</a>.\n",
411
+ "</figcaption>"
412
+ ]
413
+ },
414
+ {
415
+ "cell_type": "markdown",
416
+ "id": "b2ef54d5-b946-4c1d-9fdc-adc5d01b46aa",
417
+ "metadata": {
418
+ "id": "b2ef54d5-b946-4c1d-9fdc-adc5d01b46aa"
419
+ },
420
+ "source": [
421
+ "We'll load the feature extractor from the pre-trained checkpoint with the default values:"
422
+ ]
423
+ },
424
+ {
425
+ "cell_type": "code",
426
+ "execution_count": null,
427
+ "id": "bc77d7bb-f9e2-47f5-b663-30f7a4321ce5",
428
+ "metadata": {
429
+ "id": "bc77d7bb-f9e2-47f5-b663-30f7a4321ce5"
430
+ },
431
+ "outputs": [],
432
+ "source": [
433
+ "from transformers import WhisperFeatureExtractor\n",
434
+ "\n",
435
+ "feature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-small\")"
436
+ ]
437
+ },
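+ {
+ "cell_type": "markdown",
+ "id": "feature-extractor-check-md",
+ "metadata": {},
+ "source": [
+ "As a quick, optional sanity check, we can pass a short dummy audio array through the \n",
+ "feature extractor and inspect the resulting log-Mel input features. The one second of \n",
+ "silence below is purely illustrative; any 16kHz waveform would do. The feature extractor \n",
+ "pads it to 30s and returns a fixed-size spectrogram of 80 Mel bins by 3000 frames:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "feature-extractor-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "\n",
+ "# purely illustrative: one second of silence sampled at 16kHz\n",
+ "dummy_audio = np.zeros(16000, dtype=np.float32)\n",
+ "\n",
+ "# the feature extractor pads to 30s and converts to log-Mel input features\n",
+ "dummy_features = feature_extractor(dummy_audio, sampling_rate=16000).input_features[0]\n",
+ "print(np.array(dummy_features).shape)  # expected: (80, 3000)"
+ ]
+ },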
438
+ {
439
+ "cell_type": "markdown",
440
+ "id": "93748af7-b917-4ecf-a0c8-7d89077ff9cb",
441
+ "metadata": {
442
+ "id": "93748af7-b917-4ecf-a0c8-7d89077ff9cb"
443
+ },
444
+ "source": [
445
+ "### Load WhisperTokenizer"
446
+ ]
447
+ },
448
+ {
449
+ "cell_type": "markdown",
450
+ "id": "2bc82609-a9fb-447a-a2af-99597c864029",
451
+ "metadata": {
452
+ "id": "2bc82609-a9fb-447a-a2af-99597c864029"
453
+ },
454
+ "source": [
455
+ "The Whisper model outputs a sequence of _token ids_. The tokenizer maps each of these token ids to their corresponding text string. For Hindi, we can load the pre-trained tokenizer and use it for fine-tuning without any further modifications. We simply have to \n",
456
+ "specify the target language and the task. These arguments inform the \n",
457
+ "tokenizer to prefix the language and task tokens to the start of encoded \n",
458
+ "label sequences:"
459
+ ]
460
+ },
461
+ {
462
+ "cell_type": "code",
463
+ "execution_count": null,
464
+ "id": "c7b07f9b-ae0e-4f89-98f0-0c50d432eab6",
465
+ "metadata": {
466
+ "id": "c7b07f9b-ae0e-4f89-98f0-0c50d432eab6",
467
+ "outputId": "5c004b44-86e7-4e00-88be-39e0af5eed69"
468
+ },
469
+ "outputs": [
470
+ {
471
+ "data": {
472
+ "application/vnd.jupyter.widget-view+json": {
473
+ "model_id": "90d056e20b3e4f14ae0199a1a4ab1bb0",
474
+ "version_major": 2,
475
+ "version_minor": 0
476
+ },
477
+ "text/plain": [
478
+ "Downloading: 0%| | 0.00/829 [00:00<?, ?B/s]"
479
+ ]
480
+ },
481
+ "metadata": {},
482
+ "output_type": "display_data"
483
+ },
484
+ {
485
+ "data": {
486
+ "application/vnd.jupyter.widget-view+json": {
487
+ "model_id": "d82a88daec0e4f14add691b7b903064c",
488
+ "version_major": 2,
489
+ "version_minor": 0
490
+ },
491
+ "text/plain": [
492
+ "Downloading: 0%| | 0.00/1.04M [00:00<?, ?B/s]"
493
+ ]
494
+ },
495
+ "metadata": {},
496
+ "output_type": "display_data"
497
+ },
498
+ {
499
+ "data": {
500
+ "application/vnd.jupyter.widget-view+json": {
501
+ "model_id": "350acdb0f40e454099fa901e66de55f0",
502
+ "version_major": 2,
503
+ "version_minor": 0
504
+ },
505
+ "text/plain": [
506
+ "Downloading: 0%| | 0.00/494k [00:00<?, ?B/s]"
507
+ ]
508
+ },
509
+ "metadata": {},
510
+ "output_type": "display_data"
511
+ },
512
+ {
513
+ "data": {
514
+ "application/vnd.jupyter.widget-view+json": {
515
+ "model_id": "2e6a82a462cc411d90fa1bea4ee60790",
516
+ "version_major": 2,
517
+ "version_minor": 0
518
+ },
519
+ "text/plain": [
520
+ "Downloading: 0%| | 0.00/52.7k [00:00<?, ?B/s]"
521
+ ]
522
+ },
523
+ "metadata": {},
524
+ "output_type": "display_data"
525
+ },
526
+ {
527
+ "data": {
528
+ "application/vnd.jupyter.widget-view+json": {
529
+ "model_id": "c74bfee0198b4817832ea86e8e88d96c",
530
+ "version_major": 2,
531
+ "version_minor": 0
532
+ },
533
+ "text/plain": [
534
+ "Downloading: 0%| | 0.00/2.11k [00:00<?, ?B/s]"
535
+ ]
536
+ },
537
+ "metadata": {},
538
+ "output_type": "display_data"
539
+ },
540
+ {
541
+ "data": {
542
+ "application/vnd.jupyter.widget-view+json": {
543
+ "model_id": "04fb2d81eff646068e10475a08ae42f4",
544
+ "version_major": 2,
545
+ "version_minor": 0
546
+ },
547
+ "text/plain": [
548
+ "Downloading: 0%| | 0.00/2.06k [00:00<?, ?B/s]"
549
+ ]
550
+ },
551
+ "metadata": {},
552
+ "output_type": "display_data"
553
+ }
554
+ ],
555
+ "source": [
556
+ "from transformers import WhisperTokenizer\n",
557
+ "\n",
558
+ "tokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-small\", language=\"Hindi\", task=\"transcribe\")"
559
+ ]
560
+ },
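+ {
+ "cell_type": "markdown",
+ "id": "tokenizer-roundtrip-md",
+ "metadata": {},
+ "source": [
+ "We can verify that the tokenizer correctly handles Hindi characters by encoding and \n",
+ "then decoding the first sample of the Common Voice dataset (an optional check). When \n",
+ "encoding the transcription, the tokenizer prepends special tokens to the sequence, \n",
+ "including the start/end of transcript, language and task tokens. When decoding, we \n",
+ "can choose whether to keep or strip these special tokens:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "tokenizer-roundtrip-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "input_str = common_voice[\"train\"][0][\"sentence\"]\n",
+ "labels = tokenizer(input_str).input_ids\n",
+ "decoded_with_special = tokenizer.decode(labels, skip_special_tokens=False)\n",
+ "decoded_str = tokenizer.decode(labels, skip_special_tokens=True)\n",
+ "\n",
+ "print(f\"Input:                 {input_str}\")\n",
+ "print(f\"Decoded w/ special:    {decoded_with_special}\")\n",
+ "print(f\"Decoded w/out special: {decoded_str}\")\n",
+ "print(f\"Are equal:             {input_str == decoded_str}\")"
+ ]
+ },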
561
+ {
562
+ "cell_type": "markdown",
563
+ "id": "d2ef23f3-f4a8-483a-a2dc-080a7496cb1b",
564
+ "metadata": {
565
+ "id": "d2ef23f3-f4a8-483a-a2dc-080a7496cb1b"
566
+ },
567
+ "source": [
568
+ "### Combine To Create A WhisperProcessor"
569
+ ]
570
+ },
571
+ {
572
+ "cell_type": "markdown",
573
+ "id": "5ff67654-5a29-4bb8-a69d-0228946c6f8d",
574
+ "metadata": {
575
+ "id": "5ff67654-5a29-4bb8-a69d-0228946c6f8d"
576
+ },
577
+ "source": [
578
+ "To simplify using the feature extractor and tokenizer, we can _wrap_ \n",
579
+ "both into a single `WhisperProcessor` class. This processor object \n",
580
+ "inherits from the `WhisperFeatureExtractor` and `WhisperProcessor`, \n",
581
+ "and can be used on the audio inputs and model predictions as required. \n",
582
+ "In doing so, we only need to keep track of two objects during training: \n",
583
+ "the `processor` and the `model`:"
584
+ ]
585
+ },
586
+ {
587
+ "cell_type": "code",
588
+ "execution_count": null,
589
+ "id": "77d9f0c5-8607-4642-a8ac-c3ab2e223ea6",
590
+ "metadata": {
591
+ "id": "77d9f0c5-8607-4642-a8ac-c3ab2e223ea6"
592
+ },
593
+ "outputs": [],
594
+ "source": [
595
+ "from transformers import WhisperProcessor\n",
596
+ "\n",
597
+ "processor = WhisperProcessor.from_pretrained(\"openai/whisper-small\", language=\"Hindi\", task=\"transcribe\")"
598
+ ]
599
+ },
600
+ {
601
+ "cell_type": "markdown",
602
+ "id": "381acd09-0b0f-4d04-9eb3-f028ac0e5f2c",
603
+ "metadata": {
604
+ "id": "381acd09-0b0f-4d04-9eb3-f028ac0e5f2c"
605
+ },
606
+ "source": [
607
+ "### Prepare Data"
608
+ ]
609
+ },
610
+ {
611
+ "cell_type": "markdown",
612
+ "id": "9649bf01-2e8a-45e5-8fca-441c13637b8f",
613
+ "metadata": {
614
+ "id": "9649bf01-2e8a-45e5-8fca-441c13637b8f"
615
+ },
616
+ "source": [
617
+ "Let's print the first example of the Common Voice dataset to see \n",
618
+ "what form the data is in:"
619
+ ]
620
+ },
621
+ {
622
+ "cell_type": "code",
623
+ "execution_count": null,
624
+ "id": "6e6b0ec5-0c94-4e2c-ae24-c791be1b2255",
625
+ "metadata": {
626
+ "id": "6e6b0ec5-0c94-4e2c-ae24-c791be1b2255"
627
+ },
628
+ "outputs": [],
629
+ "source": [
630
+ "print(common_voice[\"train\"][0])"
631
+ ]
632
+ },
633
+ {
634
+ "cell_type": "markdown",
635
+ "id": "5a679f05-063d-41b3-9b58-4fc9c6ccf4fd",
636
+ "metadata": {
637
+ "id": "5a679f05-063d-41b3-9b58-4fc9c6ccf4fd"
638
+ },
639
+ "source": [
640
+ "Since \n",
641
+ "our input audio is sampled at 48kHz, we need to _downsample_ it to \n",
642
+ "16kHz prior to passing it to the Whisper feature extractor, 16kHz being the sampling rate expected by the Whisper model. \n",
643
+ "\n",
644
+ "We'll set the audio inputs to the correct sampling rate using dataset's \n",
645
+ "[`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=cast_column#datasets.DatasetDict.cast_column)\n",
646
+ "method. This operation does not change the audio in-place, \n",
647
+ "but rather signals to `datasets` to resample audio samples _on the fly_ the \n",
648
+ "first time that they are loaded:"
649
+ ]
650
+ },
651
+ {
652
+ "cell_type": "code",
653
+ "execution_count": null,
654
+ "id": "f12e2e57-156f-417b-8cfb-69221cc198e8",
655
+ "metadata": {
656
+ "id": "f12e2e57-156f-417b-8cfb-69221cc198e8"
657
+ },
658
+ "outputs": [],
659
+ "source": [
660
+ "from datasets import Audio\n",
661
+ "\n",
662
+ "common_voice = common_voice.cast_column(\"audio\", Audio(sampling_rate=16000))"
663
+ ]
664
+ },
665
+ {
666
+ "cell_type": "markdown",
667
+ "id": "00382a3e-abec-4cdd-a54c-d1aaa3ea4707",
668
+ "metadata": {
669
+ "id": "00382a3e-abec-4cdd-a54c-d1aaa3ea4707"
670
+ },
671
+ "source": [
672
+ "Re-loading the first audio sample in the Common Voice dataset will resample \n",
673
+ "it to the desired sampling rate:"
674
+ ]
675
+ },
676
+ {
677
+ "cell_type": "code",
678
+ "execution_count": null,
679
+ "id": "87122d71-289a-466a-afcf-fa354b18946b",
680
+ "metadata": {
681
+ "id": "87122d71-289a-466a-afcf-fa354b18946b"
682
+ },
683
+ "outputs": [],
684
+ "source": [
685
+ "print(common_voice[\"train\"][0])"
686
+ ]
687
+ },
688
+ {
689
+ "cell_type": "markdown",
690
+ "id": "91edc72d-08f8-4f01-899d-74e65ce441fc",
691
+ "metadata": {
692
+ "id": "91edc72d-08f8-4f01-899d-74e65ce441fc"
693
+ },
694
+ "source": [
695
+ "Now we can write a function to prepare our data ready for the model:\n",
696
+ "1. We load and resample the audio data by calling `batch[\"audio\"]`. As explained above, 🤗 Datasets performs any necessary resampling operations on the fly.\n",
697
+ "2. We use the feature extractor to compute the log-Mel spectrogram input features from our 1-dimensional audio array.\n",
698
+ "3. We encode the transcriptions to label ids through the use of the tokenizer."
699
+ ]
700
+ },
701
+ {
702
+ "cell_type": "code",
703
+ "execution_count": null,
704
+ "id": "6525c478-8962-4394-a1c4-103c54cce170",
705
+ "metadata": {
706
+ "id": "6525c478-8962-4394-a1c4-103c54cce170"
707
+ },
708
+ "outputs": [],
709
+ "source": [
710
+ "def prepare_dataset(batch):\n",
711
+ " # load and resample audio data from 48 to 16kHz\n",
712
+ " audio = batch[\"audio\"]\n",
713
+ "\n",
714
+ " # compute log-Mel input features from input audio array \n",
715
+ " batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\n",
716
+ "\n",
717
+ " # encode target text to label ids \n",
718
+ " batch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\n",
719
+ " return batch"
720
+ ]
721
+ },
722
+ {
723
+ "cell_type": "markdown",
724
+ "id": "70b319fb-2439-4ef6-a70d-a47bf41c4a13",
725
+ "metadata": {
726
+ "id": "70b319fb-2439-4ef6-a70d-a47bf41c4a13"
727
+ },
728
+ "source": [
729
+ "We can apply the data preparation function to all of our training examples using dataset's `.map` method. The argument `num_proc` specifies how many CPU cores to use. Setting `num_proc` > 1 will enable multiprocessing. If the `.map` method hangs with multiprocessing, set `num_proc=1` and process the dataset sequentially."
730
+ ]
731
+ },
732
+ {
733
+ "cell_type": "code",
734
+ "execution_count": null,
735
+ "id": "7b73ab39-ffaf-4b9e-86e5-782963c6134b",
736
+ "metadata": {
737
+ "id": "7b73ab39-ffaf-4b9e-86e5-782963c6134b"
738
+ },
739
+ "outputs": [],
740
+ "source": [
741
+ "common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=2)"
742
+ ]
743
+ },
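+ {
+ "cell_type": "markdown",
+ "id": "prepared-data-check-md",
+ "metadata": {},
+ "source": [
+ "As an optional sanity check, we can inspect the first prepared training example to \n",
+ "confirm it has the expected form: an 80 x 3000 log-Mel spectrogram under \n",
+ "`input_features` and a list of token ids under `labels`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "prepared-data-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sample = common_voice[\"train\"][0]\n",
+ "\n",
+ "# log-Mel spectrogram dimensions: 80 Mel bins x 3000 frames\n",
+ "print(len(sample[\"input_features\"]), len(sample[\"input_features\"][0]))\n",
+ "\n",
+ "# number of label token ids for the first transcription\n",
+ "print(len(sample[\"labels\"]))"
+ ]
+ },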
744
+ {
745
+ "cell_type": "markdown",
746
+ "id": "263a5a58-0239-4a25-b0df-c625fc9c5810",
747
+ "metadata": {
748
+ "id": "263a5a58-0239-4a25-b0df-c625fc9c5810"
749
+ },
750
+ "source": [
751
+ "## Training and Evaluation"
752
+ ]
753
+ },
754
+ {
755
+ "cell_type": "markdown",
756
+ "id": "a693e768-c5a6-453f-89a1-b601dcf7daf7",
757
+ "metadata": {
758
+ "id": "a693e768-c5a6-453f-89a1-b601dcf7daf7"
759
+ },
760
+ "source": [
761
+ "Now that we've prepared our data, we're ready to dive into the training pipeline. \n",
762
+ "The [🤗 Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer)\n",
763
+ "will do much of the heavy lifting for us. All we have to do is:\n",
764
+ "\n",
765
+ "- Define a data collator: the data collator takes our pre-processed data and prepares PyTorch tensors ready for the model.\n",
766
+ "\n",
767
+ "- Evaluation metrics: during evaluation, we want to evaluate the model using the [word error rate (WER)](https://huggingface.co/metrics/wer) metric. We need to define a `compute_metrics` function that handles this computation.\n",
768
+ "\n",
769
+ "- Load a pre-trained checkpoint: we need to load a pre-trained checkpoint and configure it correctly for training.\n",
770
+ "\n",
771
+ "- Define the training configuration: this will be used by the 🤗 Trainer to define the training schedule.\n",
772
+ "\n",
773
+ "Once we've fine-tuned the model, we will evaluate it on the test data to verify that we have correctly trained it \n",
774
+ "to transcribe speech in Hindi."
775
+ ]
776
+ },
777
+ {
778
+ "cell_type": "markdown",
779
+ "id": "8d230e6d-624c-400a-bbf5-fa660881df25",
780
+ "metadata": {
781
+ "id": "8d230e6d-624c-400a-bbf5-fa660881df25"
782
+ },
783
+ "source": [
784
+ "### Define a Data Collator"
785
+ ]
786
+ },
787
+ {
788
+ "cell_type": "markdown",
789
+ "id": "04def221-0637-4a69-b242-d3f0c1d0ee78",
790
+ "metadata": {
791
+ "id": "04def221-0637-4a69-b242-d3f0c1d0ee78"
792
+ },
793
+ "source": [
794
+ "The data collator for a sequence-to-sequence speech model is unique in the sense that it \n",
795
+ "treats the `input_features` and `labels` independently: the `input_features` must be \n",
796
+ "handled by the feature extractor and the `labels` by the tokenizer.\n",
797
+ "\n",
798
+ "The `input_features` are already padded to 30s and converted to a log-Mel spectrogram \n",
799
+ "of fixed dimension by action of the feature extractor, so all we have to do is convert the `input_features`\n",
800
+ "to batched PyTorch tensors. We do this using the feature extractor's `.pad` method with `return_tensors=pt`.\n",
801
+ "\n",
802
+ "The `labels` on the other hand are un-padded. We first pad the sequences\n",
803
+ "to the maximum length in the batch using the tokenizer's `.pad` method. The padding tokens \n",
804
+ "are then replaced by `-100` so that these tokens are **not** taken into account when \n",
805
+ "computing the loss. We then cut the BOS token from the start of the label sequence as we \n",
806
+ "append it later during training.\n",
807
+ "\n",
808
+ "We can leverage the `WhisperProcessor` we defined earlier to perform both the \n",
809
+ "feature extractor and the tokenizer operations:"
810
+ ]
811
+ },
812
+ {
813
+ "cell_type": "code",
814
+ "execution_count": null,
815
+ "id": "8326221e-ec13-4731-bb4e-51e5fc1486c5",
816
+ "metadata": {
817
+ "id": "8326221e-ec13-4731-bb4e-51e5fc1486c5"
818
+ },
819
+ "outputs": [],
820
+ "source": [
821
+ "import torch\n",
822
+ "\n",
823
+ "from dataclasses import dataclass\n",
824
+ "from typing import Any, Dict, List, Union\n",
825
+ "\n",
826
+ "@dataclass\n",
827
+ "class DataCollatorSpeechSeq2SeqWithPadding:\n",
828
+ " processor: Any\n",
829
+ "\n",
830
+ " def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\n",
831
+ " # split inputs and labels since they have to be of different lengths and need different padding methods\n",
832
+ " # first treat the audio inputs by simply returning torch tensors\n",
833
+ " input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\n",
834
+ " batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\n",
835
+ "\n",
836
+ " # get the tokenized label sequences\n",
837
+ " label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\n",
838
+ " # pad the labels to max length\n",
839
+ " labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\n",
840
+ "\n",
841
+ " # replace padding with -100 to ignore loss correctly\n",
842
+ " labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\n",
843
+ "\n",
844
+ " # if bos token is appended in previous tokenization step,\n",
845
+ " # cut bos token here as it's append later anyways\n",
846
+ " if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\n",
847
+ " labels = labels[:, 1:]\n",
848
+ "\n",
849
+ " batch[\"labels\"] = labels\n",
850
+ "\n",
851
+ " return batch"
852
+ ]
853
+ },
854
+ {
855
+ "cell_type": "markdown",
856
+ "id": "3cae7dbf-8a50-456e-a3a8-7fd005390f86",
857
+ "metadata": {
858
+ "id": "3cae7dbf-8a50-456e-a3a8-7fd005390f86"
859
+ },
860
+ "source": [
861
+ "Let's initialise the data collator we've just defined:"
862
+ ]
863
+ },
864
+ {
865
+ "cell_type": "code",
866
+ "execution_count": null,
867
+ "id": "fc834702-c0d3-4a96-b101-7b87be32bf42",
868
+ "metadata": {
869
+ "id": "fc834702-c0d3-4a96-b101-7b87be32bf42"
870
+ },
871
+ "outputs": [],
872
+ "source": [
873
+ "data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)"
874
+ ]
875
+ },
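+ {
+ "cell_type": "markdown",
+ "id": "collator-check-md",
+ "metadata": {},
+ "source": [
+ "As an optional sanity check, we can collate the first two prepared training examples \n",
+ "into a batch and print the tensor shapes: the input features should be of shape \n",
+ "(batch, 80, 3000) and the labels padded to the longest label sequence in the batch:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "collator-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# collate two prepared examples into a single padded batch of PyTorch tensors\n",
+ "example_batch = data_collator([common_voice[\"train\"][i] for i in range(2)])\n",
+ "print({k: v.shape for k, v in example_batch.items()})"
+ ]
+ },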
876
+ {
877
+ "cell_type": "markdown",
878
+ "id": "d62bb2ab-750a-45e7-82e9-61d6f4805698",
879
+ "metadata": {
880
+ "id": "d62bb2ab-750a-45e7-82e9-61d6f4805698"
881
+ },
882
+ "source": [
883
+ "### Evaluation Metrics"
884
+ ]
885
+ },
886
+ {
887
+ "cell_type": "markdown",
888
+ "id": "66fee1a7-a44c-461e-b047-c3917221572e",
889
+ "metadata": {
890
+ "id": "66fee1a7-a44c-461e-b047-c3917221572e"
891
+ },
892
+ "source": [
893
+ "We'll use the word error rate (WER) metric, the 'de-facto' metric for assessing \n",
894
+ "ASR systems. For more information, refer to the WER [docs](https://huggingface.co/metrics/wer). We'll load the WER metric from 🤗 Evaluate:"
895
+ ]
896
+ },
897
+ {
898
+ "cell_type": "code",
899
+ "execution_count": null,
900
+ "id": "b22b4011-f31f-4b57-b684-c52332f92890",
901
+ "metadata": {
902
+ "id": "b22b4011-f31f-4b57-b684-c52332f92890"
903
+ },
904
+ "outputs": [],
905
+ "source": [
906
+ "import evaluate\n",
907
+ "\n",
908
+ "metric = evaluate.load(\"wer\")"
909
+ ]
910
+ },
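+ {
+ "cell_type": "markdown",
+ "id": "wer-toy-check-md",
+ "metadata": {},
+ "source": [
+ "Before using the metric in training, we can sanity-check it on a made-up pair of \n",
+ "strings: one substituted word out of four gives a WER of 25%:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "wer-toy-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# toy example: one substitution in a four-word reference -> 25% WER\n",
+ "toy_wer = 100 * metric.compute(predictions=[\"the quick brown dog\"], references=[\"the quick brown fox\"])\n",
+ "print(toy_wer)  # expected: 25.0"
+ ]
+ },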
911
+ {
912
+ "cell_type": "markdown",
913
+ "id": "4f32cab6-31f0-4cb9-af4c-40ba0f5fc508",
914
+ "metadata": {
915
+ "id": "4f32cab6-31f0-4cb9-af4c-40ba0f5fc508"
916
+ },
917
+ "source": [
918
+ "We then simply have to define a function that takes our model \n",
919
+ "predictions and returns the WER metric. This function, called\n",
920
+ "`compute_metrics`, first replaces `-100` with the `pad_token_id`\n",
921
+ "in the `label_ids` (undoing the step we applied in the \n",
922
+ "data collator to ignore padded tokens correctly in the loss).\n",
923
+ "It then decodes the predicted and label ids to strings. Finally,\n",
924
+ "it computes the WER between the predictions and reference labels:"
925
+ ]
926
+ },
927
+ {
928
+ "cell_type": "code",
929
+ "execution_count": null,
930
+ "id": "23959a70-22d0-4ffe-9fa1-72b61e75bb52",
931
+ "metadata": {
932
+ "id": "23959a70-22d0-4ffe-9fa1-72b61e75bb52"
933
+ },
934
+ "outputs": [],
935
+ "source": [
936
+ "def compute_metrics(pred):\n",
937
+ " pred_ids = pred.predictions\n",
938
+ " label_ids = pred.label_ids\n",
939
+ "\n",
940
+ " # replace -100 with the pad_token_id\n",
941
+ " label_ids[label_ids == -100] = tokenizer.pad_token_id\n",
942
+ "\n",
943
+ " # we do not want to group tokens when computing the metrics\n",
944
+ " pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\n",
945
+ " label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)\n",
946
+ "\n",
947
+ " wer = 100 * metric.compute(predictions=pred_str, references=label_str)\n",
948
+ "\n",
949
+ " return {\"wer\": wer}"
950
+ ]
951
+ },
952
+ {
953
+ "cell_type": "markdown",
954
+ "id": "daf2a825-6d9f-4a23-b145-c37c0039075b",
955
+ "metadata": {
956
+ "id": "daf2a825-6d9f-4a23-b145-c37c0039075b"
957
+ },
958
+ "source": [
959
+ "### Load a Pre-Trained Checkpoint"
960
+ ]
961
+ },
962
+ {
963
+ "cell_type": "markdown",
964
+ "id": "437a97fa-4864-476b-8abc-f28b8166cfa5",
965
+ "metadata": {
966
+ "id": "437a97fa-4864-476b-8abc-f28b8166cfa5"
967
+ },
968
+ "source": [
969
+ "Now let's load the pre-trained Whisper `small` checkpoint. Again, this \n",
970
+ "is trivial through use of 🤗 Transformers!"
971
+ ]
972
+ },
973
+ {
974
+ "cell_type": "code",
975
+ "execution_count": null,
976
+ "id": "5a10cc4b-07ec-4ebd-ac1d-7c601023594f",
977
+ "metadata": {
978
+ "id": "5a10cc4b-07ec-4ebd-ac1d-7c601023594f"
979
+ },
980
+ "outputs": [],
981
+ "source": [
982
+ "from transformers import WhisperForConditionalGeneration\n",
983
+ "\n",
984
+ "model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")"
985
+ ]
986
+ },
987
+ {
988
+ "cell_type": "markdown",
989
+ "id": "a15ead5f-2277-4a39-937b-585c2497b2df",
990
+ "metadata": {
991
+ "id": "a15ead5f-2277-4a39-937b-585c2497b2df"
992
+ },
993
+ "source": [
994
+ "Override generation arguments - no tokens are forced as decoder outputs (see [`forced_decoder_ids`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.forced_decoder_ids)), no tokens are suppressed during generation (see [`suppress_tokens`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.suppress_tokens)):"
995
+ ]
996
+ },
997
+ {
998
+ "cell_type": "code",
999
+ "execution_count": null,
1000
+ "id": "62038ba3-88ed-4fce-84db-338f50dcd04f",
1001
+ "metadata": {
1002
+ "id": "62038ba3-88ed-4fce-84db-338f50dcd04f"
1003
+ },
1004
+ "outputs": [],
1005
+ "source": [
1006
+ "model.config.forced_decoder_ids = None\n",
1007
+ "model.config.suppress_tokens = []"
1008
+ ]
1009
+ },
1010
+ {
1011
+ "cell_type": "markdown",
1012
+ "id": "2178dea4-80ca-47b6-b6ea-ba1915c90c06",
1013
+ "metadata": {
1014
+ "id": "2178dea4-80ca-47b6-b6ea-ba1915c90c06"
1015
+ },
1016
+ "source": [
1017
+ "### Define the Training Configuration"
1018
+ ]
1019
+ },
1020
+ {
1021
+ "cell_type": "markdown",
1022
+ "id": "c21af1e9-0188-4134-ac82-defc7bdcc436",
1023
+ "metadata": {
1024
+ "id": "c21af1e9-0188-4134-ac82-defc7bdcc436"
1025
+ },
1026
+ "source": [
1027
+ "In the final step, we define all the parameters related to training. For more detail on the training arguments, refer to the Seq2SeqTrainingArguments [docs](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments)."
1028
+ ]
1029
+ },
1030
+ {
1031
+ "cell_type": "code",
1032
+ "execution_count": null,
1033
+ "id": "0ae3e9af-97b7-4aa0-ae85-20b23b5bcb3a",
1034
+ "metadata": {
1035
+ "id": "0ae3e9af-97b7-4aa0-ae85-20b23b5bcb3a"
1036
+ },
1037
+ "outputs": [],
1038
+ "source": [
1039
+ "from transformers import Seq2SeqTrainingArguments\n",
1040
+ "\n",
1041
+ "training_args = Seq2SeqTrainingArguments(\n",
1042
+ " output_dir=\"./whisper-small-hi\", # change to a repo name of your choice\n",
1043
+ " per_device_train_batch_size=16,\n",
1044
+ " gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\n",
1045
+ " learning_rate=1e-5,\n",
1046
+ " warmup_steps=500,\n",
1047
+ " max_steps=4000,\n",
1048
+ " gradient_checkpointing=True,\n",
1049
+ " fp16=True,\n",
1050
+ " evaluation_strategy=\"steps\",\n",
1051
+ " per_device_eval_batch_size=8,\n",
1052
+ " predict_with_generate=True,\n",
1053
+ " generation_max_length=225,\n",
1054
+ " save_steps=1000,\n",
1055
+ " eval_steps=1000,\n",
1056
+ " logging_steps=25,\n",
1057
+ " report_to=[\"tensorboard\"],\n",
1058
+ " load_best_model_at_end=True,\n",
1059
+ " metric_for_best_model=\"wer\",\n",
1060
+ " greater_is_better=False,\n",
1061
+ " push_to_hub=True,\n",
1062
+ ")"
1063
+ ]
1064
+ },
1065
+ {
1066
+ "cell_type": "markdown",
1067
+ "id": "b3a944d8-3112-4552-82a0-be25988b3857",
1068
+ "metadata": {
1069
+ "id": "b3a944d8-3112-4552-82a0-be25988b3857"
1070
+ },
1071
+ "source": [
1072
+ "**Note**: if one does not want to upload the model checkpoints to the Hub, \n",
1073
+ "set `push_to_hub=False`."
1074
+ ]
1075
+ },
1076
+ {
1077
+ "cell_type": "markdown",
1078
+ "id": "bac29114-d226-4f54-97cf-8718c9f94e1e",
1079
+ "metadata": {
1080
+ "id": "bac29114-d226-4f54-97cf-8718c9f94e1e"
1081
+ },
1082
+ "source": [
1083
+ "We can forward the training arguments to the 🤗 Trainer along with our model,\n",
1084
+ "dataset, data collator and `compute_metrics` function:"
1085
+ ]
1086
+ },
1087
+ {
1088
+ "cell_type": "code",
1089
+ "execution_count": null,
1090
+ "id": "d546d7fe-0543-479a-b708-2ebabec19493",
1091
+ "metadata": {
1092
+ "id": "d546d7fe-0543-479a-b708-2ebabec19493"
1093
+ },
1094
+ "outputs": [],
1095
+ "source": [
1096
+ "from transformers import Seq2SeqTrainer\n",
1097
+ "\n",
1098
+ "trainer = Seq2SeqTrainer(\n",
1099
+ " args=training_args,\n",
1100
+ " model=model,\n",
1101
+ " train_dataset=common_voice[\"train\"],\n",
1102
+ " eval_dataset=common_voice[\"test\"],\n",
1103
+ " data_collator=data_collator,\n",
1104
+ " compute_metrics=compute_metrics,\n",
1105
+ " tokenizer=processor.feature_extractor,\n",
1106
+ ")"
1107
+ ]
1108
+ },
1109
+ {
1110
+ "cell_type": "markdown",
1111
+ "id": "uOrRhDGtN5S4",
1112
+ "metadata": {
1113
+ "id": "uOrRhDGtN5S4"
1114
+ },
1115
+ "source": [
1116
+ "We'll save the processor object once before starting training. Since the processor is not trainable, it won't change over the course of training:"
1117
+ ]
1118
+ },
1119
+ {
1120
+ "cell_type": "code",
1121
+ "execution_count": null,
1122
+ "id": "-2zQwMfEOBJq",
1123
+ "metadata": {
1124
+ "id": "-2zQwMfEOBJq"
1125
+ },
1126
+ "outputs": [],
1127
+ "source": [
1128
+ "processor.save_pretrained(training_args.output_dir)"
1129
+ ]
1130
+ },
1131
+ {
1132
+ "cell_type": "markdown",
1133
+ "id": "7f404cf9-4345-468c-8196-4bd101d9bd51",
1134
+ "metadata": {
1135
+ "id": "7f404cf9-4345-468c-8196-4bd101d9bd51"
1136
+ },
1137
+ "source": [
1138
+ "### Training"
1139
+ ]
1140
+ },
1141
+ {
1142
+ "cell_type": "markdown",
1143
+ "id": "5e8b8d56-5a70-4f68-bd2e-f0752d0bd112",
1144
+ "metadata": {
1145
+ "id": "5e8b8d56-5a70-4f68-bd2e-f0752d0bd112"
1146
+ },
1147
+ "source": [
1148
+ "Training will take approximately 5-10 hours depending on your GPU or the one \n",
1149
+ "allocated to this Google Colab. If using this Google Colab directly to \n",
1150
+ "fine-tune a Whisper model, you should make sure that training isn't \n",
1151
+ "interrupted due to inactivity. A simple workaround to prevent this is \n",
1152
+ "to paste the following code into the console of this tab (_right mouse click_ \n",
1153
+ "-> _inspect_ -> _Console tab_ -> _insert code_)."
1154
+ ]
1155
+ },
1156
+ {
1157
+ "cell_type": "markdown",
1158
+ "id": "890a63ed-e87b-4e53-a35a-6ec1eca560af",
1159
+ "metadata": {
1160
+ "id": "890a63ed-e87b-4e53-a35a-6ec1eca560af"
1161
+ },
1162
+ "source": [
1163
+ "```javascript\n",
1164
+ "function ConnectButton(){\n",
1165
+ " console.log(\"Connect pushed\"); \n",
1166
+ " document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click() \n",
1167
+ "}\n",
1168
+ "setInterval(ConnectButton, 60000);\n",
1169
+ "```"
1170
+ ]
1171
+ },
1172
+ {
1173
+ "cell_type": "markdown",
1174
+ "id": "5a55168b-2f46-4678-afa0-ff22257ec06d",
1175
+ "metadata": {
1176
+ "id": "5a55168b-2f46-4678-afa0-ff22257ec06d"
1177
+ },
1178
+ "source": [
1179
+ "The peak GPU memory for the given training configuration is approximately 15.8GB. \n",
1180
+ "Depending on the GPU allocated to the Google Colab, it is possible that you will encounter a CUDA `\"out-of-memory\"` error when you launch training. \n",
1181
+ "In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 \n",
1182
+ "and employ [`gradient_accumulation_steps`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.gradient_accumulation_steps)\n",
1183
+ "to compensate.\n",
1184
+ "\n",
1185
+ "To launch training, simply execute:"
1186
+ ]
1187
+ },
1188
+ {
1189
+ "cell_type": "code",
1190
+ "execution_count": null,
1191
+ "id": "ee8b7b8e-1c9a-4d77-9137-1778a629e6de",
1192
+ "metadata": {
1193
+ "id": "ee8b7b8e-1c9a-4d77-9137-1778a629e6de"
1194
+ },
1195
+ "outputs": [],
1196
+ "source": [
1197
+ "trainer.train()"
1198
+ ]
1199
+ },
1200
+ {
1201
+ "cell_type": "markdown",
1202
+ "id": "810ced54-7187-4a06-b2fe-ba6dcca94dc3",
1203
+ "metadata": {
1204
+ "id": "810ced54-7187-4a06-b2fe-ba6dcca94dc3"
1205
+ },
1206
+ "source": [
1207
+ "Our best WER is 32.0% - not bad for 8h of training data! We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):"
1208
+ ]
1209
+ },
1210
+ {
1211
+ "cell_type": "code",
1212
+ "execution_count": null,
1213
+ "id": "c704f91e-241b-48c9-b8e0-f0da396a9663",
1214
+ "metadata": {
1215
+ "id": "c704f91e-241b-48c9-b8e0-f0da396a9663"
1216
+ },
1217
+ "outputs": [],
1218
+ "source": [
1219
+ "kwargs = {\n",
1220
+ " \"dataset_tags\": \"mozilla-foundation/common_voice_11_0\",\n",
1221
+ " \"dataset\": \"Common Voice 11.0\", # a 'pretty' name for the training dataset\n",
1222
+ " \"dataset_args\": \"config: hi, split: test\",\n",
1223
+ " \"language\": \"hi\",\n",
1224
+ " \"model_name\": \"Whisper Small Hi - Sanchit Gandhi\", # a 'pretty' name for our model\n",
1225
+ " \"finetuned_from\": \"openai/whisper-small\",\n",
1226
+ " \"tasks\": \"automatic-speech-recognition\",\n",
1227
+ " \"tags\": \"hf-asr-leaderboard\",\n",
1228
+ "}"
1229
+ ]
1230
+ },
1231
+ {
1232
+ "cell_type": "markdown",
1233
+ "id": "090d676a-f944-4297-a938-a40eda0b2b68",
1234
+ "metadata": {
1235
+ "id": "090d676a-f944-4297-a938-a40eda0b2b68"
1236
+ },
1237
+ "source": [
1238
+ "The training results can now be uploaded to the Hub. To do so, execute the `push_to_hub` command and save the preprocessor object we created:"
1239
+ ]
1240
+ },
1241
+ {
1242
+ "cell_type": "code",
1243
+ "execution_count": null,
1244
+ "id": "d7030622-caf7-4039-939b-6195cdaa2585",
1245
+ "metadata": {
1246
+ "id": "d7030622-caf7-4039-939b-6195cdaa2585"
1247
+ },
1248
+ "outputs": [],
1249
+ "source": [
1250
+ "trainer.push_to_hub(**kwargs)"
1251
+ ]
1252
+ },
1253
+ {
1254
+ "cell_type": "markdown",
1255
+ "id": "34d4360d-5721-426e-b6ac-178f833fedeb",
1256
+ "metadata": {
1257
+ "id": "34d4360d-5721-426e-b6ac-178f833fedeb"
1258
+ },
1259
+ "source": [
1260
+ "## Building a Demo"
1261
+ ]
1262
+ },
1263
+ {
1264
+ "cell_type": "markdown",
1265
+ "id": "e65489b7-18d1-447c-ba69-cd28dd80dad9",
1266
+ "metadata": {
1267
+ "id": "e65489b7-18d1-447c-ba69-cd28dd80dad9"
1268
+ },
1269
+ "source": [
1270
+ "Now that we've fine-tuned our model we can build a demo to show \n",
1271
+ "off its ASR capabilities! We'll make use of 🤗 Transformers \n",
1272
+ "`pipeline`, which will take care of the entire ASR pipeline, \n",
1273
+ "right from pre-processing the audio inputs to decoding the \n",
1274
+ "model predictions.\n",
1275
+ "\n",
1276
+ "Running the example below will generate a Gradio demo where we \n",
1277
+ "can record speech through the microphone of our computer and input it to \n",
1278
+ "our fine-tuned Whisper model to transcribe the corresponding text:"
1279
+ ]
1280
+ },
1281
+ {
1282
+ "cell_type": "code",
1283
+ "execution_count": null,
1284
+ "id": "e0ace3aa-1ef3-45cb-933f-6ddca037c5aa",
1285
+ "metadata": {
1286
+ "id": "e0ace3aa-1ef3-45cb-933f-6ddca037c5aa"
1287
+ },
1288
+ "outputs": [],
1289
+ "source": [
1290
+ "from transformers import pipeline\n",
1291
+ "import gradio as gr\n",
1292
+ "\n",
1293
+ "pipe = pipeline(model=\"sanchit-gandhi/whisper-small-hi\") # change to \"your-username/the-name-you-picked\"\n",
1294
+ "\n",
1295
+ "def transcribe(audio):\n",
1296
+ " text = pipe(audio)[\"text\"]\n",
1297
+ " return text\n",
1298
+ "\n",
1299
+ "iface = gr.Interface(\n",
1300
+ " fn=transcribe, \n",
1301
+ " inputs=gr.Audio(source=\"microphone\", type=\"filepath\"), \n",
1302
+ " outputs=\"text\",\n",
1303
+ " title=\"Whisper Small Hindi\",\n",
1304
+ " description=\"Realtime demo for Hindi speech recognition using a fine-tuned Whisper small model.\",\n",
1305
+ ")\n",
1306
+ "\n",
1307
+ "iface.launch()"
1308
+ ]
1309
+ },
1310
+ {
1311
+ "cell_type": "markdown",
1312
+ "id": "ca743fbd-602c-48d4-ba8d-a2fe60af64ba",
1313
+ "metadata": {
1314
+ "id": "ca743fbd-602c-48d4-ba8d-a2fe60af64ba"
1315
+ },
1316
+ "source": [
1317
+ "## Closing Remarks"
1318
+ ]
1319
+ },
1320
+ {
1321
+ "cell_type": "markdown",
1322
+ "id": "7f737783-2870-4e35-aa11-86a42d7d997a",
1323
+ "metadata": {
1324
+ "id": "7f737783-2870-4e35-aa11-86a42d7d997a"
1325
+ },
1326
+ "source": [
1327
+ "In this blog, we covered a step-by-step guide on fine-tuning Whisper for multilingual ASR \n",
1328
+ "using 🤗 Datasets, Transformers and the Hugging Face Hub. For more details on the Whisper model, the Common Voice dataset and the theory behind fine-tuning, refere to the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). If you're interested in fine-tuning other \n",
1329
+ "Transformers models, both for English and multilingual ASR, be sure to check out the \n",
1330
+ "examples scripts at [examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)."
1331
+ ]
1332
+ }
1333
+ ],
1334
+ "metadata": {
1335
+ "colab": {
1336
+ "provenance": []
1337
+ },
1338
+ "kernelspec": {
1339
+ "display_name": "Python 3.9.13",
1340
+ "language": "python",
1341
+ "name": "python3"
1342
+ },
1343
+ "language_info": {
1344
+ "codemirror_mode": {
1345
+ "name": "ipython",
1346
+ "version": 3
1347
+ },
1348
+ "file_extension": ".py",
1349
+ "mimetype": "text/x-python",
1350
+ "name": "python",
1351
+ "nbconvert_exporter": "python",
1352
+ "pygments_lexer": "ipython3",
1353
+ "version": "3.9.13"
1354
+ },
1355
+ "vscode": {
1356
+ "interpreter": {
1357
+ "hash": "38cca0c38332a56087b24af0bc80247f4fced29cb4f7f437d91dc159adec9c4e"
1358
+ }
1359
+ }
1360
+ },
1361
+ "nbformat": 4,
1362
+ "nbformat_minor": 5
1363
+ }
fine_tune_whisper_mac.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
imam_short_ayahs.tsv DELETED
The diff for this file is too large to render. See raw diff
 
users_mixed.tsv → metadata.csv RENAMED
The diff for this file is too large to render. See raw diff