Scrya committed on
Commit
f466825
1 Parent(s): fe99682

Training in progress, step 1000

.gitignore ADDED
@@ -0,0 +1 @@
1
+ checkpoint-*/
.ipynb_checkpoints/fine-tune-whisper-non-streaming-id-checkpoint.ipynb ADDED
@@ -0,0 +1,1930 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6",
6
+ "metadata": {
7
+ "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6"
8
+ },
9
+ "source": [
10
+ "# Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers"
11
+ ]
12
+ },
13
+ {
14
+ "cell_type": "markdown",
15
+ "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a",
16
+ "metadata": {
17
+ "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a"
18
+ },
19
+ "source": [
20
+ "In this Colab, we present a step-by-step guide on how to fine-tune Whisper \n",
21
+ "for any multilingual ASR dataset using Hugging Face 🤗 Transformers. This is a \n",
22
+ "more \"hands-on\" version of the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). \n",
23
+ "For a more in-depth explanation of Whisper, the Common Voice dataset and the theory behind fine-tuning, the reader is advised to refer to the blog post."
24
+ ]
25
+ },
26
+ {
27
+ "cell_type": "markdown",
28
+ "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e",
29
+ "metadata": {
30
+ "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e"
31
+ },
32
+ "source": [
33
+ "## Introduction"
34
+ ]
35
+ },
36
+ {
37
+ "cell_type": "markdown",
38
+ "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0",
39
+ "metadata": {
40
+ "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0"
41
+ },
42
+ "source": [
43
+ "Whisper is a pre-trained model for automatic speech recognition (ASR) \n",
44
+ "published in [September 2022](https://openai.com/blog/whisper/) by the authors \n",
45
+ "Alec Radford et al. from OpenAI. Unlike many of its predecessors, such as \n",
46
+ "[Wav2Vec 2.0](https://arxiv.org/abs/2006.11477), which are pre-trained \n",
47
+ "on un-labelled audio data, Whisper is pre-trained on a vast quantity of \n",
48
+ "**labelled** audio-transcription data, 680,000 hours to be precise. \n",
49
+ "This is an order of magnitude more data than the un-labelled audio data used \n",
50
+ "to train Wav2Vec 2.0 (60,000 hours). What is more, 117,000 hours of this \n",
51
+ "pre-training data is multilingual ASR data. This results in checkpoints \n",
52
+ "that can be applied to over 96 languages, many of which are considered \n",
53
+ "_low-resource_.\n",
54
+ "\n",
55
+ "When scaled to 680,000 hours of labelled pre-training data, Whisper models \n",
56
+ "demonstrate a strong ability to generalise to many datasets and domains.\n",
57
+ "The pre-trained checkpoints achieve competitive results to state-of-the-art \n",
58
+ "ASR systems, with near 3% word error rate (WER) on the test-clean subset of \n",
59
+ "LibriSpeech ASR and a new state-of-the-art on TED-LIUM with 4.7% WER (_c.f._ \n",
60
+ "Table 8 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).\n",
61
+ "The extensive multilingual ASR knowledge acquired by Whisper during pre-training \n",
62
+ "can be leveraged for other low-resource languages; through fine-tuning, the \n",
63
+ "pre-trained checkpoints can be adapted for specific datasets and languages \n",
64
+ "to further improve upon these results. We'll show just how Whisper can be fine-tuned \n",
65
+ "for low-resource languages in this Colab."
66
+ ]
67
+ },
68
+ {
69
+ "cell_type": "markdown",
70
+ "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72",
71
+ "metadata": {
72
+ "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72"
73
+ },
74
+ "source": [
75
+ "<figure>\n",
76
+ "<img src=\"https://raw.githubusercontent.com/sanchit-gandhi/notebooks/main/whisper_architecture.svg\" alt=\"Trulli\" style=\"width:100%\">\n",
77
+ "<figcaption align = \"center\"><b>Figure 1:</b> Whisper model. The architecture \n",
78
+ "follows the standard Transformer-based encoder-decoder model. A \n",
79
+ "log-Mel spectrogram is input to the encoder. The last encoder \n",
80
+ "hidden states are input to the decoder via cross-attention mechanisms. The \n",
81
+ "decoder autoregressively predicts text tokens, jointly conditional on the \n",
82
+ "encoder hidden states and previously predicted tokens. Figure source: \n",
83
+ "<a href=\"https://openai.com/blog/whisper/\">OpenAI Whisper Blog</a>.</figcaption>\n",
84
+ "</figure>"
85
+ ]
86
+ },
87
+ {
88
+ "cell_type": "markdown",
89
+ "id": "21b6316e-8a55-4549-a154-66d3da2ab74a",
90
+ "metadata": {
91
+ "id": "21b6316e-8a55-4549-a154-66d3da2ab74a"
92
+ },
93
+ "source": [
94
+ "The Whisper checkpoints come in five configurations of varying model sizes.\n",
95
+ "The smallest four are trained on either English-only or multilingual data.\n",
96
+ "The largest checkpoint is multilingual only. All nine of the pre-trained checkpoints \n",
97
+ "are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The \n",
98
+ "checkpoints are summarised in the following table with links to the models on the Hub:\n",
99
+ "\n",
100
+ "| Size | Layers | Width | Heads | Parameters | English-only | Multilingual |\n",
101
+ "|--------|--------|-------|-------|------------|------------------------------------------------------|---------------------------------------------------|\n",
102
+ "| tiny | 4 | 384 | 6 | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny.) |\n",
103
+ "| base | 6 | 512 | 8 | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |\n",
104
+ "| small | 12 | 768 | 12 | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |\n",
105
+ "| medium | 24 | 1024 | 16 | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |\n",
106
+ "| large | 32 | 1280 | 20 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |\n",
107
+ "\n",
108
+ "For demonstration purposes, we'll fine-tune the multilingual version of the \n",
109
+ "[`\"small\"`](https://huggingface.co/openai/whisper-small) checkpoint with 244M params (~= 1GB). \n",
110
+ "As for our data, we'll train and evaluate our system on a low-resource language \n",
111
+ "taken from the [Common Voice](https://huggingface.co/datasets/mozilla-foundation/fleurs_11_0)\n",
112
+ "dataset. We'll show that with as little as 8 hours of fine-tuning data, we can achieve \n",
113
+ "strong performance in this language."
114
+ ]
115
+ },
116
+ {
117
+ "cell_type": "markdown",
118
+ "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a",
119
+ "metadata": {
120
+ "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a"
121
+ },
122
+ "source": [
123
+ "------------------------------------------------------------------------\n",
124
+ "\n",
125
+ "\\\\({}^1\\\\) The name Whisper follows from the acronym “WSPSR”, which stands for “Web-scale Supervised Pre-training for Speech Recognition”."
126
+ ]
127
+ },
128
+ {
129
+ "cell_type": "markdown",
130
+ "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0",
131
+ "metadata": {
132
+ "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0"
133
+ },
134
+ "source": [
135
+ "## Load Dataset\n",
136
+ "Loading MS-MY Dataset from FLEURS.\n",
137
+ "Combine train and validation set."
138
+ ]
139
+ },
140
+ {
141
+ "cell_type": "code",
142
+ "execution_count": 1,
143
+ "id": "a2787582-554f-44ce-9f38-4180a5ed6b44",
144
+ "metadata": {
145
+ "id": "a2787582-554f-44ce-9f38-4180a5ed6b44"
146
+ },
147
+ "outputs": [
148
+ {
149
+ "data": {
150
+ "application/vnd.jupyter.widget-view+json": {
151
+ "model_id": "6ff7d2f90d6046cfbd8532751c970e97",
152
+ "version_major": 2,
153
+ "version_minor": 0
154
+ },
155
+ "text/plain": [
156
+ "Downloading builder script: 0%| | 0.00/12.8k [00:00<?, ?B/s]"
157
+ ]
158
+ },
159
+ "metadata": {},
160
+ "output_type": "display_data"
161
+ },
162
+ {
163
+ "data": {
164
+ "application/vnd.jupyter.widget-view+json": {
165
+ "model_id": "0f7140f926e04b17a32ccdc6da15eb66",
166
+ "version_major": 2,
167
+ "version_minor": 0
168
+ },
169
+ "text/plain": [
170
+ "Downloading readme: 0%| | 0.00/11.2k [00:00<?, ?B/s]"
171
+ ]
172
+ },
173
+ "metadata": {},
174
+ "output_type": "display_data"
175
+ },
176
+ {
177
+ "name": "stdout",
178
+ "output_type": "stream",
179
+ "text": [
180
+ "Downloading and preparing dataset fleurs/id_id to /home/ubuntu/.cache/huggingface/datasets/google___fleurs/id_id/2.0.0/aabb39fb29739c495517ac904e2886819b6e344702f0a5b5283cb178b087c94a...\n"
181
+ ]
182
+ },
183
+ {
184
+ "data": {
185
+ "application/vnd.jupyter.widget-view+json": {
186
+ "model_id": "51b235a061c540a08d6e0ccd044666d0",
187
+ "version_major": 2,
188
+ "version_minor": 0
189
+ },
190
+ "text/plain": [
191
+ "Downloading data: 0%| | 0.00/64.8M [00:00<?, ?B/s]"
192
+ ]
193
+ },
194
+ "metadata": {},
195
+ "output_type": "display_data"
196
+ },
197
+ {
198
+ "data": {
199
+ "application/vnd.jupyter.widget-view+json": {
200
+ "model_id": "321e7e8c3a5941938b9e7105cd2c5c57",
201
+ "version_major": 2,
202
+ "version_minor": 0
203
+ },
204
+ "text/plain": [
205
+ "Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]"
206
+ ]
207
+ },
208
+ "metadata": {},
209
+ "output_type": "display_data"
210
+ },
211
+ {
212
+ "data": {
213
+ "application/vnd.jupyter.widget-view+json": {
214
+ "model_id": "dde8e56f96f549c1ae0ed258af583749",
215
+ "version_major": 2,
216
+ "version_minor": 0
217
+ },
218
+ "text/plain": [
219
+ "Downloading data: 0%| | 0.00/2.37G [00:00<?, ?B/s]"
220
+ ]
221
+ },
222
+ "metadata": {},
223
+ "output_type": "display_data"
224
+ },
225
+ {
226
+ "data": {
227
+ "application/vnd.jupyter.widget-view+json": {
228
+ "model_id": "69f964576ee743ddac511031cd5090c2",
229
+ "version_major": 2,
230
+ "version_minor": 0
231
+ },
232
+ "text/plain": [
233
+ "Extracting data files: 0%| | 0/1 [00:00<?, ?it/s]"
234
+ ]
235
+ },
236
+ "metadata": {},
237
+ "output_type": "display_data"
238
+ },
239
+ {
240
+ "data": {
241
+ "application/vnd.jupyter.widget-view+json": {
242
+ "model_id": "",
243
+ "version_major": 2,
244
+ "version_minor": 0
245
+ },
246
+ "text/plain": [
247
+ "Generating train split: 0 examples [00:00, ? examples/s]"
248
+ ]
249
+ },
250
+ "metadata": {},
251
+ "output_type": "display_data"
252
+ },
253
+ {
254
+ "data": {
255
+ "application/vnd.jupyter.widget-view+json": {
256
+ "model_id": "",
257
+ "version_major": 2,
258
+ "version_minor": 0
259
+ },
260
+ "text/plain": [
261
+ "Generating validation split: 0 examples [00:00, ? examples/s]"
262
+ ]
263
+ },
264
+ "metadata": {},
265
+ "output_type": "display_data"
266
+ },
267
+ {
268
+ "data": {
269
+ "application/vnd.jupyter.widget-view+json": {
270
+ "model_id": "",
271
+ "version_major": 2,
272
+ "version_minor": 0
273
+ },
274
+ "text/plain": [
275
+ "Generating test split: 0 examples [00:00, ? examples/s]"
276
+ ]
277
+ },
278
+ "metadata": {},
279
+ "output_type": "display_data"
280
+ },
281
+ {
282
+ "name": "stdout",
283
+ "output_type": "stream",
284
+ "text": [
285
+ "Dataset fleurs downloaded and prepared to /home/ubuntu/.cache/huggingface/datasets/google___fleurs/id_id/2.0.0/aabb39fb29739c495517ac904e2886819b6e344702f0a5b5283cb178b087c94a. Subsequent calls will reuse this data.\n"
286
+ ]
287
+ },
288
+ {
289
+ "name": "stderr",
290
+ "output_type": "stream",
291
+ "text": [
292
+ "Found cached dataset fleurs (/home/ubuntu/.cache/huggingface/datasets/google___fleurs/id_id/2.0.0/aabb39fb29739c495517ac904e2886819b6e344702f0a5b5283cb178b087c94a)\n"
293
+ ]
294
+ },
295
+ {
296
+ "name": "stdout",
297
+ "output_type": "stream",
298
+ "text": [
299
+ "DatasetDict({\n",
300
+ " train: Dataset({\n",
301
+ " features: ['audio', 'transcription'],\n",
302
+ " num_rows: 2929\n",
303
+ " })\n",
304
+ " test: Dataset({\n",
305
+ " features: ['audio', 'transcription'],\n",
306
+ " num_rows: 687\n",
307
+ " })\n",
308
+ "})\n"
309
+ ]
310
+ }
311
+ ],
312
+ "source": [
313
+ "from datasets import load_dataset, DatasetDict\n",
314
+ "\n",
315
+ "fleurs = DatasetDict()\n",
316
+ "fleurs[\"train\"] = load_dataset(\"google/fleurs\", \"id_id\", split=\"train+validation\", use_auth_token=True)\n",
317
+ "fleurs[\"test\"] = load_dataset(\"google/fleurs\", \"id_id\", split=\"test\", use_auth_token=True)\n",
318
+ "\n",
319
+ "fleurs = fleurs.remove_columns([\"id\", \"num_samples\", \"path\", \"raw_transcription\", \"gender\", \"lang_id\", \"language\", \"lang_group_id\"])\n",
320
+ "\n",
321
+ "print(fleurs)"
322
+ ]
323
+ },
324
+ {
325
+ "cell_type": "code",
326
+ "execution_count": 2,
327
+ "id": "d087b451",
328
+ "metadata": {},
329
+ "outputs": [
330
+ {
331
+ "data": {
332
+ "application/vnd.jupyter.widget-view+json": {
333
+ "model_id": "5b389a3966884c0d8b9c9c58d44f4b51",
334
+ "version_major": 2,
335
+ "version_minor": 0
336
+ },
337
+ "text/plain": [
338
+ "Downloading builder script: 0%| | 0.00/8.30k [00:00<?, ?B/s]"
339
+ ]
340
+ },
341
+ "metadata": {},
342
+ "output_type": "display_data"
343
+ },
344
+ {
345
+ "data": {
346
+ "application/vnd.jupyter.widget-view+json": {
347
+ "model_id": "dc2587eb7d46437786cf2e17bdc641a4",
348
+ "version_major": 2,
349
+ "version_minor": 0
350
+ },
351
+ "text/plain": [
352
+ "Downloading readme: 0%| | 0.00/12.2k [00:00<?, ?B/s]"
353
+ ]
354
+ },
355
+ "metadata": {},
356
+ "output_type": "display_data"
357
+ },
358
+ {
359
+ "data": {
360
+ "application/vnd.jupyter.widget-view+json": {
361
+ "model_id": "00156957fb5f4034b31e0e37589be1ae",
362
+ "version_major": 2,
363
+ "version_minor": 0
364
+ },
365
+ "text/plain": [
366
+ "Downloading extra modules: 0%| | 0.00/3.44k [00:00<?, ?B/s]"
367
+ ]
368
+ },
369
+ "metadata": {},
370
+ "output_type": "display_data"
371
+ },
372
+ {
373
+ "data": {
374
+ "application/vnd.jupyter.widget-view+json": {
375
+ "model_id": "0e050456c9b44152898ec43806c4ace8",
376
+ "version_major": 2,
377
+ "version_minor": 0
378
+ },
379
+ "text/plain": [
380
+ "Downloading extra modules: 0%| | 0.00/60.9k [00:00<?, ?B/s]"
381
+ ]
382
+ },
383
+ "metadata": {},
384
+ "output_type": "display_data"
385
+ },
386
+ {
387
+ "name": "stdout",
388
+ "output_type": "stream",
389
+ "text": [
390
+ "Downloading and preparing dataset common_voice_11_0/id to /home/ubuntu/.cache/huggingface/datasets/mozilla-foundation___common_voice_11_0/id/11.0.0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f...\n"
391
+ ]
392
+ },
393
+ {
394
+ "data": {
395
+ "application/vnd.jupyter.widget-view+json": {
396
+ "model_id": "3be7b2ded271457e86f13e6ba8df36f1",
397
+ "version_major": 2,
398
+ "version_minor": 0
399
+ },
400
+ "text/plain": [
401
+ "Downloading data: 0%| | 0.00/12.2k [00:00<?, ?B/s]"
402
+ ]
403
+ },
404
+ "metadata": {},
405
+ "output_type": "display_data"
406
+ },
407
+ {
408
+ "data": {
409
+ "application/vnd.jupyter.widget-view+json": {
410
+ "model_id": "b2a2ba92d3cf4d9ea61e24b4293aa72f",
411
+ "version_major": 2,
412
+ "version_minor": 0
413
+ },
414
+ "text/plain": [
415
+ "Downloading data files: 0%| | 0/5 [00:00<?, ?it/s]"
416
+ ]
417
+ },
418
+ "metadata": {},
419
+ "output_type": "display_data"
420
+ },
421
+ {
422
+ "data": {
423
+ "application/vnd.jupyter.widget-view+json": {
424
+ "model_id": "8d6713f6edfe4623904be0b40a0e6b61",
425
+ "version_major": 2,
426
+ "version_minor": 0
427
+ },
428
+ "text/plain": [
429
+ "Downloading data: 0%| | 0.00/173M [00:00<?, ?B/s]"
430
+ ]
431
+ },
432
+ "metadata": {},
433
+ "output_type": "display_data"
434
+ },
435
+ {
436
+ "data": {
437
+ "application/vnd.jupyter.widget-view+json": {
438
+ "model_id": "46106464a869486081fe86a7daa26b38",
439
+ "version_major": 2,
440
+ "version_minor": 0
441
+ },
442
+ "text/plain": [
443
+ "Downloading data: 0%| | 0.00/100M [00:00<?, ?B/s]"
444
+ ]
445
+ },
446
+ "metadata": {},
447
+ "output_type": "display_data"
448
+ },
449
+ {
450
+ "data": {
451
+ "application/vnd.jupyter.widget-view+json": {
452
+ "model_id": "0a5aa6da4e94421ba4b7f96b6a763f67",
453
+ "version_major": 2,
454
+ "version_minor": 0
455
+ },
456
+ "text/plain": [
457
+ "Downloading data: 0%| | 0.00/114M [00:00<?, ?B/s]"
458
+ ]
459
+ },
460
+ "metadata": {},
461
+ "output_type": "display_data"
462
+ },
463
+ {
464
+ "data": {
465
+ "application/vnd.jupyter.widget-view+json": {
466
+ "model_id": "8652c6a3883f4cfd87e6502c2d4d9142",
467
+ "version_major": 2,
468
+ "version_minor": 0
469
+ },
470
+ "text/plain": [
471
+ "Downloading data: 0%| | 0.00/568M [00:00<?, ?B/s]"
472
+ ]
473
+ },
474
+ "metadata": {},
475
+ "output_type": "display_data"
476
+ },
477
+ {
478
+ "data": {
479
+ "application/vnd.jupyter.widget-view+json": {
480
+ "model_id": "45a448f7ceaa44ec811e1652cf678097",
481
+ "version_major": 2,
482
+ "version_minor": 0
483
+ },
484
+ "text/plain": [
485
+ "Downloading data: 0%| | 0.00/64.7M [00:00<?, ?B/s]"
486
+ ]
487
+ },
488
+ "metadata": {},
489
+ "output_type": "display_data"
490
+ },
491
+ {
492
+ "data": {
493
+ "application/vnd.jupyter.widget-view+json": {
494
+ "model_id": "8385bd45709343a4a9494e79cd99c881",
495
+ "version_major": 2,
496
+ "version_minor": 0
497
+ },
498
+ "text/plain": [
499
+ "Extracting data files: 0%| | 0/5 [00:00<?, ?it/s]"
500
+ ]
501
+ },
502
+ "metadata": {},
503
+ "output_type": "display_data"
504
+ },
505
+ {
506
+ "data": {
507
+ "application/vnd.jupyter.widget-view+json": {
508
+ "model_id": "aa985a65b95943ccb5b893301b248d96",
509
+ "version_major": 2,
510
+ "version_minor": 0
511
+ },
512
+ "text/plain": [
513
+ "Downloading data files: 0%| | 0/5 [00:00<?, ?it/s]"
514
+ ]
515
+ },
516
+ "metadata": {},
517
+ "output_type": "display_data"
518
+ },
519
+ {
520
+ "data": {
521
+ "application/vnd.jupyter.widget-view+json": {
522
+ "model_id": "033df621f5564796858e2a106923b840",
523
+ "version_major": 2,
524
+ "version_minor": 0
525
+ },
526
+ "text/plain": [
527
+ "Downloading data: 0%| | 0.00/1.23M [00:00<?, ?B/s]"
528
+ ]
529
+ },
530
+ "metadata": {},
531
+ "output_type": "display_data"
532
+ },
533
+ {
534
+ "data": {
535
+ "application/vnd.jupyter.widget-view+json": {
536
+ "model_id": "852e9bfe7ad9457c95985dcad30bba64",
537
+ "version_major": 2,
538
+ "version_minor": 0
539
+ },
540
+ "text/plain": [
541
+ "Downloading data: 0%| | 0.00/727k [00:00<?, ?B/s]"
542
+ ]
543
+ },
544
+ "metadata": {},
545
+ "output_type": "display_data"
546
+ },
547
+ {
548
+ "data": {
549
+ "application/vnd.jupyter.widget-view+json": {
550
+ "model_id": "5a66fccae97f4e3f94437f99c6224b09",
551
+ "version_major": 2,
552
+ "version_minor": 0
553
+ },
554
+ "text/plain": [
555
+ "Downloading data: 0%| | 0.00/783k [00:00<?, ?B/s]"
556
+ ]
557
+ },
558
+ "metadata": {},
559
+ "output_type": "display_data"
560
+ },
561
+ {
562
+ "data": {
563
+ "application/vnd.jupyter.widget-view+json": {
564
+ "model_id": "ac47cc0067a34812b489c9ca78eace20",
565
+ "version_major": 2,
566
+ "version_minor": 0
567
+ },
568
+ "text/plain": [
569
+ "Downloading data: 0%| | 0.00/5.24M [00:00<?, ?B/s]"
570
+ ]
571
+ },
572
+ "metadata": {},
573
+ "output_type": "display_data"
574
+ },
575
+ {
576
+ "data": {
577
+ "application/vnd.jupyter.widget-view+json": {
578
+ "model_id": "813b5d12a3104979b81a73e55ecd4f2a",
579
+ "version_major": 2,
580
+ "version_minor": 0
581
+ },
582
+ "text/plain": [
583
+ "Downloading data: 0%| | 0.00/557k [00:00<?, ?B/s]"
584
+ ]
585
+ },
586
+ "metadata": {},
587
+ "output_type": "display_data"
588
+ },
589
+ {
590
+ "data": {
591
+ "application/vnd.jupyter.widget-view+json": {
592
+ "model_id": "bac51512553440c1993e66cfde92965f",
593
+ "version_major": 2,
594
+ "version_minor": 0
595
+ },
596
+ "text/plain": [
597
+ "Extracting data files: 0%| | 0/5 [00:00<?, ?it/s]"
598
+ ]
599
+ },
600
+ "metadata": {},
601
+ "output_type": "display_data"
602
+ },
603
+ {
604
+ "data": {
605
+ "application/vnd.jupyter.widget-view+json": {
606
+ "model_id": "",
607
+ "version_major": 2,
608
+ "version_minor": 0
609
+ },
610
+ "text/plain": [
611
+ "Generating train split: 0 examples [00:00, ? examples/s]"
612
+ ]
613
+ },
614
+ "metadata": {},
615
+ "output_type": "display_data"
616
+ },
617
+ {
618
+ "name": "stderr",
619
+ "output_type": "stream",
620
+ "text": [
621
+ "\n",
622
+ "Reading metadata...: 5048it [00:00, 145536.85it/s]\n"
623
+ ]
624
+ },
625
+ {
626
+ "data": {
627
+ "application/vnd.jupyter.widget-view+json": {
628
+ "model_id": "",
629
+ "version_major": 2,
630
+ "version_minor": 0
631
+ },
632
+ "text/plain": [
633
+ "Generating validation split: 0 examples [00:00, ? examples/s]"
634
+ ]
635
+ },
636
+ "metadata": {},
637
+ "output_type": "display_data"
638
+ },
639
+ {
640
+ "name": "stderr",
641
+ "output_type": "stream",
642
+ "text": [
643
+ "\n",
644
+ "\n",
645
+ "Reading metadata...: 3226it [00:00, 151067.62it/s]\n"
646
+ ]
647
+ },
648
+ {
649
+ "data": {
650
+ "application/vnd.jupyter.widget-view+json": {
651
+ "model_id": "",
652
+ "version_major": 2,
653
+ "version_minor": 0
654
+ },
655
+ "text/plain": [
656
+ "Generating test split: 0 examples [00:00, ? examples/s]"
657
+ ]
658
+ },
659
+ "metadata": {},
660
+ "output_type": "display_data"
661
+ },
662
+ {
663
+ "name": "stderr",
664
+ "output_type": "stream",
665
+ "text": [
666
+ "\n",
667
+ "\n",
668
+ "\n",
669
+ "Reading metadata...: 3618it [00:00, 154872.14it/s]\n"
670
+ ]
671
+ },
672
+ {
673
+ "data": {
674
+ "application/vnd.jupyter.widget-view+json": {
675
+ "model_id": "",
676
+ "version_major": 2,
677
+ "version_minor": 0
678
+ },
679
+ "text/plain": [
680
+ "Generating other split: 0 examples [00:00, ? examples/s]"
681
+ ]
682
+ },
683
+ "metadata": {},
684
+ "output_type": "display_data"
685
+ },
686
+ {
687
+ "name": "stderr",
688
+ "output_type": "stream",
689
+ "text": [
690
+ "\n",
691
+ "\n",
692
+ "\n",
693
+ "\n",
694
+ "Reading metadata...: 0it [00:00, ?it/s]\u001b[A\u001b[A\u001b[A\u001b[A\n",
695
+ "\n",
696
+ "\n",
697
+ "\n",
698
+ "Reading metadata...: 24238it [00:00, 154243.20it/s]\u001b[A\u001b[A\u001b[A\u001b[A\n"
699
+ ]
700
+ },
701
+ {
702
+ "data": {
703
+ "application/vnd.jupyter.widget-view+json": {
704
+ "model_id": "",
705
+ "version_major": 2,
706
+ "version_minor": 0
707
+ },
708
+ "text/plain": [
709
+ "Generating invalidated split: 0 examples [00:00, ? examples/s]"
710
+ ]
711
+ },
712
+ "metadata": {},
713
+ "output_type": "display_data"
714
+ },
715
+ {
716
+ "name": "stderr",
717
+ "output_type": "stream",
718
+ "text": [
719
+ "\n",
720
+ "\n",
721
+ "\n",
722
+ "\n",
723
+ "\n",
724
+ "Reading metadata...: 2466it [00:00, 150116.16it/s]A\u001b[A\n"
725
+ ]
726
+ },
727
+ {
728
+ "name": "stdout",
729
+ "output_type": "stream",
730
+ "text": [
731
+ "Dataset common_voice_11_0 downloaded and prepared to /home/ubuntu/.cache/huggingface/datasets/mozilla-foundation___common_voice_11_0/id/11.0.0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f. Subsequent calls will reuse this data.\n"
732
+ ]
733
+ },
734
+ {
735
+ "name": "stderr",
736
+ "output_type": "stream",
737
+ "text": [
738
+ "Found cached dataset common_voice_11_0 (/home/ubuntu/.cache/huggingface/datasets/mozilla-foundation___common_voice_11_0/id/11.0.0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f)\n"
739
+ ]
740
+ },
741
+ {
742
+ "name": "stdout",
743
+ "output_type": "stream",
744
+ "text": [
745
+ "DatasetDict({\n",
746
+ " train: Dataset({\n",
747
+ " features: ['audio', 'transcription'],\n",
748
+ " num_rows: 8274\n",
749
+ " })\n",
750
+ " test: Dataset({\n",
751
+ " features: ['audio', 'transcription'],\n",
752
+ " num_rows: 3618\n",
753
+ " })\n",
754
+ "})\n"
755
+ ]
756
+ }
757
+ ],
758
+ "source": [
759
+ "cv = DatasetDict()\n",
760
+ "cv[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"id\", split=\"train+validation\", use_auth_token=True)\n",
761
+ "cv[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"id\", split=\"test\", use_auth_token=True)\n",
762
+ "\n",
763
+ "cv = cv.remove_columns([\"client_id\", \"path\", 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\n",
764
+ "cv = cv.rename_column('sentence', 'transcription')\n",
765
+ "print(cv)"
766
+ ]
767
+ },
768
+ {
769
+ "cell_type": "markdown",
770
+ "id": "2d63b2d2-f68a-4d74-b7f1-5127f6d16605",
771
+ "metadata": {
772
+ "id": "2d63b2d2-f68a-4d74-b7f1-5127f6d16605"
773
+ },
774
+ "source": [
775
+ "## Prepare Feature Extractor, Tokenizer and Data"
776
+ ]
777
+ },
778
+ {
779
+ "cell_type": "markdown",
780
+ "id": "601c3099-1026-439e-93e2-5635b3ba5a73",
781
+ "metadata": {
782
+ "id": "601c3099-1026-439e-93e2-5635b3ba5a73"
783
+ },
784
+ "source": [
785
+ "The ASR pipeline can be de-composed into three stages: \n",
786
+ "1) A feature extractor which pre-processes the raw audio-inputs\n",
787
+ "2) The model which performs the sequence-to-sequence mapping \n",
788
+ "3) A tokenizer which post-processes the model outputs to text format\n",
789
+ "\n",
790
+ "In 🤗 Transformers, the Whisper model has an associated feature extractor and tokenizer, \n",
791
+ "called [WhisperFeatureExtractor](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperFeatureExtractor)\n",
792
+ "and [WhisperTokenizer](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperTokenizer) \n",
793
+ "respectively.\n",
794
+ "\n",
795
+ "We'll go through details for setting-up the feature extractor and tokenizer one-by-one!"
796
+ ]
797
+ },
798
+ {
799
+ "cell_type": "markdown",
800
+ "id": "560332eb-3558-41a1-b500-e83a9f695f84",
801
+ "metadata": {
802
+ "id": "560332eb-3558-41a1-b500-e83a9f695f84"
803
+ },
804
+ "source": [
805
+ "### Load WhisperFeatureExtractor"
806
+ ]
807
+ },
808
+ {
809
+ "cell_type": "markdown",
810
+ "id": "32ec8068-0bd7-412d-b662-0edb9d1e7365",
811
+ "metadata": {
812
+ "id": "32ec8068-0bd7-412d-b662-0edb9d1e7365"
813
+ },
814
+ "source": [
815
+ "The Whisper feature extractor performs two operations:\n",
816
+ "1. Pads / truncates the audio inputs to 30s: any audio inputs shorter than 30s are padded to 30s with silence (zeros), and those longer that 30s are truncated to 30s\n",
817
+ "2. Converts the audio inputs to _log-Mel spectrogram_ input features, a visual representation of the audio and the form of the input expected by the Whisper model"
818
+ ]
819
+ },
820
+ {
821
+ "cell_type": "markdown",
822
+ "id": "589d9ec1-d12b-4b64-93f7-04c63997da19",
823
+ "metadata": {
824
+ "id": "589d9ec1-d12b-4b64-93f7-04c63997da19"
825
+ },
826
+ "source": [
827
+ "<figure>\n",
828
+ "<img src=\"https://raw.githubusercontent.com/sanchit-gandhi/notebooks/main/spectrogram.jpg\" alt=\"Trulli\" style=\"width:100%\">\n",
829
+ "<figcaption align = \"center\"><b>Figure 2:</b> Conversion of sampled audio array to log-Mel spectrogram.\n",
830
+ "Left: sampled 1-dimensional audio signal. Right: corresponding log-Mel spectrogram. Figure source:\n",
831
+ "<a href=\"https://ai.googleblog.com/2019/04/specaugment-new-data-augmentation.html\">Google SpecAugment Blog</a>.\n",
832
+ "</figcaption>"
833
+ ]
834
+ },
835
+ {
836
+ "cell_type": "markdown",
837
+ "id": "b2ef54d5-b946-4c1d-9fdc-adc5d01b46aa",
838
+ "metadata": {
839
+ "id": "b2ef54d5-b946-4c1d-9fdc-adc5d01b46aa"
840
+ },
841
+ "source": [
842
+ "We'll load the feature extractor from the pre-trained checkpoint with the default values:"
843
+ ]
844
+ },
845
+ {
846
+ "cell_type": "code",
847
+ "execution_count": 3,
848
+ "id": "bc77d7bb-f9e2-47f5-b663-30f7a4321ce5",
849
+ "metadata": {
850
+ "id": "bc77d7bb-f9e2-47f5-b663-30f7a4321ce5"
851
+ },
852
+ "outputs": [
853
+ {
854
+ "data": {
855
+ "application/vnd.jupyter.widget-view+json": {
856
+ "model_id": "3ab6ee91872d461a86bae35c206a8d74",
857
+ "version_major": 2,
858
+ "version_minor": 0
859
+ },
860
+ "text/plain": [
861
+ "Downloading: 0%| | 0.00/185k [00:00<?, ?B/s]"
862
+ ]
863
+ },
864
+ "metadata": {},
865
+ "output_type": "display_data"
866
+ }
867
+ ],
868
+ "source": [
869
+ "from transformers import WhisperFeatureExtractor\n",
870
+ "\n",
871
+ "feature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-medium\")"
872
+ ]
873
+ },
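As a quick sanity check (our addition, not part of the original notebook), we can run the feature extractor on a dummy waveform and inspect the fixed-size log-Mel output: with the default settings, the 30s of padded audio becomes 3000 frames of 80 mel bins.

```python
import numpy as np

# a hypothetical 5-second silent clip at 16kHz; the extractor pads it to 30s
dummy_audio = np.zeros(5 * 16000, dtype=np.float32)

features = feature_extractor(dummy_audio, sampling_rate=16000).input_features[0]
print(features.shape)  # expected: (80, 3000) -> 80 mel bins x 3000 frames
```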
874
+ {
875
+ "cell_type": "markdown",
876
+ "id": "93748af7-b917-4ecf-a0c8-7d89077ff9cb",
877
+ "metadata": {
878
+ "id": "93748af7-b917-4ecf-a0c8-7d89077ff9cb"
879
+ },
880
+ "source": [
881
+ "### Load WhisperTokenizer"
882
+ ]
883
+ },
884
+ {
885
+ "cell_type": "markdown",
886
+ "id": "2bc82609-a9fb-447a-a2af-99597c864029",
887
+ "metadata": {
888
+ "id": "2bc82609-a9fb-447a-a2af-99597c864029"
889
+ },
890
+ "source": [
891
+ "The Whisper model outputs a sequence of _token ids_. The tokenizer maps each of these token ids to their corresponding text string. For Hindi, we can load the pre-trained tokenizer and use it for fine-tuning without any further modifications. We simply have to \n",
892
+ "specify the target language and the task. These arguments inform the \n",
893
+ "tokenizer to prefix the language and task tokens to the start of encoded \n",
894
+ "label sequences:"
895
+ ]
896
+ },
897
+ {
898
+ "cell_type": "code",
899
+ "execution_count": 4,
900
+ "id": "c7b07f9b-ae0e-4f89-98f0-0c50d432eab6",
901
+ "metadata": {
902
+ "id": "c7b07f9b-ae0e-4f89-98f0-0c50d432eab6",
903
+ "outputId": "5c004b44-86e7-4e00-88be-39e0af5eed69"
904
+ },
905
+ "outputs": [
906
+ {
907
+ "data": {
908
+ "application/vnd.jupyter.widget-view+json": {
909
+ "model_id": "a3fa61645ce842b7b6e541d867711cf1",
910
+ "version_major": 2,
911
+ "version_minor": 0
912
+ },
913
+ "text/plain": [
914
+ "Downloading: 0%| | 0.00/830 [00:00<?, ?B/s]"
915
+ ]
916
+ },
917
+ "metadata": {},
918
+ "output_type": "display_data"
919
+ },
920
+ {
921
+ "data": {
922
+ "application/vnd.jupyter.widget-view+json": {
923
+ "model_id": "961511e141d549a4b146289108af612e",
924
+ "version_major": 2,
925
+ "version_minor": 0
926
+ },
927
+ "text/plain": [
928
+ "Downloading: 0%| | 0.00/1.04M [00:00<?, ?B/s]"
929
+ ]
930
+ },
931
+ "metadata": {},
932
+ "output_type": "display_data"
933
+ },
934
+ {
935
+ "data": {
936
+ "application/vnd.jupyter.widget-view+json": {
937
+ "model_id": "81dc19b026504801b31c212439daea0b",
938
+ "version_major": 2,
939
+ "version_minor": 0
940
+ },
941
+ "text/plain": [
942
+ "Downloading: 0%| | 0.00/494k [00:00<?, ?B/s]"
943
+ ]
944
+ },
945
+ "metadata": {},
946
+ "output_type": "display_data"
947
+ },
948
+ {
949
+ "data": {
950
+ "application/vnd.jupyter.widget-view+json": {
951
+ "model_id": "03044162f9df47c4b8841521aa1c8178",
952
+ "version_major": 2,
953
+ "version_minor": 0
954
+ },
955
+ "text/plain": [
956
+ "Downloading: 0%| | 0.00/52.7k [00:00<?, ?B/s]"
957
+ ]
958
+ },
959
+ "metadata": {},
960
+ "output_type": "display_data"
961
+ },
962
+ {
963
+ "data": {
964
+ "application/vnd.jupyter.widget-view+json": {
965
+ "model_id": "51593e9b509d4f50bca5900d8cb42745",
966
+ "version_major": 2,
967
+ "version_minor": 0
968
+ },
969
+ "text/plain": [
970
+ "Downloading: 0%| | 0.00/2.11k [00:00<?, ?B/s]"
971
+ ]
972
+ },
973
+ "metadata": {},
974
+ "output_type": "display_data"
975
+ },
976
+ {
977
+ "data": {
978
+ "application/vnd.jupyter.widget-view+json": {
979
+ "model_id": "70d0fad5e9d14c6fb718856ba6cd397e",
980
+ "version_major": 2,
981
+ "version_minor": 0
982
+ },
983
+ "text/plain": [
984
+ "Downloading: 0%| | 0.00/2.06k [00:00<?, ?B/s]"
985
+ ]
986
+ },
987
+ "metadata": {},
988
+ "output_type": "display_data"
989
+ }
990
+ ],
991
+ "source": [
992
+ "from transformers import WhisperTokenizer\n",
993
+ "\n",
994
+ "tokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-medium\", language=\"Indonesian\", task=\"transcribe\")"
995
+ ]
996
+ },
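To verify that the tokenizer handles Indonesian text as expected, we can encode and decode a sample transcription, checking that the language and task tokens are prefixed and that the round-trip is lossless. This check (borrowed from the accompanying blog post) assumes the `cv` dataset loaded above:

```python
input_str = cv["train"][0]["transcription"]
labels = tokenizer(input_str).input_ids
decoded_with_special = tokenizer.decode(labels, skip_special_tokens=False)
decoded_str = tokenizer.decode(labels, skip_special_tokens=True)

print(f"Input:                 {input_str}")
print(f"Decoded w/ special:    {decoded_with_special}")
print(f"Decoded w/out special: {decoded_str}")
print(f"Are equal:             {input_str == decoded_str}")
```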
997
+ {
998
+ "cell_type": "markdown",
999
+ "id": "d2ef23f3-f4a8-483a-a2dc-080a7496cb1b",
1000
+ "metadata": {
1001
+ "id": "d2ef23f3-f4a8-483a-a2dc-080a7496cb1b"
1002
+ },
1003
+ "source": [
1004
+ "### Combine To Create A WhisperProcessor"
1005
+ ]
1006
+ },
1007
+ {
1008
+ "cell_type": "markdown",
1009
+ "id": "5ff67654-5a29-4bb8-a69d-0228946c6f8d",
1010
+ "metadata": {
1011
+ "id": "5ff67654-5a29-4bb8-a69d-0228946c6f8d"
1012
+ },
1013
+ "source": [
1014
+ "To simplify using the feature extractor and tokenizer, we can _wrap_ \n",
1015
+ "both into a single `WhisperProcessor` class. This processor object \n",
1016
+ "inherits from the `WhisperFeatureExtractor` and `WhisperProcessor`, \n",
1017
+ "and can be used on the audio inputs and model predictions as required. \n",
1018
+ "In doing so, we only need to keep track of two objects during training: \n",
1019
+ "the `processor` and the `model`:"
1020
+ ]
1021
+ },
1022
+ {
1023
+ "cell_type": "code",
1024
+ "execution_count": 5,
1025
+ "id": "77d9f0c5-8607-4642-a8ac-c3ab2e223ea6",
1026
+ "metadata": {
1027
+ "id": "77d9f0c5-8607-4642-a8ac-c3ab2e223ea6"
1028
+ },
1029
+ "outputs": [],
1030
+ "source": [
1031
+ "from transformers import WhisperProcessor\n",
1032
+ "\n",
1033
+ "processor = WhisperProcessor.from_pretrained(\"openai/whisper-medium\", language=\"Indonesian\", task=\"transcribe\")"
1034
+ ]
1035
+ },
1036
+ {
1037
+ "cell_type": "markdown",
1038
+ "id": "381acd09-0b0f-4d04-9eb3-f028ac0e5f2c",
1039
+ "metadata": {
1040
+ "id": "381acd09-0b0f-4d04-9eb3-f028ac0e5f2c"
1041
+ },
1042
+ "source": [
1043
+ "### Prepare Data"
1044
+ ]
1045
+ },
1046
+ {
1047
+ "cell_type": "code",
1048
+ "execution_count": 6,
1049
+ "id": "c69246a2",
1050
+ "metadata": {},
1051
+ "outputs": [],
1052
+ "source": [
1053
+ "from datasets import Audio\n",
1054
+ "\n",
1055
+ "cv = cv.cast_column(\"audio\", Audio(sampling_rate=16000))\n",
1056
+ "fleurs = fleurs.cast_column(\"audio\", Audio(sampling_rate=16000))"
1057
+ ]
1058
+ },
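Since 🤗 Datasets resamples lazily, re-reading an example after the cast returns a 16kHz array: FLEURS is natively 16kHz, while Common Voice audio is downsampled from 48kHz on access. A minimal check (our addition):

```python
sample = cv["train"][0]["audio"]

print(sample["sampling_rate"])  # 16000 after cast_column
print(f"{len(sample['array']) / sample['sampling_rate']:.2f} seconds")
```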
1059
+ {
1060
+ "cell_type": "markdown",
1061
+ "id": "3df7378a-a4c0-45d7-8d07-defbd1062ab6",
1062
+ "metadata": {},
1063
+ "source": [
1064
+ "We'll define our pre-processing strategy. We advise that you **do not** lower-case the transcriptions or remove punctuation unless mixing different datasets. This will enable you to fine-tune Whisper models that can predict punctuation and casing. Later, you will see how we can evaluate the predictions without punctuation or casing, so that the models benefit from the WER improvement obtained by normalising the transcriptions while still predicting fully formatted transcriptions."
1065
+ ]
1066
+ },
1067
+ {
1068
+ "cell_type": "code",
1069
+ "execution_count": 7,
1070
+ "id": "d041650e-1c48-4439-87b3-5b6f4a514107",
1071
+ "metadata": {},
1072
+ "outputs": [],
1073
+ "source": [
1074
+ "from transformers.models.whisper.english_normalizer import BasicTextNormalizer\n",
1075
+ "\n",
1076
+ "do_lower_case = False\n",
1077
+ "do_remove_punctuation = False\n",
1078
+ "\n",
1079
+ "normalizer = BasicTextNormalizer()"
1080
+ ]
1081
+ },
1082
+ {
1083
+ "cell_type": "markdown",
1084
+ "id": "89e12c2e-2f14-479b-987b-f0c75c881095",
1085
+ "metadata": {},
1086
+ "source": [
1087
+ "Now we can write a function to prepare our data ready for the model:\n",
1088
+ "1. We load and resample the audio data by calling `batch[\"audio\"]`. As explained above, 🤗 Datasets performs any necessary resampling operations on the fly.\n",
1089
+ "2. We use the feature extractor to compute the log-Mel spectrogram input features from our 1-dimensional audio array.\n",
1090
+ "3. We perform any optional pre-processing (lower-case or remove punctuation).\n",
1091
+ "4. We encode the transcriptions to label ids through the use of the tokenizer."
1092
+ ]
1093
+ },
1094
+ {
1095
+ "cell_type": "code",
1096
+ "execution_count": 8,
1097
+ "id": "c085911c-a10a-41ef-8874-306e0503e9bb",
1098
+ "metadata": {},
1099
+ "outputs": [],
1100
+ "source": [
1101
+ "def prepare_dataset(batch):\n",
1102
+ " # load and (possibly) resample audio data to 16kHz\n",
1103
+ " audio = batch[\"audio\"]\n",
1104
+ "\n",
1105
+ " # compute log-Mel input features from input audio array \n",
1106
+ " batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\n",
1107
+ " # compute input length of audio sample in seconds\n",
1108
+ " batch[\"input_length\"] = len(audio[\"array\"]) / audio[\"sampling_rate\"]\n",
1109
+ " \n",
1110
+ " # optional pre-processing steps\n",
1111
+ " transcription = batch[\"transcription\"]\n",
1112
+ " if do_lower_case:\n",
1113
+ " transcription = transcription.lower()\n",
1114
+ " if do_remove_punctuation:\n",
1115
+ " transcription = normalizer(transcription).strip()\n",
1116
+ " \n",
1117
+ " # encode target text to label ids\n",
1118
+ " batch[\"labels\"] = processor.tokenizer(transcription).input_ids\n",
1119
+ " return batch"
1120
+ ]
1121
+ },
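Before mapping over the full datasets, it can be useful to run `prepare_dataset` on a single example and inspect the fields it adds; this spot check is our addition, not part of the original notebook:

```python
example = prepare_dataset(fleurs["train"][0])

print(example.keys())           # now includes 'input_features', 'input_length' and 'labels'
print(example["input_length"])  # audio duration in seconds
print(len(example["labels"]))   # number of label token ids
```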
1122
+ {
1123
+ "cell_type": "markdown",
1124
+ "id": "8c960965-9fb6-466f-9dbd-c9d43e71d9d0",
1125
+ "metadata": {
1126
+ "id": "70b319fb-2439-4ef6-a70d-a47bf41c4a13"
1127
+ },
1128
+ "source": [
1129
+ "We can apply the data preparation function to all of our training examples using dataset's `.map` method. The argument `num_proc` specifies how many CPU cores to use. Setting `num_proc` > 1 will enable multiprocessing. If the `.map` method hangs with multiprocessing, set `num_proc=1` and process the dataset sequentially."
1130
+ ]
1131
+ },
1132
+ {
1133
+ "cell_type": "code",
1134
+ "execution_count": 9,
1135
+ "id": "b459b0c5",
1136
+ "metadata": {},
1137
+ "outputs": [
1138
+ {
1139
+ "data": {
1140
+ "application/vnd.jupyter.widget-view+json": {
1141
+ "model_id": "5370b910ba054a4895c487fd81a8fb5b",
1142
+ "version_major": 2,
1143
+ "version_minor": 0
1144
+ },
1145
+ "text/plain": [
1146
+ " 0%| | 0/2929 [00:00<?, ?ex/s]"
1147
+ ]
1148
+ },
1149
+ "metadata": {},
1150
+ "output_type": "display_data"
1151
+ },
1152
+ {
1153
+ "data": {
1154
+ "application/vnd.jupyter.widget-view+json": {
1155
+ "model_id": "08eb36c67b614b5a9bd9522422282a3e",
1156
+ "version_major": 2,
1157
+ "version_minor": 0
1158
+ },
1159
+ "text/plain": [
1160
+ " 0%| | 0/687 [00:00<?, ?ex/s]"
1161
+ ]
1162
+ },
1163
+ "metadata": {},
1164
+ "output_type": "display_data"
1165
+ }
1166
+ ],
1167
+ "source": [
1168
+ "fleurs = fleurs.map(prepare_dataset, remove_columns=fleurs.column_names['train'], num_proc=1)"
1169
+ ]
1170
+ },
1171
+ {
1172
+ "cell_type": "code",
1173
+ "execution_count": 10,
1174
+ "id": "e43afca6",
1175
+ "metadata": {},
1176
+ "outputs": [
1177
+ {
1178
+ "data": {
1179
+ "application/vnd.jupyter.widget-view+json": {
1180
+ "model_id": "c282c621f7bb47bcb75ee95defe32621",
1181
+ "version_major": 2,
1182
+ "version_minor": 0
1183
+ },
1184
+ "text/plain": [
1185
+ " 0%| | 0/8274 [00:00<?, ?ex/s]"
1186
+ ]
1187
+ },
1188
+ "metadata": {},
1189
+ "output_type": "display_data"
1190
+ },
1191
+ {
1192
+ "data": {
1193
+ "application/vnd.jupyter.widget-view+json": {
1194
+ "model_id": "894beb4d94fc4a0d9393ef08aa5a723a",
1195
+ "version_major": 2,
1196
+ "version_minor": 0
1197
+ },
1198
+ "text/plain": [
1199
+ " 0%| | 0/3618 [00:00<?, ?ex/s]"
1200
+ ]
1201
+ },
1202
+ "metadata": {},
1203
+ "output_type": "display_data"
1204
+ }
1205
+ ],
1206
+ "source": [
1207
+ "cv = cv.map(prepare_dataset, remove_columns=cv.column_names['train'], num_proc=1)"
1208
+ ]
1209
+ },
1210
+ {
1211
+ "cell_type": "code",
1212
+ "execution_count": 11,
1213
+ "id": "e9034b52",
1214
+ "metadata": {},
1215
+ "outputs": [],
1216
+ "source": [
1217
+ "from datasets import concatenate_datasets\n",
1218
+ "\n",
1219
+ "cc = DatasetDict()\n",
1220
+ "cc['train'] = concatenate_datasets([fleurs['train'], cv['train']])\n",
1221
+ "cc['test'] = concatenate_datasets([fleurs['test'], cv['test']])"
1222
+ ]
1223
+ },
1224
+ {
1225
+ "cell_type": "markdown",
1226
+ "id": "54ce0fdb-7218-4a4d-b175-383980fec0df",
1227
+ "metadata": {},
1228
+ "source": [
1229
+ "Finally, we filter any training data with audio samples longer than 30s. These samples would otherwise be truncated by the Whisper feature-extractor which could affect the stability of training. We define a function that returns `True` for samples that are less than 30s, and `False` for those that are longer:"
1230
+ ]
1231
+ },
1232
+ {
1233
+ "cell_type": "code",
1234
+ "execution_count": 12,
1235
+ "id": "01cb25ef-4bb0-4325-9461-f59198acadf6",
1236
+ "metadata": {},
1237
+ "outputs": [],
1238
+ "source": [
1239
+ "max_input_length = 30.0\n",
1240
+ "\n",
1241
+ "def is_audio_in_length_range(length):\n",
1242
+ " return length < max_input_length"
1243
+ ]
1244
+ },
1245
+ {
1246
+ "cell_type": "markdown",
1247
+ "id": "30e676a8-7ca8-4850-8c5d-5b2b00d13fba",
1248
+ "metadata": {},
1249
+ "source": [
1250
+ "We apply our filter function to all samples of our training dataset through 🤗 Datasets' `.filter` method:"
1251
+ ]
1252
+ },
1253
+ {
1254
+ "cell_type": "code",
1255
+ "execution_count": 13,
1256
+ "id": "333f7f6e-6053-4d3b-8924-c733c79b82ac",
1257
+ "metadata": {},
1258
+ "outputs": [
1259
+ {
1260
+ "data": {
1261
+ "application/vnd.jupyter.widget-view+json": {
1262
+ "model_id": "731610a8f59741849e2b0b0aa332d14b",
1263
+ "version_major": 2,
1264
+ "version_minor": 0
1265
+ },
1266
+ "text/plain": [
1267
+ " 0%| | 0/12 [00:00<?, ?ba/s]"
1268
+ ]
1269
+ },
1270
+ "metadata": {},
1271
+ "output_type": "display_data"
1272
+ }
1273
+ ],
1274
+ "source": [
1275
+ "cc['train'] = cc['train'].filter(\n",
1276
+ " is_audio_in_length_range,\n",
1277
+ " input_columns=[\"input_length\"],\n",
1278
+ ")"
1279
+ ]
1280
+ },
1281
+ {
1282
+ "cell_type": "markdown",
1283
+ "id": "263a5a58-0239-4a25-b0df-c625fc9c5810",
1284
+ "metadata": {
1285
+ "id": "263a5a58-0239-4a25-b0df-c625fc9c5810"
1286
+ },
1287
+ "source": [
1288
+ "## Training and Evaluation"
1289
+ ]
1290
+ },
1291
+ {
1292
+ "cell_type": "markdown",
1293
+ "id": "a693e768-c5a6-453f-89a1-b601dcf7daf7",
1294
+ "metadata": {
1295
+ "id": "a693e768-c5a6-453f-89a1-b601dcf7daf7"
1296
+ },
1297
+ "source": [
1298
+ "Now that we've prepared our data, we're ready to dive into the training pipeline. \n",
1299
+ "The [🤗 Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer)\n",
1300
+ "will do much of the heavy lifting for us. All we have to do is:\n",
1301
+ "\n",
1302
+ "- Define a data collator: the data collator takes our pre-processed data and prepares PyTorch tensors ready for the model.\n",
1303
+ "\n",
1304
+ "- Evaluation metrics: during evaluation, we want to evaluate the model using the [word error rate (WER)](https://huggingface.co/metrics/wer) metric. We need to define a `compute_metrics` function that handles this computation.\n",
1305
+ "\n",
1306
+ "- Load a pre-trained checkpoint: we need to load a pre-trained checkpoint and configure it correctly for training.\n",
1307
+ "\n",
1308
+ "- Define the training configuration: this will be used by the 🤗 Trainer to define the training schedule.\n",
1309
+ "\n",
1310
+ "Once we've fine-tuned the model, we will evaluate it on the test data to verify that we have correctly trained it \n",
1311
+ "to transcribe speech in Hindi."
1312
+ ]
1313
+ },
1314
+ {
1315
+ "cell_type": "markdown",
1316
+ "id": "8d230e6d-624c-400a-bbf5-fa660881df25",
1317
+ "metadata": {
1318
+ "id": "8d230e6d-624c-400a-bbf5-fa660881df25"
1319
+ },
1320
+ "source": [
1321
+ "### Define a Data Collator"
1322
+ ]
1323
+ },
1324
+ {
1325
+ "cell_type": "markdown",
1326
+ "id": "04def221-0637-4a69-b242-d3f0c1d0ee78",
1327
+ "metadata": {
1328
+ "id": "04def221-0637-4a69-b242-d3f0c1d0ee78"
1329
+ },
1330
+ "source": [
1331
+ "The data collator for a sequence-to-sequence speech model is unique in the sense that it \n",
1332
+ "treats the `input_features` and `labels` independently: the `input_features` must be \n",
1333
+ "handled by the feature extractor and the `labels` by the tokenizer.\n",
1334
+ "\n",
1335
+ "The `input_features` are already padded to 30s and converted to a log-Mel spectrogram \n",
1336
+ "of fixed dimension by action of the feature extractor, so all we have to do is convert the `input_features`\n",
1337
+ "to batched PyTorch tensors. We do this using the feature extractor's `.pad` method with `return_tensors=pt`.\n",
1338
+ "\n",
1339
+ "The `labels` on the other hand are un-padded. We first pad the sequences\n",
1340
+ "to the maximum length in the batch using the tokenizer's `.pad` method. The padding tokens \n",
1341
+ "are then replaced by `-100` so that these tokens are **not** taken into account when \n",
1342
+ "computing the loss. We then cut the BOS token from the start of the label sequence as we \n",
1343
+ "append it later during training.\n",
1344
+ "\n",
1345
+ "We can leverage the `WhisperProcessor` we defined earlier to perform both the \n",
1346
+ "feature extractor and the tokenizer operations:"
1347
+ ]
1348
+ },
1349
+ {
1350
+ "cell_type": "code",
1351
+ "execution_count": 14,
1352
+ "id": "8326221e-ec13-4731-bb4e-51e5fc1486c5",
1353
+ "metadata": {
1354
+ "id": "8326221e-ec13-4731-bb4e-51e5fc1486c5"
1355
+ },
1356
+ "outputs": [],
1357
+ "source": [
1358
+ "import torch\n",
1359
+ "\n",
1360
+ "from dataclasses import dataclass\n",
1361
+ "from typing import Any, Dict, List, Union\n",
1362
+ "\n",
1363
+ "@dataclass\n",
1364
+ "class DataCollatorSpeechSeq2SeqWithPadding:\n",
1365
+ " processor: Any\n",
1366
+ "\n",
1367
+ " def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\n",
1368
+ " # split inputs and labels since they have to be of different lengths and need different padding methods\n",
1369
+ " # first treat the audio inputs by simply returning torch tensors\n",
1370
+ " input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\n",
1371
+ " batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\n",
1372
+ "\n",
1373
+ " # get the tokenized label sequences\n",
1374
+ " label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\n",
1375
+ " # pad the labels to max length\n",
1376
+ " labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\n",
1377
+ "\n",
1378
+ " # replace padding with -100 to ignore loss correctly\n",
1379
+ " labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\n",
1380
+ "\n",
1381
+ " # if bos token is appended in previous tokenization step,\n",
1382
+ " # cut bos token here as it's append later anyways\n",
1383
+ " if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\n",
1384
+ " labels = labels[:, 1:]\n",
1385
+ "\n",
1386
+ " batch[\"labels\"] = labels\n",
1387
+ "\n",
1388
+ " return batch"
1389
+ ]
1390
+ },
1391
+ {
1392
+ "cell_type": "markdown",
1393
+ "id": "3cae7dbf-8a50-456e-a3a8-7fd005390f86",
1394
+ "metadata": {
1395
+ "id": "3cae7dbf-8a50-456e-a3a8-7fd005390f86"
1396
+ },
1397
+ "source": [
1398
+ "Let's initialise the data collator we've just defined:"
1399
+ ]
1400
+ },
1401
+ {
1402
+ "cell_type": "code",
1403
+ "execution_count": 15,
1404
+ "id": "fc834702-c0d3-4a96-b101-7b87be32bf42",
1405
+ "metadata": {
1406
+ "id": "fc834702-c0d3-4a96-b101-7b87be32bf42"
1407
+ },
1408
+ "outputs": [],
1409
+ "source": [
1410
+ "data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)"
1411
+ ]
1412
+ },
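As a sanity check (our addition), we can collate a couple of processed samples and confirm the tensor shapes: the input features are padded to the fixed spectrogram size, while the labels are padded to the longest sequence in the batch. This assumes the concatenated `cc` dataset defined above:

```python
batch = data_collator([cc["train"][i] for i in range(2)])

print(batch["input_features"].shape)  # (2, 80, 3000): batch x mel bins x frames
print(batch["labels"].shape)          # (2, longest label sequence in the batch)
```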
1413
+ {
1414
+ "cell_type": "markdown",
1415
+ "id": "d62bb2ab-750a-45e7-82e9-61d6f4805698",
1416
+ "metadata": {
1417
+ "id": "d62bb2ab-750a-45e7-82e9-61d6f4805698"
1418
+ },
1419
+ "source": [
1420
+ "### Evaluation Metrics"
1421
+ ]
1422
+ },
1423
+ {
1424
+ "cell_type": "markdown",
1425
+ "id": "66fee1a7-a44c-461e-b047-c3917221572e",
1426
+ "metadata": {
1427
+ "id": "66fee1a7-a44c-461e-b047-c3917221572e"
1428
+ },
1429
+ "source": [
1430
+ "We'll use the word error rate (WER) metric, the 'de-facto' metric for assessing \n",
1431
+ "ASR systems. For more information, refer to the WER [docs](https://huggingface.co/metrics/wer). We'll load the WER metric from 🤗 Evaluate:"
1432
+ ]
1433
+ },
1434
+ {
1435
+ "cell_type": "code",
1436
+ "execution_count": 16,
1437
+ "id": "b22b4011-f31f-4b57-b684-c52332f92890",
1438
+ "metadata": {
1439
+ "id": "b22b4011-f31f-4b57-b684-c52332f92890"
1440
+ },
1441
+ "outputs": [
1442
+ {
1443
+ "data": {
1444
+ "application/vnd.jupyter.widget-view+json": {
1445
+ "model_id": "8e7e70b2e8ba47c6bb0da2ef1a34d3e7",
1446
+ "version_major": 2,
1447
+ "version_minor": 0
1448
+ },
1449
+ "text/plain": [
1450
+ "Downloading builder script: 0%| | 0.00/4.49k [00:00<?, ?B/s]"
1451
+ ]
1452
+ },
1453
+ "metadata": {},
1454
+ "output_type": "display_data"
1455
+ }
1456
+ ],
1457
+ "source": [
1458
+ "import evaluate\n",
1459
+ "\n",
1460
+ "metric = evaluate.load(\"wer\")"
1461
+ ]
1462
+ },
1463
+ {
1464
+ "cell_type": "markdown",
1465
+ "id": "4f32cab6-31f0-4cb9-af4c-40ba0f5fc508",
1466
+ "metadata": {
1467
+ "id": "4f32cab6-31f0-4cb9-af4c-40ba0f5fc508"
1468
+ },
1469
+ "source": [
1470
+ "We then simply have to define a function that takes our model \n",
1471
+ "predictions and returns the WER metric. This function, called\n",
1472
+ "`compute_metrics`, first replaces `-100` with the `pad_token_id`\n",
1473
+ "in the `label_ids` (undoing the step we applied in the \n",
1474
+ "data collator to ignore padded tokens correctly in the loss).\n",
1475
+ "It then decodes the predicted and label ids to strings. Finally,\n",
1476
+ "it computes the WER between the predictions and reference labels. \n",
1477
+ "Here, we have the option of evaluating with the 'normalised' transcriptions \n",
1478
+ "and predictions. We recommend you set this to `True` to benefit from the WER \n",
1479
+ "improvement obtained by normalising the transcriptions."
1480
+ ]
1481
+ },
1482
+ {
1483
+ "cell_type": "code",
1484
+ "execution_count": 17,
1485
+ "id": "23959a70-22d0-4ffe-9fa1-72b61e75bb52",
1486
+ "metadata": {
1487
+ "id": "23959a70-22d0-4ffe-9fa1-72b61e75bb52"
1488
+ },
1489
+ "outputs": [],
1490
+ "source": [
1491
+ "# evaluate with the 'normalised' WER\n",
1492
+ "do_normalize_eval = True\n",
1493
+ "\n",
1494
+ "def compute_metrics(pred):\n",
1495
+ " pred_ids = pred.predictions\n",
1496
+ " label_ids = pred.label_ids\n",
1497
+ "\n",
1498
+ " # replace -100 with the pad_token_id\n",
1499
+ " label_ids[label_ids == -100] = processor.tokenizer.pad_token_id\n",
1500
+ "\n",
1501
+ " # we do not want to group tokens when computing the metrics\n",
1502
+ " pred_str = processor.tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\n",
1503
+ " label_str = processor.tokenizer.batch_decode(label_ids, skip_special_tokens=True)\n",
1504
+ "\n",
1505
+ " if do_normalize_eval:\n",
1506
+ " pred_str = [normalizer(pred) for pred in pred_str]\n",
1507
+ " label_str = [normalizer(label) for label in label_str]\n",
1508
+ "\n",
1509
+ " wer = 100 * metric.compute(predictions=pred_str, references=label_str)\n",
1510
+ "\n",
1511
+ " return {\"wer\": wer}"
1512
+ ]
1513
+ },
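As a quick illustration of what `metric.compute` returns, WER can be evaluated directly on a pair of toy strings (a hypothetical example of ours, not from the notebook):

```python
toy_wer = 100 * metric.compute(
    predictions=["halo dunia"],  # hypothesis with one substituted word
    references=["halo semua"],
)
print(f"WER: {toy_wer:.1f}")  # 50.0 -> 1 error over 2 reference words
```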
1514
+ {
1515
+ "cell_type": "markdown",
1516
+ "id": "daf2a825-6d9f-4a23-b145-c37c0039075b",
1517
+ "metadata": {
1518
+ "id": "daf2a825-6d9f-4a23-b145-c37c0039075b"
1519
+ },
1520
+ "source": [
1521
+ "### Load a Pre-Trained Checkpoint"
1522
+ ]
1523
+ },
1524
+ {
1525
+ "cell_type": "markdown",
1526
+ "id": "437a97fa-4864-476b-8abc-f28b8166cfa5",
1527
+ "metadata": {
1528
+ "id": "437a97fa-4864-476b-8abc-f28b8166cfa5"
1529
+ },
1530
+ "source": [
1531
+ "Now let's load the pre-trained Whisper `small` checkpoint. Again, this \n",
1532
+ "is trivial through use of 🤗 Transformers!"
1533
+ ]
1534
+ },
1535
+ {
1536
+ "cell_type": "code",
1537
+ "execution_count": 18,
1538
+ "id": "5a10cc4b-07ec-4ebd-ac1d-7c601023594f",
1539
+ "metadata": {
1540
+ "id": "5a10cc4b-07ec-4ebd-ac1d-7c601023594f"
1541
+ },
1542
+ "outputs": [
1543
+ {
1544
+ "data": {
1545
+ "application/vnd.jupyter.widget-view+json": {
1546
+ "model_id": "92043a4a06b64ab48130ba1391f5dcb2",
1547
+ "version_major": 2,
1548
+ "version_minor": 0
1549
+ },
1550
+ "text/plain": [
1551
+ "Downloading: 0%| | 0.00/1.97k [00:00<?, ?B/s]"
1552
+ ]
1553
+ },
1554
+ "metadata": {},
1555
+ "output_type": "display_data"
1556
+ },
1557
+ {
1558
+ "data": {
1559
+ "application/vnd.jupyter.widget-view+json": {
1560
+ "model_id": "d211f932abad497aaca85c16b4ea9135",
1561
+ "version_major": 2,
1562
+ "version_minor": 0
1563
+ },
1564
+ "text/plain": [
1565
+ "Downloading: 0%| | 0.00/3.06G [00:00<?, ?B/s]"
1566
+ ]
1567
+ },
1568
+ "metadata": {},
1569
+ "output_type": "display_data"
1570
+ }
1571
+ ],
1572
+ "source": [
1573
+ "from transformers import WhisperForConditionalGeneration\n",
1574
+ "\n",
1575
+ "model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-medium\")"
1576
+ ]
1577
+ },
1578
+ {
1579
+ "cell_type": "markdown",
1580
+ "id": "a15ead5f-2277-4a39-937b-585c2497b2df",
1581
+ "metadata": {
1582
+ "id": "a15ead5f-2277-4a39-937b-585c2497b2df"
1583
+ },
1584
+ "source": [
1585
+ "Override generation arguments - no tokens are forced as decoder outputs (see [`forced_decoder_ids`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.forced_decoder_ids)), no tokens are suppressed during generation (see [`suppress_tokens`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.suppress_tokens)). Set `use_cache` to False since we're using gradient checkpointing, and the two are incompatible:"
1586
+ ]
1587
+ },
1588
+ {
1589
+ "cell_type": "code",
1590
+ "execution_count": 19,
1591
+ "id": "62038ba3-88ed-4fce-84db-338f50dcd04f",
1592
+ "metadata": {
1593
+ "id": "62038ba3-88ed-4fce-84db-338f50dcd04f"
1594
+ },
1595
+ "outputs": [],
1596
+ "source": [
1597
+ "model.config.forced_decoder_ids = None\n",
1598
+ "model.config.suppress_tokens = []\n",
1599
+ "model.config.use_cache = False"
1600
+ ]
1601
+ },
1602
+ {
1603
+ "cell_type": "markdown",
1604
+ "id": "2178dea4-80ca-47b6-b6ea-ba1915c90c06",
1605
+ "metadata": {
1606
+ "id": "2178dea4-80ca-47b6-b6ea-ba1915c90c06"
1607
+ },
1608
+ "source": [
1609
+ "### Define the Training Configuration"
1610
+ ]
1611
+ },
1612
+ {
1613
+ "cell_type": "markdown",
1614
+ "id": "c21af1e9-0188-4134-ac82-defc7bdcc436",
1615
+ "metadata": {
1616
+ "id": "c21af1e9-0188-4134-ac82-defc7bdcc436"
1617
+ },
1618
+ "source": [
1619
+ "In the final step, we define all the parameters related to training. For more detail on the training arguments, refer to the Seq2SeqTrainingArguments [docs](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments)."
1620
+ ]
1621
+ },
1622
+ {
1623
+ "cell_type": "code",
1624
+ "execution_count": 20,
1625
+ "id": "0ae3e9af-97b7-4aa0-ae85-20b23b5bcb3a",
1626
+ "metadata": {
1627
+ "id": "0ae3e9af-97b7-4aa0-ae85-20b23b5bcb3a"
1628
+ },
1629
+ "outputs": [],
1630
+ "source": [
1631
+ "from transformers import Seq2SeqTrainingArguments\n",
1632
+ "\n",
1633
+ "training_args = Seq2SeqTrainingArguments(\n",
1634
+ " output_dir=\"./\",\n",
1635
+ " per_device_train_batch_size=32,\n",
1636
+ " gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\n",
1637
+ " learning_rate=1e-5,\n",
1638
+ " warmup_steps=500,\n",
1639
+ " max_steps=10000,\n",
1640
+ " gradient_checkpointing=True,\n",
1641
+ " fp16=True,\n",
1642
+ " evaluation_strategy=\"steps\",\n",
1643
+ " per_device_eval_batch_size=16,\n",
1644
+ " predict_with_generate=True,\n",
1645
+ " generation_max_length=225,\n",
1646
+ " save_steps=1000,\n",
1647
+ " eval_steps=1000,\n",
1648
+ " logging_steps=25,\n",
1649
+ " report_to=[\"tensorboard\"],\n",
1650
+ " load_best_model_at_end=True,\n",
1651
+ " metric_for_best_model=\"wer\",\n",
1652
+ " greater_is_better=False,\n",
1653
+ " push_to_hub=True,\n",
1654
+ ")"
1655
+ ]
1656
+ },
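+ {
+ "cell_type": "markdown",
+ "id": "low-memory-variant-md",
+ "metadata": {},
+ "source": [
+ "As the comment in the cell above notes, the batch size and gradient accumulation steps trade off against one another. For example (a hypothetical variant, not the configuration used for this run), on a GPU with less memory one could halve the batch size and double the accumulation steps, keeping the same effective batch size of 32:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "low-memory-variant",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# same effective batch size (16 x 2 = 32), but lower peak GPU memory\n",
+ "low_memory_args = Seq2SeqTrainingArguments(\n",
+ "    output_dir=\"./\",\n",
+ "    per_device_train_batch_size=16,\n",
+ "    gradient_accumulation_steps=2,\n",
+ "    learning_rate=1e-5,\n",
+ "    warmup_steps=500,\n",
+ "    max_steps=10000,\n",
+ "    gradient_checkpointing=True,\n",
+ "    fp16=True,\n",
+ ")"
+ ]
+ },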
1657
+ {
1658
+ "cell_type": "markdown",
1659
+ "id": "b3a944d8-3112-4552-82a0-be25988b3857",
1660
+ "metadata": {
1661
+ "id": "b3a944d8-3112-4552-82a0-be25988b3857"
1662
+ },
1663
+ "source": [
1664
+ "**Note**: if one does not want to upload the model checkpoints to the Hub, \n",
1665
+ "set `push_to_hub=False`."
1666
+ ]
1667
+ },
1668
+ {
1669
+ "cell_type": "markdown",
1670
+ "id": "bac29114-d226-4f54-97cf-8718c9f94e1e",
1671
+ "metadata": {
1672
+ "id": "bac29114-d226-4f54-97cf-8718c9f94e1e"
1673
+ },
1674
+ "source": [
1675
+ "We can forward the training arguments to the 🤗 Trainer along with our model,\n",
1676
+ "dataset, data collator and `compute_metrics` function:"
1677
+ ]
1678
+ },
1679
+ {
1680
+ "cell_type": "code",
1681
+ "execution_count": 21,
1682
+ "id": "d546d7fe-0543-479a-b708-2ebabec19493",
1683
+ "metadata": {
1684
+ "id": "d546d7fe-0543-479a-b708-2ebabec19493"
1685
+ },
1686
+ "outputs": [
1687
+ {
1688
+ "name": "stderr",
1689
+ "output_type": "stream",
1690
+ "text": [
1691
+ "/home/ubuntu/whisper-medium-id/./ is already a clone of https://huggingface.co/Scrya/whisper-medium-id. Make sure you pull the latest changes with `repo.git_pull()`.\n",
1692
+ "max_steps is given, it will override any value given in num_train_epochs\n",
1693
+ "Using cuda_amp half precision backend\n"
1694
+ ]
1695
+ }
1696
+ ],
1697
+ "source": [
1698
+ "from transformers import Seq2SeqTrainer\n",
1699
+ "\n",
1700
+ "trainer = Seq2SeqTrainer(\n",
1701
+ " args=training_args,\n",
1702
+ " model=model,\n",
1703
+ " train_dataset=cc['train'],\n",
1704
+ " eval_dataset=cc['test'],\n",
1705
+ " data_collator=data_collator,\n",
1706
+ " compute_metrics=compute_metrics,\n",
1707
+ " tokenizer=processor.feature_extractor,\n",
1708
+ ")"
1709
+ ]
1710
+ },
1711
+ {
1712
+ "cell_type": "markdown",
1713
+ "id": "uOrRhDGtN5S4",
1714
+ "metadata": {
1715
+ "id": "uOrRhDGtN5S4"
1716
+ },
1717
+ "source": [
1718
+ "We'll save the processor object once before starting training. Since the processor is not trainable, it won't change over the course of training:"
1719
+ ]
1720
+ },
1721
+ {
1722
+ "cell_type": "code",
1723
+ "execution_count": 22,
1724
+ "id": "-2zQwMfEOBJq",
1725
+ "metadata": {
1726
+ "id": "-2zQwMfEOBJq"
1727
+ },
1728
+ "outputs": [
1729
+ {
1730
+ "name": "stderr",
1731
+ "output_type": "stream",
1732
+ "text": [
1733
+ "Feature extractor saved in ./preprocessor_config.json\n",
1734
+ "tokenizer config file saved in ./tokenizer_config.json\n",
1735
+ "Special tokens file saved in ./special_tokens_map.json\n",
1736
+ "added tokens file saved in ./added_tokens.json\n"
1737
+ ]
1738
+ }
1739
+ ],
1740
+ "source": [
1741
+ "processor.save_pretrained(training_args.output_dir)"
1742
+ ]
1743
+ },
1744
+ {
1745
+ "cell_type": "markdown",
1746
+ "id": "7f404cf9-4345-468c-8196-4bd101d9bd51",
1747
+ "metadata": {
1748
+ "id": "7f404cf9-4345-468c-8196-4bd101d9bd51"
1749
+ },
1750
+ "source": [
1751
+ "### Training"
1752
+ ]
1753
+ },
1754
+ {
1755
+ "cell_type": "markdown",
1756
+ "id": "5e8b8d56-5a70-4f68-bd2e-f0752d0bd112",
1757
+ "metadata": {
1758
+ "id": "5e8b8d56-5a70-4f68-bd2e-f0752d0bd112"
1759
+ },
1760
+ "source": [
1761
+ "Training will take approximately 5-10 hours depending on your GPU. The peak GPU memory for the given training configuration is approximately 36GB. \n",
1762
+ "Depending on your GPU, it is possible that you will encounter a CUDA `\"out-of-memory\"` error when you launch training. \n",
1763
+ "In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 \n",
1764
+ "and employ [`gradient_accumulation_steps`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.gradient_accumulation_steps)\n",
1765
+ "to compensate.\n",
1766
+ "\n",
1767
+ "To launch training, simply execute:"
1768
+ ]
1769
+ },
1770
+ {
1771
+ "cell_type": "code",
1772
+ "execution_count": null,
1773
+ "id": "ee8b7b8e-1c9a-4d77-9137-1778a629e6de",
1774
+ "metadata": {
1775
+ "id": "ee8b7b8e-1c9a-4d77-9137-1778a629e6de",
1776
+ "scrolled": false
1777
+ },
1778
+ "outputs": [
1779
+ {
1780
+ "name": "stderr",
1781
+ "output_type": "stream",
1782
+ "text": [
1783
+ "The following columns in the training set don't have a corresponding argument in `WhisperForConditionalGeneration.forward` and have been ignored: input_length. If input_length are not expected by `WhisperForConditionalGeneration.forward`, you can safely ignore this message.\n",
1784
+ "/home/ubuntu/hf_env/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n",
1785
+ " warnings.warn(\n",
1786
+ "***** Running training *****\n",
1787
+ " Num examples = 11195\n",
1788
+ " Num Epochs = 29\n",
1789
+ " Instantaneous batch size per device = 32\n",
1790
+ " Total train batch size (w. parallel, distributed & accumulation) = 32\n",
1791
+ " Gradient Accumulation steps = 1\n",
1792
+ " Total optimization steps = 10000\n",
1793
+ " Number of trainable parameters = 763857920\n"
1794
+ ]
1795
+ },
1796
+ {
1797
+ "data": {
1798
+ "text/html": [
1799
+ "\n",
1800
+ " <div>\n",
1801
+ " \n",
1802
+ " <progress value='553' max='10000' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
1803
+ " [ 553/10000 1:00:12 < 17:12:17, 0.15 it/s, Epoch 1.58/29]\n",
1804
+ " </div>\n",
1805
+ " <table border=\"1\" class=\"dataframe\">\n",
1806
+ " <thead>\n",
1807
+ " <tr style=\"text-align: left;\">\n",
1808
+ " <th>Step</th>\n",
1809
+ " <th>Training Loss</th>\n",
1810
+ " <th>Validation Loss</th>\n",
1811
+ " </tr>\n",
1812
+ " </thead>\n",
1813
+ " <tbody>\n",
1814
+ " </tbody>\n",
1815
+ "</table><p>"
1816
+ ],
1817
+ "text/plain": [
1818
+ "<IPython.core.display.HTML object>"
1819
+ ]
1820
+ },
1821
+ "metadata": {},
1822
+ "output_type": "display_data"
1823
+ }
1824
+ ],
1825
+ "source": [
1826
+ "trainer.train()"
1827
+ ]
1828
+ },
1829
+ {
1830
+ "cell_type": "markdown",
1831
+ "id": "810ced54-7187-4a06-b2fe-ba6dcca94dc3",
1832
+ "metadata": {
1833
+ "id": "810ced54-7187-4a06-b2fe-ba6dcca94dc3"
1834
+ },
1835
+ "source": [
1836
+ "We can label our checkpoint with the `whisper-event` tag on push by setting the appropriate key-word arguments (kwargs):"
1837
+ ]
1838
+ },
1839
+ {
1840
+ "cell_type": "code",
1841
+ "execution_count": null,
1842
+ "id": "c704f91e-241b-48c9-b8e0-f0da396a9663",
1843
+ "metadata": {
1844
+ "id": "c704f91e-241b-48c9-b8e0-f0da396a9663"
1845
+ },
1846
+ "outputs": [],
1847
+ "source": [
1848
+ "kwargs = {\n",
1849
+ " \"dataset_tags\": [\"google/fleurs\", \"mozilla-foundation/common_voice_11_0\"],\n",
1850
+ " \"dataset\": [\"FLEURS\", \"Common Voice 11.0\"], # a 'pretty' name for the training dataset\n",
1851
+ " \"language\": \"id\",\n",
1852
+ " \"model_name\": \"Whisper Medium ID - FLEURS-CV\", # a 'pretty' name for your model\n",
1853
+ " \"finetuned_from\": \"openai/whisper-medium\",\n",
1854
+ " \"tasks\": \"automatic-speech-recognition\",\n",
1855
+ " \"tags\": \"whisper-event\",\n",
1856
+ "}"
1857
+ ]
1858
+ },
1859
+ {
1860
+ "cell_type": "markdown",
1861
+ "id": "090d676a-f944-4297-a938-a40eda0b2b68",
1862
+ "metadata": {
1863
+ "id": "090d676a-f944-4297-a938-a40eda0b2b68"
1864
+ },
1865
+ "source": [
1866
+ "The training results can now be uploaded to the Hub. To do so, execute the `push_to_hub` command and save the preprocessor object we created:"
1867
+ ]
1868
+ },
1869
+ {
1870
+ "cell_type": "code",
1871
+ "execution_count": null,
1872
+ "id": "d7030622-caf7-4039-939b-6195cdaa2585",
1873
+ "metadata": {
1874
+ "id": "d7030622-caf7-4039-939b-6195cdaa2585"
1875
+ },
1876
+ "outputs": [],
1877
+ "source": [
1878
+ "trainer.push_to_hub(**kwargs)"
1879
+ ]
1880
+ },
1881
+ {
1882
+ "cell_type": "markdown",
1883
+ "id": "ca743fbd-602c-48d4-ba8d-a2fe60af64ba",
1884
+ "metadata": {
1885
+ "id": "ca743fbd-602c-48d4-ba8d-a2fe60af64ba"
1886
+ },
1887
+ "source": [
1888
+ "## Closing Remarks"
1889
+ ]
1890
+ },
1891
+ {
1892
+ "cell_type": "markdown",
1893
+ "id": "7f737783-2870-4e35-aa11-86a42d7d997a",
1894
+ "metadata": {
1895
+ "id": "7f737783-2870-4e35-aa11-86a42d7d997a"
1896
+ },
1897
+ "source": [
1898
+ "In this blog, we covered a step-by-step guide on fine-tuning Whisper for multilingual ASR \n",
1899
+ "using 🤗 Datasets, Transformers and the Hugging Face Hub. For more details on the Whisper model, the Common Voice dataset and the theory behind fine-tuning, refere to the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). If you're interested in fine-tuning other \n",
1900
+ "Transformers models, both for English and multilingual ASR, be sure to check out the \n",
1901
+ "examples scripts at [examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)."
1902
+ ]
1903
+ }
1904
+ ],
1905
+ "metadata": {
1906
+ "colab": {
1907
+ "include_colab_link": true,
1908
+ "provenance": []
1909
+ },
1910
+ "kernelspec": {
1911
+ "display_name": "Python 3 (ipykernel)",
1912
+ "language": "python",
1913
+ "name": "python3"
1914
+ },
1915
+ "language_info": {
1916
+ "codemirror_mode": {
1917
+ "name": "ipython",
1918
+ "version": 3
1919
+ },
1920
+ "file_extension": ".py",
1921
+ "mimetype": "text/x-python",
1922
+ "name": "python",
1923
+ "nbconvert_exporter": "python",
1924
+ "pygments_lexer": "ipython3",
1925
+ "version": "3.8.10"
1926
+ }
1927
+ },
1928
+ "nbformat": 4,
1929
+ "nbformat_minor": 5
1930
+ }
added_tokens.json ADDED
@@ -0,0 +1,109 @@
1
+ {
2
+ "<|af|>": 50327,
3
+ "<|am|>": 50334,
4
+ "<|ar|>": 50272,
5
+ "<|as|>": 50350,
6
+ "<|az|>": 50304,
7
+ "<|ba|>": 50355,
8
+ "<|be|>": 50330,
9
+ "<|bg|>": 50292,
10
+ "<|bn|>": 50302,
11
+ "<|bo|>": 50347,
12
+ "<|br|>": 50309,
13
+ "<|bs|>": 50315,
14
+ "<|ca|>": 50270,
15
+ "<|cs|>": 50283,
16
+ "<|cy|>": 50297,
17
+ "<|da|>": 50285,
18
+ "<|de|>": 50261,
19
+ "<|el|>": 50281,
20
+ "<|endoftext|>": 50257,
21
+ "<|en|>": 50259,
22
+ "<|es|>": 50262,
23
+ "<|et|>": 50307,
24
+ "<|eu|>": 50310,
25
+ "<|fa|>": 50300,
26
+ "<|fi|>": 50277,
27
+ "<|fo|>": 50338,
28
+ "<|fr|>": 50265,
29
+ "<|gl|>": 50319,
30
+ "<|gu|>": 50333,
31
+ "<|haw|>": 50352,
32
+ "<|ha|>": 50354,
33
+ "<|hi|>": 50276,
34
+ "<|hr|>": 50291,
35
+ "<|ht|>": 50339,
36
+ "<|hu|>": 50286,
37
+ "<|hy|>": 50312,
38
+ "<|id|>": 50275,
39
+ "<|is|>": 50311,
40
+ "<|it|>": 50274,
41
+ "<|iw|>": 50279,
42
+ "<|ja|>": 50266,
43
+ "<|jw|>": 50356,
44
+ "<|ka|>": 50329,
45
+ "<|kk|>": 50316,
46
+ "<|km|>": 50323,
47
+ "<|kn|>": 50306,
48
+ "<|ko|>": 50264,
49
+ "<|la|>": 50294,
50
+ "<|lb|>": 50345,
51
+ "<|ln|>": 50353,
52
+ "<|lo|>": 50336,
53
+ "<|lt|>": 50293,
54
+ "<|lv|>": 50301,
55
+ "<|mg|>": 50349,
56
+ "<|mi|>": 50295,
57
+ "<|mk|>": 50308,
58
+ "<|ml|>": 50296,
59
+ "<|mn|>": 50314,
60
+ "<|mr|>": 50320,
61
+ "<|ms|>": 50282,
62
+ "<|mt|>": 50343,
63
+ "<|my|>": 50346,
64
+ "<|ne|>": 50313,
65
+ "<|nl|>": 50271,
66
+ "<|nn|>": 50342,
67
+ "<|nocaptions|>": 50362,
68
+ "<|notimestamps|>": 50363,
69
+ "<|no|>": 50288,
70
+ "<|oc|>": 50328,
71
+ "<|pa|>": 50321,
72
+ "<|pl|>": 50269,
73
+ "<|ps|>": 50340,
74
+ "<|pt|>": 50267,
75
+ "<|ro|>": 50284,
76
+ "<|ru|>": 50263,
77
+ "<|sa|>": 50344,
78
+ "<|sd|>": 50332,
79
+ "<|si|>": 50322,
80
+ "<|sk|>": 50298,
81
+ "<|sl|>": 50305,
82
+ "<|sn|>": 50324,
83
+ "<|so|>": 50326,
84
+ "<|sq|>": 50317,
85
+ "<|sr|>": 50303,
86
+ "<|startoflm|>": 50360,
87
+ "<|startofprev|>": 50361,
88
+ "<|startoftranscript|>": 50258,
89
+ "<|su|>": 50357,
90
+ "<|sv|>": 50273,
91
+ "<|sw|>": 50318,
92
+ "<|ta|>": 50287,
93
+ "<|te|>": 50299,
94
+ "<|tg|>": 50331,
95
+ "<|th|>": 50289,
96
+ "<|tk|>": 50341,
97
+ "<|tl|>": 50348,
98
+ "<|transcribe|>": 50359,
99
+ "<|translate|>": 50358,
100
+ "<|tr|>": 50268,
101
+ "<|tt|>": 50351,
102
+ "<|uk|>": 50280,
103
+ "<|ur|>": 50290,
104
+ "<|uz|>": 50337,
105
+ "<|vi|>": 50278,
106
+ "<|yi|>": 50335,
107
+ "<|yo|>": 50325,
108
+ "<|zh|>": 50260
109
+ }
config.json ADDED
@@ -0,0 +1,42 @@
1
+ {
2
+ "_name_or_path": "openai/whisper-medium",
3
+ "activation_dropout": 0.0,
4
+ "activation_function": "gelu",
5
+ "architectures": [
6
+ "WhisperForConditionalGeneration"
7
+ ],
8
+ "attention_dropout": 0.0,
9
+ "begin_suppress_tokens": [
10
+ 220,
11
+ 50257
12
+ ],
13
+ "bos_token_id": 50257,
14
+ "d_model": 1024,
15
+ "decoder_attention_heads": 16,
16
+ "decoder_ffn_dim": 4096,
17
+ "decoder_layerdrop": 0.0,
18
+ "decoder_layers": 24,
19
+ "decoder_start_token_id": 50258,
20
+ "dropout": 0.0,
21
+ "encoder_attention_heads": 16,
22
+ "encoder_ffn_dim": 4096,
23
+ "encoder_layerdrop": 0.0,
24
+ "encoder_layers": 24,
25
+ "eos_token_id": 50257,
26
+ "forced_decoder_ids": null,
27
+ "init_std": 0.02,
28
+ "is_encoder_decoder": true,
29
+ "max_length": 448,
30
+ "max_source_positions": 1500,
31
+ "max_target_positions": 448,
32
+ "model_type": "whisper",
33
+ "num_hidden_layers": 24,
34
+ "num_mel_bins": 80,
35
+ "pad_token_id": 50257,
36
+ "scale_embedding": false,
37
+ "suppress_tokens": [],
38
+ "torch_dtype": "float32",
39
+ "transformers_version": "4.26.0.dev0",
40
+ "use_cache": false,
41
+ "vocab_size": 51865
42
+ }
fine-tune-whisper-non-streaming-id.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
normalizer.json ADDED
@@ -0,0 +1,1742 @@
1
+ {
2
+ "accessorise": "accessorize",
3
+ "accessorised": "accessorized",
4
+ "accessorises": "accessorizes",
5
+ "accessorising": "accessorizing",
6
+ "acclimatisation": "acclimatization",
7
+ "acclimatise": "acclimatize",
8
+ "acclimatised": "acclimatized",
9
+ "acclimatises": "acclimatizes",
10
+ "acclimatising": "acclimatizing",
11
+ "accoutrements": "accouterments",
12
+ "aeon": "eon",
13
+ "aeons": "eons",
14
+ "aerogramme": "aerogram",
15
+ "aerogrammes": "aerograms",
16
+ "aeroplane": "airplane",
17
+ "aeroplanes": "airplanes",
18
+ "aesthete": "esthete",
19
+ "aesthetes": "esthetes",
20
+ "aesthetic": "esthetic",
21
+ "aesthetically": "esthetically",
22
+ "aesthetics": "esthetics",
23
+ "aetiology": "etiology",
24
+ "ageing": "aging",
25
+ "aggrandisement": "aggrandizement",
26
+ "agonise": "agonize",
27
+ "agonised": "agonized",
28
+ "agonises": "agonizes",
29
+ "agonising": "agonizing",
30
+ "agonisingly": "agonizingly",
31
+ "almanack": "almanac",
32
+ "almanacks": "almanacs",
33
+ "aluminium": "aluminum",
34
+ "amortisable": "amortizable",
35
+ "amortisation": "amortization",
36
+ "amortisations": "amortizations",
37
+ "amortise": "amortize",
38
+ "amortised": "amortized",
39
+ "amortises": "amortizes",
40
+ "amortising": "amortizing",
41
+ "amphitheatre": "amphitheater",
42
+ "amphitheatres": "amphitheaters",
43
+ "anaemia": "anemia",
44
+ "anaemic": "anemic",
45
+ "anaesthesia": "anesthesia",
46
+ "anaesthetic": "anesthetic",
47
+ "anaesthetics": "anesthetics",
48
+ "anaesthetise": "anesthetize",
49
+ "anaesthetised": "anesthetized",
50
+ "anaesthetises": "anesthetizes",
51
+ "anaesthetising": "anesthetizing",
52
+ "anaesthetist": "anesthetist",
53
+ "anaesthetists": "anesthetists",
54
+ "anaesthetize": "anesthetize",
55
+ "anaesthetized": "anesthetized",
56
+ "anaesthetizes": "anesthetizes",
57
+ "anaesthetizing": "anesthetizing",
58
+ "analogue": "analog",
59
+ "analogues": "analogs",
60
+ "analyse": "analyze",
61
+ "analysed": "analyzed",
62
+ "analyses": "analyzes",
63
+ "analysing": "analyzing",
64
+ "anglicise": "anglicize",
65
+ "anglicised": "anglicized",
66
+ "anglicises": "anglicizes",
67
+ "anglicising": "anglicizing",
68
+ "annualised": "annualized",
69
+ "antagonise": "antagonize",
70
+ "antagonised": "antagonized",
71
+ "antagonises": "antagonizes",
72
+ "antagonising": "antagonizing",
73
+ "apologise": "apologize",
74
+ "apologised": "apologized",
75
+ "apologises": "apologizes",
76
+ "apologising": "apologizing",
77
+ "appal": "appall",
78
+ "appals": "appalls",
79
+ "appetiser": "appetizer",
80
+ "appetisers": "appetizers",
81
+ "appetising": "appetizing",
82
+ "appetisingly": "appetizingly",
83
+ "arbour": "arbor",
84
+ "arbours": "arbors",
85
+ "archaeologically": "archeologically",
86
+ "archaeologist": "archeologist",
87
+ "archaeologists": "archeologists",
88
+ "archaeology": "archeology</span>",
89
+ "archeological": "archaeological",
90
+ "ardour": "ardor",
91
+ "armour": "armor",
92
+ "armoured": "armored",
93
+ "armourer": "armorer",
94
+ "armourers": "armorers",
95
+ "armouries": "armories",
96
+ "armoury": "armory",
97
+ "artefact": "artifact",
98
+ "artefacts": "artifacts",
99
+ "authorise": "authorize",
100
+ "authorised": "authorized",
101
+ "authorises": "authorizes",
102
+ "authorising": "authorizing",
103
+ "axe": "ax",
104
+ "backpedalled": "backpedaled",
105
+ "backpedalling": "backpedaling",
106
+ "bannister": "banister",
107
+ "bannisters": "banisters",
108
+ "baptise": "baptize",
109
+ "baptised": "baptized",
110
+ "baptises": "baptizes",
111
+ "baptising": "baptizing",
112
+ "bastardise": "bastardize",
113
+ "bastardised": "bastardized",
114
+ "bastardises": "bastardizes",
115
+ "bastardising": "bastardizing",
116
+ "battleax": "battleaxe",
117
+ "baulk": "balk",
118
+ "baulked": "balked",
119
+ "baulking": "balking",
120
+ "baulks": "balks",
121
+ "bedevilled": "bedeviled",
122
+ "bedevilling": "bedeviling",
123
+ "behaviour": "behavior",
124
+ "behavioural": "behavioral",
125
+ "behaviourism": "behaviorism",
126
+ "behaviourist": "behaviorist",
127
+ "behaviourists": "behaviorists",
128
+ "behaviours": "behaviors",
129
+ "behove": "behoove",
130
+ "behoved": "behooved",
131
+ "behoves": "behooves",
132
+ "bejewelled": "bejeweled",
133
+ "belabour": "belabor",
134
+ "belaboured": "belabored",
135
+ "belabouring": "belaboring",
136
+ "belabours": "belabors",
137
+ "bevelled": "beveled",
138
+ "bevvies": "bevies",
139
+ "bevvy": "bevy",
140
+ "biassed": "biased",
141
+ "biassing": "biasing",
142
+ "bingeing": "binging",
143
+ "bougainvillaea": "bougainvillea",
144
+ "bougainvillaeas": "bougainvilleas",
145
+ "bowdlerise": "bowdlerize",
146
+ "bowdlerised": "bowdlerized",
147
+ "bowdlerises": "bowdlerizes",
148
+ "bowdlerising": "bowdlerizing",
149
+ "breathalyse": "breathalyze",
150
+ "breathalysed": "breathalyzed",
151
+ "breathalyser": "breathalyzer",
152
+ "breathalysers": "breathalyzers",
153
+ "breathalyses": "breathalyzes",
154
+ "breathalysing": "breathalyzing",
155
+ "brutalise": "brutalize",
156
+ "brutalised": "brutalized",
157
+ "brutalises": "brutalizes",
158
+ "brutalising": "brutalizing",
159
+ "busses": "buses",
160
+ "bussing": "busing",
161
+ "caesarean": "cesarean",
162
+ "caesareans": "cesareans",
163
+ "calibre": "caliber",
164
+ "calibres": "calibers",
165
+ "calliper": "caliper",
166
+ "callipers": "calipers",
167
+ "callisthenics": "calisthenics",
168
+ "canalise": "canalize",
169
+ "canalised": "canalized",
170
+ "canalises": "canalizes",
171
+ "canalising": "canalizing",
172
+ "cancelation": "cancellation",
173
+ "cancelations": "cancellations",
174
+ "cancelled": "canceled",
175
+ "cancelling": "canceling",
176
+ "candour": "candor",
177
+ "cannibalise": "cannibalize",
178
+ "cannibalised": "cannibalized",
179
+ "cannibalises": "cannibalizes",
180
+ "cannibalising": "cannibalizing",
181
+ "canonise": "canonize",
182
+ "canonised": "canonized",
183
+ "canonises": "canonizes",
184
+ "canonising": "canonizing",
185
+ "capitalise": "capitalize",
186
+ "capitalised": "capitalized",
187
+ "capitalises": "capitalizes",
188
+ "capitalising": "capitalizing",
189
+ "caramelise": "caramelize",
190
+ "caramelised": "caramelized",
191
+ "caramelises": "caramelizes",
192
+ "caramelising": "caramelizing",
193
+ "carbonise": "carbonize",
194
+ "carbonised": "carbonized",
195
+ "carbonises": "carbonizes",
196
+ "carbonising": "carbonizing",
197
+ "carolled": "caroled",
198
+ "carolling": "caroling",
199
+ "catalogue": "catalog",
200
+ "catalogued": "cataloged",
201
+ "catalogues": "catalogs",
202
+ "cataloguing": "cataloging",
203
+ "catalyse": "catalyze",
204
+ "catalysed": "catalyzed",
205
+ "catalyses": "catalyzes",
206
+ "catalysing": "catalyzing",
207
+ "categorise": "categorize",
208
+ "categorised": "categorized",
209
+ "categorises": "categorizes",
210
+ "categorising": "categorizing",
211
+ "cauterise": "cauterize",
212
+ "cauterised": "cauterized",
213
+ "cauterises": "cauterizes",
214
+ "cauterising": "cauterizing",
215
+ "cavilled": "caviled",
216
+ "cavilling": "caviling",
217
+ "centigramme": "centigram",
218
+ "centigrammes": "centigrams",
219
+ "centilitre": "centiliter",
220
+ "centilitres": "centiliters",
221
+ "centimetre": "centimeter",
222
+ "centimetres": "centimeters",
223
+ "centralise": "centralize",
224
+ "centralised": "centralized",
225
+ "centralises": "centralizes",
226
+ "centralising": "centralizing",
227
+ "centre": "center",
228
+ "centred": "centered",
229
+ "centrefold": "centerfold",
230
+ "centrefolds": "centerfolds",
231
+ "centrepiece": "centerpiece",
232
+ "centrepieces": "centerpieces",
233
+ "centres": "centers",
234
+ "channelled": "channeled",
235
+ "channelling": "channeling",
236
+ "characterise": "characterize",
237
+ "characterised": "characterized",
238
+ "characterises": "characterizes",
239
+ "characterising": "characterizing",
240
+ "cheque": "check",
241
+ "chequebook": "checkbook",
242
+ "chequebooks": "checkbooks",
243
+ "chequered": "checkered",
244
+ "cheques": "checks",
245
+ "chilli": "chili",
246
+ "chimaera": "chimera",
247
+ "chimaeras": "chimeras",
248
+ "chiselled": "chiseled",
249
+ "chiselling": "chiseling",
250
+ "circularise": "circularize",
251
+ "circularised": "circularized",
252
+ "circularises": "circularizes",
253
+ "circularising": "circularizing",
254
+ "civilise": "civilize",
255
+ "civilised": "civilized",
256
+ "civilises": "civilizes",
257
+ "civilising": "civilizing",
258
+ "clamour": "clamor",
259
+ "clamoured": "clamored",
260
+ "clamouring": "clamoring",
261
+ "clamours": "clamors",
262
+ "clangour": "clangor",
263
+ "clarinettist": "clarinetist",
264
+ "clarinettists": "clarinetists",
265
+ "collectivise": "collectivize",
266
+ "collectivised": "collectivized",
267
+ "collectivises": "collectivizes",
268
+ "collectivising": "collectivizing",
269
+ "colonisation": "colonization",
270
+ "colonise": "colonize",
271
+ "colonised": "colonized",
272
+ "coloniser": "colonizer",
273
+ "colonisers": "colonizers",
274
+ "colonises": "colonizes",
275
+ "colonising": "colonizing",
276
+ "colour": "color",
277
+ "colourant": "colorant",
278
+ "colourants": "colorants",
279
+ "coloured": "colored",
280
+ "coloureds": "coloreds",
281
+ "colourful": "colorful",
282
+ "colourfully": "colorfully",
283
+ "colouring": "coloring",
284
+ "colourize": "colorize",
285
+ "colourized": "colorized",
286
+ "colourizes": "colorizes",
287
+ "colourizing": "colorizing",
288
+ "colourless": "colorless",
289
+ "colours": "colors",
290
+ "commercialise": "commercialize",
291
+ "commercialised": "commercialized",
292
+ "commercialises": "commercializes",
293
+ "commercialising": "commercializing",
294
+ "compartmentalise": "compartmentalize",
295
+ "compartmentalised": "compartmentalized",
296
+ "compartmentalises": "compartmentalizes",
297
+ "compartmentalising": "compartmentalizing",
298
+ "computerise": "computerize",
299
+ "computerised": "computerized",
300
+ "computerises": "computerizes",
301
+ "computerising": "computerizing",
302
+ "conceptualise": "conceptualize",
303
+ "conceptualised": "conceptualized",
304
+ "conceptualises": "conceptualizes",
305
+ "conceptualising": "conceptualizing",
306
+ "connexion": "connection",
307
+ "connexions": "connections",
308
+ "contextualise": "contextualize",
309
+ "contextualised": "contextualized",
310
+ "contextualises": "contextualizes",
311
+ "contextualising": "contextualizing",
312
+ "cosier": "cozier",
313
+ "cosies": "cozies",
314
+ "cosiest": "coziest",
315
+ "cosily": "cozily",
316
+ "cosiness": "coziness",
317
+ "cosy": "cozy",
318
+ "councillor": "councilor",
319
+ "councillors": "councilors",
320
+ "counselled": "counseled",
321
+ "counselling": "counseling",
322
+ "counsellor": "counselor",
323
+ "counsellors": "counselors",
324
+ "crenelated": "crenellated",
325
+ "criminalise": "criminalize",
326
+ "criminalised": "criminalized",
327
+ "criminalises": "criminalizes",
328
+ "criminalising": "criminalizing",
329
+ "criticise": "criticize",
330
+ "criticised": "criticized",
331
+ "criticises": "criticizes",
332
+ "criticising": "criticizing",
333
+ "crueller": "crueler",
334
+ "cruellest": "cruelest",
335
+ "crystallisation": "crystallization",
336
+ "crystallise": "crystallize",
337
+ "crystallised": "crystallized",
338
+ "crystallises": "crystallizes",
339
+ "crystallising": "crystallizing",
340
+ "cudgelled": "cudgeled",
341
+ "cudgelling": "cudgeling",
342
+ "customise": "customize",
343
+ "customised": "customized",
344
+ "customises": "customizes",
345
+ "customising": "customizing",
346
+ "cypher": "cipher",
347
+ "cyphers": "ciphers",
348
+ "decentralisation": "decentralization",
349
+ "decentralise": "decentralize",
350
+ "decentralised": "decentralized",
351
+ "decentralises": "decentralizes",
352
+ "decentralising": "decentralizing",
353
+ "decriminalisation": "decriminalization",
354
+ "decriminalise": "decriminalize",
355
+ "decriminalised": "decriminalized",
356
+ "decriminalises": "decriminalizes",
357
+ "decriminalising": "decriminalizing",
358
+ "defence": "defense",
359
+ "defenceless": "defenseless",
360
+ "defences": "defenses",
361
+ "dehumanisation": "dehumanization",
362
+ "dehumanise": "dehumanize",
363
+ "dehumanised": "dehumanized",
364
+ "dehumanises": "dehumanizes",
365
+ "dehumanising": "dehumanizing",
366
+ "demeanour": "demeanor",
367
+ "demilitarisation": "demilitarization",
368
+ "demilitarise": "demilitarize",
369
+ "demilitarised": "demilitarized",
370
+ "demilitarises": "demilitarizes",
371
+ "demilitarising": "demilitarizing",
372
+ "demobilisation": "demobilization",
373
+ "demobilise": "demobilize",
374
+ "demobilised": "demobilized",
375
+ "demobilises": "demobilizes",
376
+ "demobilising": "demobilizing",
377
+ "democratisation": "democratization",
378
+ "democratise": "democratize",
379
+ "democratised": "democratized",
380
+ "democratises": "democratizes",
381
+ "democratising": "democratizing",
382
+ "demonise": "demonize",
383
+ "demonised": "demonized",
384
+ "demonises": "demonizes",
385
+ "demonising": "demonizing",
386
+ "demoralisation": "demoralization",
387
+ "demoralise": "demoralize",
388
+ "demoralised": "demoralized",
389
+ "demoralises": "demoralizes",
390
+ "demoralising": "demoralizing",
391
+ "denationalisation": "denationalization",
392
+ "denationalise": "denationalize",
393
+ "denationalised": "denationalized",
394
+ "denationalises": "denationalizes",
395
+ "denationalising": "denationalizing",
396
+ "deodorise": "deodorize",
397
+ "deodorised": "deodorized",
398
+ "deodorises": "deodorizes",
399
+ "deodorising": "deodorizing",
400
+ "depersonalise": "depersonalize",
401
+ "depersonalised": "depersonalized",
402
+ "depersonalises": "depersonalizes",
403
+ "depersonalising": "depersonalizing",
404
+ "deputise": "deputize",
405
+ "deputised": "deputized",
406
+ "deputises": "deputizes",
407
+ "deputising": "deputizing",
408
+ "desensitisation": "desensitization",
409
+ "desensitise": "desensitize",
410
+ "desensitised": "desensitized",
411
+ "desensitises": "desensitizes",
412
+ "desensitising": "desensitizing",
413
+ "destabilisation": "destabilization",
414
+ "destabilise": "destabilize",
415
+ "destabilised": "destabilized",
416
+ "destabilises": "destabilizes",
417
+ "destabilising": "destabilizing",
418
+ "dialled": "dialed",
419
+ "dialling": "dialing",
420
+ "dialogue": "dialog",
421
+ "dialogues": "dialogs",
422
+ "diarrhoea": "diarrhea",
423
+ "digitise": "digitize",
424
+ "digitised": "digitized",
425
+ "digitises": "digitizes",
426
+ "digitising": "digitizing",
427
+ "disc": "disk",
428
+ "discolour": "discolor",
429
+ "discoloured": "discolored",
430
+ "discolouring": "discoloring",
431
+ "discolours": "discolors",
432
+ "discs": "disks",
433
+ "disembowelled": "disemboweled",
434
+ "disembowelling": "disemboweling",
435
+ "disfavour": "disfavor",
436
+ "dishevelled": "disheveled",
437
+ "dishonour": "dishonor",
438
+ "dishonourable": "dishonorable",
439
+ "dishonourably": "dishonorably",
440
+ "dishonoured": "dishonored",
441
+ "dishonouring": "dishonoring",
442
+ "dishonours": "dishonors",
443
+ "disorganisation": "disorganization",
444
+ "disorganised": "disorganized",
445
+ "distil": "distill",
446
+ "distils": "distills",
447
+ "dramatisation": "dramatization",
448
+ "dramatisations": "dramatizations",
449
+ "dramatise": "dramatize",
450
+ "dramatised": "dramatized",
451
+ "dramatises": "dramatizes",
452
+ "dramatising": "dramatizing",
453
+ "draught": "draft",
454
+ "draughtboard": "draftboard",
455
+ "draughtboards": "draftboards",
456
+ "draughtier": "draftier",
457
+ "draughtiest": "draftiest",
458
+ "draughts": "drafts",
459
+ "draughtsman": "draftsman",
460
+ "draughtsmanship": "draftsmanship",
461
+ "draughtsmen": "draftsmen",
462
+ "draughtswoman": "draftswoman",
463
+ "draughtswomen": "draftswomen",
464
+ "draughty": "drafty",
465
+ "drivelled": "driveled",
466
+ "drivelling": "driveling",
467
+ "duelled": "dueled",
468
+ "duelling": "dueling",
469
+ "economise": "economize",
470
+ "economised": "economized",
471
+ "economises": "economizes",
472
+ "economising": "economizing",
473
+ "editorialise": "editorialize",
474
+ "editorialised": "editorialized",
475
+ "editorialises": "editorializes",
476
+ "editorialising": "editorializing",
477
+ "edoema": "edema",
478
+ "empathise": "empathize",
479
+ "empathised": "empathized",
480
+ "empathises": "empathizes",
481
+ "empathising": "empathizing",
482
+ "emphasise": "emphasize",
483
+ "emphasised": "emphasized",
484
+ "emphasises": "emphasizes",
485
+ "emphasising": "emphasizing",
486
+ "enamelled": "enameled",
487
+ "enamelling": "enameling",
488
+ "enamoured": "enamored",
489
+ "encyclopaedia": "encyclopedia",
490
+ "encyclopaedias": "encyclopedias",
491
+ "encyclopaedic": "encyclopedic",
492
+ "endeavour": "endeavor",
493
+ "endeavoured": "endeavored",
494
+ "endeavouring": "endeavoring",
495
+ "endeavours": "endeavors",
496
+ "energise": "energize",
497
+ "energised": "energized",
498
+ "energises": "energizes",
499
+ "energising": "energizing",
500
+ "enrol": "enroll",
501
+ "enrols": "enrolls",
502
+ "enthral": "enthrall",
503
+ "enthrals": "enthralls",
504
+ "epaulette": "epaulet",
505
+ "epaulettes": "epaulets",
506
+ "epicentre": "epicenter",
507
+ "epicentres": "epicenters",
508
+ "epilogue": "epilog",
509
+ "epilogues": "epilogs",
510
+ "epitomise": "epitomize",
511
+ "epitomised": "epitomized",
512
+ "epitomises": "epitomizes",
513
+ "epitomising": "epitomizing",
514
+ "equalisation": "equalization",
515
+ "equalise": "equalize",
516
+ "equalised": "equalized",
517
+ "equaliser": "equalizer",
518
+ "equalisers": "equalizers",
519
+ "equalises": "equalizes",
520
+ "equalising": "equalizing",
521
+ "eulogise": "eulogize",
522
+ "eulogised": "eulogized",
523
+ "eulogises": "eulogizes",
524
+ "eulogising": "eulogizing",
525
+ "evangelise": "evangelize",
526
+ "evangelised": "evangelized",
527
+ "evangelises": "evangelizes",
528
+ "evangelising": "evangelizing",
529
+ "exorcise": "exorcize",
530
+ "exorcised": "exorcized",
531
+ "exorcises": "exorcizes",
532
+ "exorcising": "exorcizing",
533
+ "extemporisation": "extemporization",
534
+ "extemporise": "extemporize",
535
+ "extemporised": "extemporized",
536
+ "extemporises": "extemporizes",
537
+ "extemporising": "extemporizing",
538
+ "externalisation": "externalization",
539
+ "externalisations": "externalizations",
540
+ "externalise": "externalize",
541
+ "externalised": "externalized",
542
+ "externalises": "externalizes",
543
+ "externalising": "externalizing",
544
+ "factorise": "factorize",
545
+ "factorised": "factorized",
546
+ "factorises": "factorizes",
547
+ "factorising": "factorizing",
548
+ "faecal": "fecal",
549
+ "faeces": "feces",
550
+ "familiarisation": "familiarization",
551
+ "familiarise": "familiarize",
552
+ "familiarised": "familiarized",
553
+ "familiarises": "familiarizes",
554
+ "familiarising": "familiarizing",
555
+ "fantasise": "fantasize",
556
+ "fantasised": "fantasized",
557
+ "fantasises": "fantasizes",
558
+ "fantasising": "fantasizing",
559
+ "favour": "favor",
560
+ "favourable": "favorable",
561
+ "favourably": "favorably",
562
+ "favoured": "favored",
563
+ "favouring": "favoring",
564
+ "favourite": "favorite",
565
+ "favourites": "favorites",
566
+ "favouritism": "favoritism",
567
+ "favours": "favors",
568
+ "feminise": "feminize",
569
+ "feminised": "feminized",
570
+ "feminises": "feminizes",
571
+ "feminising": "feminizing",
572
+ "fertilisation": "fertilization",
573
+ "fertilise": "fertilize",
574
+ "fertilised": "fertilized",
575
+ "fertiliser": "fertilizer",
576
+ "fertilisers": "fertilizers",
577
+ "fertilises": "fertilizes",
578
+ "fertilising": "fertilizing",
579
+ "fervour": "fervor",
580
+ "fibre": "fiber",
581
+ "fibreglass": "fiberglass",
582
+ "fibres": "fibers",
583
+ "fictionalisation": "fictionalization",
584
+ "fictionalisations": "fictionalizations",
585
+ "fictionalise": "fictionalize",
586
+ "fictionalised": "fictionalized",
587
+ "fictionalises": "fictionalizes",
588
+ "fictionalising": "fictionalizing",
589
+ "fillet": "filet",
590
+ "filleted": "fileted",
591
+ "filleting": "fileting",
592
+ "fillets": "filets",
593
+ "finalisation": "finalization",
594
+ "finalise": "finalize",
595
+ "finalised": "finalized",
596
+ "finalises": "finalizes",
597
+ "finalising": "finalizing",
598
+ "flautist": "flutist",
599
+ "flautists": "flutists",
600
+ "flavour": "flavor",
601
+ "flavoured": "flavored",
602
+ "flavouring": "flavoring",
603
+ "flavourings": "flavorings",
604
+ "flavourless": "flavorless",
605
+ "flavours": "flavors",
606
+ "flavoursome": "flavorsome",
607
+ "flyer / flier": "flier / flyer",
608
+ "foetal": "fetal",
609
+ "foetid": "fetid",
610
+ "foetus": "fetus",
611
+ "foetuses": "fetuses",
612
+ "formalisation": "formalization",
613
+ "formalise": "formalize",
614
+ "formalised": "formalized",
615
+ "formalises": "formalizes",
616
+ "formalising": "formalizing",
617
+ "fossilisation": "fossilization",
618
+ "fossilise": "fossilize",
619
+ "fossilised": "fossilized",
620
+ "fossilises": "fossilizes",
621
+ "fossilising": "fossilizing",
622
+ "fraternisation": "fraternization",
623
+ "fraternise": "fraternize",
624
+ "fraternised": "fraternized",
625
+ "fraternises": "fraternizes",
626
+ "fraternising": "fraternizing",
627
+ "fulfil": "fulfill",
628
+ "fulfilment": "fulfillment",
629
+ "fulfils": "fulfills",
630
+ "funnelled": "funneled",
631
+ "funnelling": "funneling",
632
+ "gage": "gauge",
633
+ "gaged": "gauged",
634
+ "gages": "gauges",
635
+ "gaging": "gauging",
636
+ "galvanise": "galvanize",
637
+ "galvanised": "galvanized",
638
+ "galvanises": "galvanizes",
639
+ "galvanising": "galvanizing",
640
+ "gambolled": "gamboled",
641
+ "gambolling": "gamboling",
642
+ "gaol": "jail",
643
+ "gaolbird": "jailbird",
644
+ "gaolbirds": "jailbirds",
645
+ "gaolbreak": "jailbreak",
646
+ "gaolbreaks": "jailbreaks",
647
+ "gaoled": "jailed",
648
+ "gaoler": "jailer",
649
+ "gaolers": "jailers",
650
+ "gaoling": "jailing",
651
+ "gaols": "jails",
652
+ "gasses": "gases",
653
+ "generalisation": "generalization",
654
+ "generalisations": "generalizations",
655
+ "generalise": "generalize",
656
+ "generalised": "generalized",
657
+ "generalises": "generalizes",
658
+ "generalising": "generalizing",
659
+ "ghettoise": "ghettoize",
660
+ "ghettoised": "ghettoized",
661
+ "ghettoises": "ghettoizes",
662
+ "ghettoising": "ghettoizing",
663
+ "gipsies": "gypsies",
664
+ "glamor": "glamour",
665
+ "glamorise": "glamorize",
666
+ "glamorised": "glamorized",
667
+ "glamorises": "glamorizes",
668
+ "glamorising": "glamorizing",
669
+ "globalisation": "globalization",
670
+ "globalise": "globalize",
671
+ "globalised": "globalized",
672
+ "globalises": "globalizes",
673
+ "globalising": "globalizing",
674
+ "glueing": "gluing",
675
+ "goitre": "goiter",
676
+ "goitres": "goiters",
677
+ "gonorrhoea": "gonorrhea",
678
+ "gramme": "gram",
679
+ "grammes": "grams",
680
+ "gravelled": "graveled",
681
+ "grey": "gray",
682
+ "greyed": "grayed",
683
+ "greying": "graying",
684
+ "greyish": "grayish",
685
+ "greyness": "grayness",
686
+ "greys": "grays",
687
+ "grovelled": "groveled",
688
+ "grovelling": "groveling",
689
+ "groyne": "groin",
690
+ "groynes": "groins",
691
+ "gruelling": "grueling",
692
+ "gruellingly": "gruelingly",
693
+ "gryphon": "griffin",
694
+ "gryphons": "griffins",
695
+ "gynaecological": "gynecological",
696
+ "gynaecologist": "gynecologist",
697
+ "gynaecologists": "gynecologists",
698
+ "gynaecology": "gynecology",
699
+ "haematological": "hematological",
700
+ "haematologist": "hematologist",
701
+ "haematologists": "hematologists",
702
+ "haematology": "hematology",
703
+ "haemoglobin": "hemoglobin",
704
+ "haemophilia": "hemophilia",
705
+ "haemophiliac": "hemophiliac",
706
+ "haemophiliacs": "hemophiliacs",
707
+ "haemorrhage": "hemorrhage",
708
+ "haemorrhaged": "hemorrhaged",
709
+ "haemorrhages": "hemorrhages",
710
+ "haemorrhaging": "hemorrhaging",
711
+ "haemorrhoids": "hemorrhoids",
712
+ "harbour": "harbor",
713
+ "harboured": "harbored",
714
+ "harbouring": "harboring",
715
+ "harbours": "harbors",
716
+ "harmonisation": "harmonization",
717
+ "harmonise": "harmonize",
718
+ "harmonised": "harmonized",
719
+ "harmonises": "harmonizes",
720
+ "harmonising": "harmonizing",
721
+ "homoeopath": "homeopath",
722
+ "homoeopathic": "homeopathic",
723
+ "homoeopaths": "homeopaths",
724
+ "homoeopathy": "homeopathy",
725
+ "homogenise": "homogenize",
726
+ "homogenised": "homogenized",
727
+ "homogenises": "homogenizes",
728
+ "homogenising": "homogenizing",
729
+ "honour": "honor",
730
+ "honourable": "honorable",
731
+ "honourably": "honorably",
732
+ "honoured": "honored",
733
+ "honouring": "honoring",
734
+ "honours": "honors",
735
+ "hospitalisation": "hospitalization",
736
+ "hospitalise": "hospitalize",
737
+ "hospitalised": "hospitalized",
738
+ "hospitalises": "hospitalizes",
739
+ "hospitalising": "hospitalizing",
740
+ "humanise": "humanize",
741
+ "humanised": "humanized",
742
+ "humanises": "humanizes",
743
+ "humanising": "humanizing",
744
+ "humour": "humor",
745
+ "humoured": "humored",
746
+ "humouring": "humoring",
747
+ "humourless": "humorless",
748
+ "humours": "humors",
749
+ "hybridise": "hybridize",
750
+ "hybridised": "hybridized",
751
+ "hybridises": "hybridizes",
752
+ "hybridising": "hybridizing",
753
+ "hypnotise": "hypnotize",
754
+ "hypnotised": "hypnotized",
755
+ "hypnotises": "hypnotizes",
756
+ "hypnotising": "hypnotizing",
757
+ "hypothesise": "hypothesize",
758
+ "hypothesised": "hypothesized",
759
+ "hypothesises": "hypothesizes",
760
+ "hypothesising": "hypothesizing",
761
+ "idealisation": "idealization",
762
+ "idealise": "idealize",
763
+ "idealised": "idealized",
764
+ "idealises": "idealizes",
765
+ "idealising": "idealizing",
766
+ "idolise": "idolize",
767
+ "idolised": "idolized",
768
+ "idolises": "idolizes",
769
+ "idolising": "idolizing",
770
+ "immobilisation": "immobilization",
771
+ "immobilise": "immobilize",
772
+ "immobilised": "immobilized",
773
+ "immobiliser": "immobilizer",
774
+ "immobilisers": "immobilizers",
775
+ "immobilises": "immobilizes",
776
+ "immobilising": "immobilizing",
777
+ "immortalise": "immortalize",
778
+ "immortalised": "immortalized",
779
+ "immortalises": "immortalizes",
780
+ "immortalising": "immortalizing",
781
+ "immunisation": "immunization",
782
+ "immunise": "immunize",
783
+ "immunised": "immunized",
784
+ "immunises": "immunizes",
785
+ "immunising": "immunizing",
786
+ "impanelled": "impaneled",
787
+ "impanelling": "impaneling",
788
+ "imperilled": "imperiled",
789
+ "imperilling": "imperiling",
790
+ "individualise": "individualize",
791
+ "individualised": "individualized",
792
+ "individualises": "individualizes",
793
+ "individualising": "individualizing",
794
+ "industrialise": "industrialize",
795
+ "industrialised": "industrialized",
796
+ "industrialises": "industrializes",
797
+ "industrialising": "industrializing",
798
+ "inflexion": "inflection",
799
+ "inflexions": "inflections",
800
+ "initialise": "initialize",
801
+ "initialised": "initialized",
802
+ "initialises": "initializes",
803
+ "initialising": "initializing",
804
+ "initialled": "initialed",
805
+ "initialling": "initialing",
806
+ "instal": "install",
807
+ "instalment": "installment",
808
+ "instalments": "installments",
809
+ "instals": "installs",
810
+ "instil": "instill",
811
+ "instils": "instills",
812
+ "institutionalisation": "institutionalization",
813
+ "institutionalise": "institutionalize",
814
+ "institutionalised": "institutionalized",
815
+ "institutionalises": "institutionalizes",
816
+ "institutionalising": "institutionalizing",
817
+ "intellectualise": "intellectualize",
818
+ "intellectualised": "intellectualized",
819
+ "intellectualises": "intellectualizes",
820
+ "intellectualising": "intellectualizing",
821
+ "internalisation": "internalization",
822
+ "internalise": "internalize",
823
+ "internalised": "internalized",
824
+ "internalises": "internalizes",
825
+ "internalising": "internalizing",
826
+ "internationalisation": "internationalization",
827
+ "internationalise": "internationalize",
828
+ "internationalised": "internationalized",
829
+ "internationalises": "internationalizes",
830
+ "internationalising": "internationalizing",
831
+ "ionisation": "ionization",
832
+ "ionise": "ionize",
833
+ "ionised": "ionized",
834
+ "ioniser": "ionizer",
835
+ "ionisers": "ionizers",
836
+ "ionises": "ionizes",
837
+ "ionising": "ionizing",
838
+ "italicise": "italicize",
839
+ "italicised": "italicized",
840
+ "italicises": "italicizes",
841
+ "italicising": "italicizing",
842
+ "itemise": "itemize",
843
+ "itemised": "itemized",
844
+ "itemises": "itemizes",
845
+ "itemising": "itemizing",
846
+ "jeopardise": "jeopardize",
847
+ "jeopardised": "jeopardized",
848
+ "jeopardises": "jeopardizes",
849
+ "jeopardising": "jeopardizing",
850
+ "jewelled": "jeweled",
851
+ "jeweller": "jeweler",
852
+ "jewellers": "jewelers",
853
+ "jewellery": "jewelry",
854
+ "judgement": "judgment",
855
+ "kilogramme": "kilogram",
856
+ "kilogrammes": "kilograms",
857
+ "kilometre": "kilometer",
858
+ "kilometres": "kilometers",
859
+ "labelled": "labeled",
860
+ "labelling": "labeling",
861
+ "labour": "labor",
862
+ "laboured": "labored",
863
+ "labourer": "laborer",
864
+ "labourers": "laborers",
865
+ "labouring": "laboring",
866
+ "labours": "labors",
867
+ "lacklustre": "lackluster",
868
+ "legalisation": "legalization",
869
+ "legalise": "legalize",
870
+ "legalised": "legalized",
871
+ "legalises": "legalizes",
872
+ "legalising": "legalizing",
873
+ "legitimise": "legitimize",
874
+ "legitimised": "legitimized",
875
+ "legitimises": "legitimizes",
876
+ "legitimising": "legitimizing",
877
+ "leukaemia": "leukemia",
878
+ "levelled": "leveled",
879
+ "leveller": "leveler",
880
+ "levellers": "levelers",
881
+ "levelling": "leveling",
882
+ "libelled": "libeled",
883
+ "libelling": "libeling",
884
+ "libellous": "libelous",
885
+ "liberalisation": "liberalization",
886
+ "liberalise": "liberalize",
887
+ "liberalised": "liberalized",
888
+ "liberalises": "liberalizes",
889
+ "liberalising": "liberalizing",
890
+ "licence": "license",
891
+ "licenced": "licensed",
892
+ "licences": "licenses",
893
+ "licencing": "licensing",
894
+ "likeable": "likable",
895
+ "lionisation": "lionization",
896
+ "lionise": "lionize",
897
+ "lionised": "lionized",
898
+ "lionises": "lionizes",
899
+ "lionising": "lionizing",
900
+ "liquidise": "liquidize",
901
+ "liquidised": "liquidized",
902
+ "liquidiser": "liquidizer",
903
+ "liquidisers": "liquidizers",
904
+ "liquidises": "liquidizes",
905
+ "liquidising": "liquidizing",
906
+ "litre": "liter",
907
+ "litres": "liters",
908
+ "localise": "localize",
909
+ "localised": "localized",
910
+ "localises": "localizes",
911
+ "localising": "localizing",
912
+ "louvre": "louver",
913
+ "louvred": "louvered",
914
+ "louvres": "louvers",
915
+ "lustre": "luster",
916
+ "magnetise": "magnetize",
917
+ "magnetised": "magnetized",
918
+ "magnetises": "magnetizes",
919
+ "magnetising": "magnetizing",
920
+ "manoeuvrability": "maneuverability",
921
+ "manoeuvrable": "maneuverable",
922
+ "manoeuvre": "maneuver",
923
+ "manoeuvred": "maneuvered",
924
+ "manoeuvres": "maneuvers",
925
+ "manoeuvring": "maneuvering",
926
+ "manoeuvrings": "maneuverings",
927
+ "marginalisation": "marginalization",
928
+ "marginalise": "marginalize",
929
+ "marginalised": "marginalized",
930
+ "marginalises": "marginalizes",
931
+ "marginalising": "marginalizing",
932
+ "marshalled": "marshaled",
933
+ "marshalling": "marshaling",
934
+ "marvelled": "marveled",
935
+ "marvelling": "marveling",
936
+ "marvellous": "marvelous",
937
+ "marvellously": "marvelously",
938
+ "materialisation": "materialization",
939
+ "materialise": "materialize",
940
+ "materialised": "materialized",
941
+ "materialises": "materializes",
942
+ "materialising": "materializing",
943
+ "maximisation": "maximization",
944
+ "maximise": "maximize",
945
+ "maximised": "maximized",
946
+ "maximises": "maximizes",
947
+ "maximising": "maximizing",
948
+ "meagre": "meager",
949
+ "mechanisation": "mechanization",
950
+ "mechanise": "mechanize",
951
+ "mechanised": "mechanized",
952
+ "mechanises": "mechanizes",
953
+ "mechanising": "mechanizing",
954
+ "mediaeval": "medieval",
955
+ "memorialise": "memorialize",
956
+ "memorialised": "memorialized",
957
+ "memorialises": "memorializes",
958
+ "memorialising": "memorializing",
959
+ "memorise": "memorize",
960
+ "memorised": "memorized",
961
+ "memorises": "memorizes",
962
+ "memorising": "memorizing",
963
+ "mesmerise": "mesmerize",
964
+ "mesmerised": "mesmerized",
965
+ "mesmerises": "mesmerizes",
966
+ "mesmerising": "mesmerizing",
967
+ "metabolise": "metabolize",
968
+ "metabolised": "metabolized",
969
+ "metabolises": "metabolizes",
970
+ "metabolising": "metabolizing",
971
+ "metre": "meter",
972
+ "metres": "meters",
973
+ "mhm": "hmm",
974
+ "micrometre": "micrometer",
975
+ "micrometres": "micrometers",
976
+ "militarise": "militarize",
977
+ "militarised": "militarized",
978
+ "militarises": "militarizes",
979
+ "militarising": "militarizing",
980
+ "milligramme": "milligram",
981
+ "milligrammes": "milligrams",
982
+ "millilitre": "milliliter",
983
+ "millilitres": "milliliters",
984
+ "millimetre": "millimeter",
985
+ "millimetres": "millimeters",
986
+ "miniaturisation": "miniaturization",
987
+ "miniaturise": "miniaturize",
988
+ "miniaturised": "miniaturized",
989
+ "miniaturises": "miniaturizes",
990
+ "miniaturising": "miniaturizing",
991
+ "minibusses": "minibuses",
992
+ "minimise": "minimize",
993
+ "minimised": "minimized",
994
+ "minimises": "minimizes",
995
+ "minimising": "minimizing",
996
+ "misbehaviour": "misbehavior",
997
+ "misdemeanour": "misdemeanor",
998
+ "misdemeanours": "misdemeanors",
999
+ "misspelt": "misspelled",
1000
+ "mitre": "miter",
1001
+ "mitres": "miters",
1002
+ "mm": "hmm",
1003
+ "mmm": "hmm",
1004
+ "mobilisation": "mobilization",
1005
+ "mobilise": "mobilize",
1006
+ "mobilised": "mobilized",
1007
+ "mobilises": "mobilizes",
1008
+ "mobilising": "mobilizing",
1009
+ "modelled": "modeled",
1010
+ "modeller": "modeler",
1011
+ "modellers": "modelers",
1012
+ "modelling": "modeling",
1013
+ "modernise": "modernize",
1014
+ "modernised": "modernized",
1015
+ "modernises": "modernizes",
1016
+ "modernising": "modernizing",
1017
+ "moisturise": "moisturize",
1018
+ "moisturised": "moisturized",
1019
+ "moisturiser": "moisturizer",
1020
+ "moisturisers": "moisturizers",
1021
+ "moisturises": "moisturizes",
1022
+ "moisturising": "moisturizing",
1023
+ "monologue": "monolog",
1024
+ "monologues": "monologs",
1025
+ "monopolisation": "monopolization",
1026
+ "monopolise": "monopolize",
1027
+ "monopolised": "monopolized",
1028
+ "monopolises": "monopolizes",
1029
+ "monopolising": "monopolizing",
1030
+ "moralise": "moralize",
1031
+ "moralised": "moralized",
1032
+ "moralises": "moralizes",
1033
+ "moralising": "moralizing",
1034
+ "motorised": "motorized",
1035
+ "mould": "mold",
1036
+ "moulded": "molded",
1037
+ "moulder": "molder",
1038
+ "mouldered": "moldered",
1039
+ "mouldering": "moldering",
1040
+ "moulders": "molders",
1041
+ "mouldier": "moldier",
1042
+ "mouldiest": "moldiest",
1043
+ "moulding": "molding",
1044
+ "mouldings": "moldings",
1045
+ "moulds": "molds",
1046
+ "mouldy": "moldy",
1047
+ "moult": "molt",
1048
+ "moulted": "molted",
1049
+ "moulting": "molting",
1050
+ "moults": "molts",
1051
+ "moustache": "mustache",
1052
+ "moustached": "mustached",
1053
+ "moustaches": "mustaches",
1054
+ "moustachioed": "mustachioed",
1055
+ "multicoloured": "multicolored",
1056
+ "nationalisation": "nationalization",
1057
+ "nationalisations": "nationalizations",
1058
+ "nationalise": "nationalize",
1059
+ "nationalised": "nationalized",
1060
+ "nationalises": "nationalizes",
1061
+ "nationalising": "nationalizing",
1062
+ "naturalisation": "naturalization",
1063
+ "naturalise": "naturalize",
1064
+ "naturalised": "naturalized",
1065
+ "naturalises": "naturalizes",
1066
+ "naturalising": "naturalizing",
1067
+ "neighbour": "neighbor",
1068
+ "neighbourhood": "neighborhood",
1069
+ "neighbourhoods": "neighborhoods",
1070
+ "neighbouring": "neighboring",
1071
+ "neighbourliness": "neighborliness",
1072
+ "neighbourly": "neighborly",
1073
+ "neighbours": "neighbors",
1074
+ "neutralisation": "neutralization",
1075
+ "neutralise": "neutralize",
1076
+ "neutralised": "neutralized",
1077
+ "neutralises": "neutralizes",
1078
+ "neutralising": "neutralizing",
1079
+ "normalisation": "normalization",
1080
+ "normalise": "normalize",
1081
+ "normalised": "normalized",
1082
+ "normalises": "normalizes",
1083
+ "normalising": "normalizing",
1084
+ "odour": "odor",
1085
+ "odourless": "odorless",
1086
+ "odours": "odors",
1087
+ "oesophagus": "esophagus",
1088
+ "oesophaguses": "esophaguses",
1089
+ "oestrogen": "estrogen",
1090
+ "offence": "offense",
1091
+ "offences": "offenses",
1092
+ "omelette": "omelet",
1093
+ "omelettes": "omelets",
1094
+ "optimise": "optimize",
1095
+ "optimised": "optimized",
1096
+ "optimises": "optimizes",
1097
+ "optimising": "optimizing",
1098
+ "organisation": "organization",
1099
+ "organisational": "organizational",
1100
+ "organisations": "organizations",
1101
+ "organise": "organize",
1102
+ "organised": "organized",
1103
+ "organiser": "organizer",
1104
+ "organisers": "organizers",
1105
+ "organises": "organizes",
1106
+ "organising": "organizing",
1107
+ "orthopaedic": "orthopedic",
1108
+ "orthopaedics": "orthopedics",
1109
+ "ostracise": "ostracize",
1110
+ "ostracised": "ostracized",
1111
+ "ostracises": "ostracizes",
1112
+ "ostracising": "ostracizing",
1113
+ "outmanoeuvre": "outmaneuver",
1114
+ "outmanoeuvred": "outmaneuvered",
1115
+ "outmanoeuvres": "outmaneuvers",
1116
+ "outmanoeuvring": "outmaneuvering",
1117
+ "overemphasise": "overemphasize",
1118
+ "overemphasised": "overemphasized",
1119
+ "overemphasises": "overemphasizes",
1120
+ "overemphasising": "overemphasizing",
1121
+ "oxidisation": "oxidization",
1122
+ "oxidise": "oxidize",
1123
+ "oxidised": "oxidized",
1124
+ "oxidises": "oxidizes",
1125
+ "oxidising": "oxidizing",
1126
+ "paederast": "pederast",
1127
+ "paederasts": "pederasts",
1128
+ "paediatric": "pediatric",
1129
+ "paediatrician": "pediatrician",
1130
+ "paediatricians": "pediatricians",
1131
+ "paediatrics": "pediatrics",
1132
+ "paedophile": "pedophile",
1133
+ "paedophiles": "pedophiles",
1134
+ "paedophilia": "pedophilia",
1135
+ "palaeolithic": "paleolithic",
1136
+ "palaeontologist": "paleontologist",
1137
+ "palaeontologists": "paleontologists",
1138
+ "palaeontology": "paleontology",
1139
+ "panelled": "paneled",
1140
+ "panelling": "paneling",
1141
+ "panellist": "panelist",
1142
+ "panellists": "panelists",
1143
+ "paralyse": "paralyze",
1144
+ "paralysed": "paralyzed",
1145
+ "paralyses": "paralyzes",
1146
+ "paralysing": "paralyzing",
1147
+ "parcelled": "parceled",
1148
+ "parcelling": "parceling",
1149
+ "parlour": "parlor",
1150
+ "parlours": "parlors",
1151
+ "particularise": "particularize",
1152
+ "particularised": "particularized",
1153
+ "particularises": "particularizes",
1154
+ "particularising": "particularizing",
1155
+ "passivisation": "passivization",
1156
+ "passivise": "passivize",
1157
+ "passivised": "passivized",
1158
+ "passivises": "passivizes",
1159
+ "passivising": "passivizing",
1160
+ "pasteurisation": "pasteurization",
1161
+ "pasteurise": "pasteurize",
1162
+ "pasteurised": "pasteurized",
1163
+ "pasteurises": "pasteurizes",
1164
+ "pasteurising": "pasteurizing",
1165
+ "patronise": "patronize",
1166
+ "patronised": "patronized",
1167
+ "patronises": "patronizes",
1168
+ "patronising": "patronizing",
1169
+ "patronisingly": "patronizingly",
1170
+ "pedalled": "pedaled",
1171
+ "pedalling": "pedaling",
1172
+ "pedestrianisation": "pedestrianization",
1173
+ "pedestrianise": "pedestrianize",
1174
+ "pedestrianised": "pedestrianized",
1175
+ "pedestrianises": "pedestrianizes",
1176
+ "pedestrianising": "pedestrianizing",
1177
+ "penalise": "penalize",
1178
+ "penalised": "penalized",
1179
+ "penalises": "penalizes",
1180
+ "penalising": "penalizing",
1181
+ "pencilled": "penciled",
1182
+ "pencilling": "penciling",
1183
+ "personalise": "personalize",
1184
+ "personalised": "personalized",
1185
+ "personalises": "personalizes",
1186
+ "personalising": "personalizing",
1187
+ "pharmacopoeia": "pharmacopeia",
1188
+ "pharmacopoeias": "pharmacopeias",
1189
+ "philosophise": "philosophize",
1190
+ "philosophised": "philosophized",
1191
+ "philosophises": "philosophizes",
1192
+ "philosophising": "philosophizing",
1193
+ "philtre": "filter",
1194
+ "philtres": "filters",
1195
+ "phoney": "phony",
1196
+ "plagiarise": "plagiarize",
1197
+ "plagiarised": "plagiarized",
1198
+ "plagiarises": "plagiarizes",
1199
+ "plagiarising": "plagiarizing",
1200
+ "plough": "plow",
1201
+ "ploughed": "plowed",
1202
+ "ploughing": "plowing",
1203
+ "ploughman": "plowman",
1204
+ "ploughmen": "plowmen",
1205
+ "ploughs": "plows",
1206
+ "ploughshare": "plowshare",
1207
+ "ploughshares": "plowshares",
1208
+ "polarisation": "polarization",
1209
+ "polarise": "polarize",
1210
+ "polarised": "polarized",
1211
+ "polarises": "polarizes",
1212
+ "polarising": "polarizing",
1213
+ "politicisation": "politicization",
1214
+ "politicise": "politicize",
1215
+ "politicised": "politicized",
1216
+ "politicises": "politicizes",
1217
+ "politicising": "politicizing",
1218
+ "popularisation": "popularization",
1219
+ "popularise": "popularize",
1220
+ "popularised": "popularized",
1221
+ "popularises": "popularizes",
1222
+ "popularising": "popularizing",
1223
+ "pouffe": "pouf",
1224
+ "pouffes": "poufs",
1225
+ "practise": "practice",
1226
+ "practised": "practiced",
1227
+ "practises": "practices",
1228
+ "practising": "practicing",
1229
+ "praesidium": "presidium",
1230
+ "praesidiums": "presidiums",
1231
+ "pressurisation": "pressurization",
1232
+ "pressurise": "pressurize",
1233
+ "pressurised": "pressurized",
1234
+ "pressurises": "pressurizes",
1235
+ "pressurising": "pressurizing",
1236
+ "pretence": "pretense",
1237
+ "pretences": "pretenses",
1238
+ "primaeval": "primeval",
1239
+ "prioritisation": "prioritization",
1240
+ "prioritise": "prioritize",
1241
+ "prioritised": "prioritized",
1242
+ "prioritises": "prioritizes",
1243
+ "prioritising": "prioritizing",
1244
+ "privatisation": "privatization",
1245
+ "privatisations": "privatizations",
1246
+ "privatise": "privatize",
1247
+ "privatised": "privatized",
1248
+ "privatises": "privatizes",
1249
+ "privatising": "privatizing",
1250
+ "professionalisation": "professionalization",
1251
+ "professionalise": "professionalize",
1252
+ "professionalised": "professionalized",
1253
+ "professionalises": "professionalizes",
1254
+ "professionalising": "professionalizing",
1255
+ "programme": "program",
1256
+ "programmes": "programs",
1257
+ "prologue": "prolog",
1258
+ "prologues": "prologs",
1259
+ "propagandise": "propagandize",
1260
+ "propagandised": "propagandized",
1261
+ "propagandises": "propagandizes",
1262
+ "propagandising": "propagandizing",
1263
+ "proselytise": "proselytize",
1264
+ "proselytised": "proselytized",
1265
+ "proselytiser": "proselytizer",
1266
+ "proselytisers": "proselytizers",
1267
+ "proselytises": "proselytizes",
1268
+ "proselytising": "proselytizing",
1269
+ "psychoanalyse": "psychoanalyze",
1270
+ "psychoanalysed": "psychoanalyzed",
1271
+ "psychoanalyses": "psychoanalyzes",
1272
+ "psychoanalysing": "psychoanalyzing",
1273
+ "publicise": "publicize",
1274
+ "publicised": "publicized",
1275
+ "publicises": "publicizes",
1276
+ "publicising": "publicizing",
1277
+ "pulverisation": "pulverization",
1278
+ "pulverise": "pulverize",
1279
+ "pulverised": "pulverized",
1280
+ "pulverises": "pulverizes",
1281
+ "pulverising": "pulverizing",
1282
+ "pummelled": "pummel",
1283
+ "pummelling": "pummeled",
1284
+ "pyjama": "pajama",
1285
+ "pyjamas": "pajamas",
1286
+ "pzazz": "pizzazz",
1287
+ "quarrelled": "quarreled",
1288
+ "quarrelling": "quarreling",
1289
+ "radicalise": "radicalize",
1290
+ "radicalised": "radicalized",
1291
+ "radicalises": "radicalizes",
1292
+ "radicalising": "radicalizing",
1293
+ "rancour": "rancor",
1294
+ "randomise": "randomize",
1295
+ "randomised": "randomized",
1296
+ "randomises": "randomizes",
1297
+ "randomising": "randomizing",
1298
+ "rationalisation": "rationalization",
1299
+ "rationalisations": "rationalizations",
1300
+ "rationalise": "rationalize",
1301
+ "rationalised": "rationalized",
1302
+ "rationalises": "rationalizes",
1303
+ "rationalising": "rationalizing",
1304
+ "ravelled": "raveled",
1305
+ "ravelling": "raveling",
1306
+ "realisable": "realizable",
1307
+ "realisation": "realization",
1308
+ "realisations": "realizations",
1309
+ "realise": "realize",
1310
+ "realised": "realized",
1311
+ "realises": "realizes",
1312
+ "realising": "realizing",
1313
+ "recognisable": "recognizable",
1314
+ "recognisably": "recognizably",
1315
+ "recognisance": "recognizance",
1316
+ "recognise": "recognize",
1317
+ "recognised": "recognized",
1318
+ "recognises": "recognizes",
1319
+ "recognising": "recognizing",
1320
+ "reconnoitre": "reconnoiter",
1321
+ "reconnoitred": "reconnoitered",
1322
+ "reconnoitres": "reconnoiters",
1323
+ "reconnoitring": "reconnoitering",
1324
+ "refuelled": "refueled",
1325
+ "refuelling": "refueling",
1326
+ "regularisation": "regularization",
1327
+ "regularise": "regularize",
1328
+ "regularised": "regularized",
1329
+ "regularises": "regularizes",
1330
+ "regularising": "regularizing",
1331
+ "remodelled": "remodeled",
1332
+ "remodelling": "remodeling",
1333
+ "remould": "remold",
1334
+ "remoulded": "remolded",
1335
+ "remoulding": "remolding",
1336
+ "remoulds": "remolds",
1337
+ "reorganisation": "reorganization",
1338
+ "reorganisations": "reorganizations",
1339
+ "reorganise": "reorganize",
1340
+ "reorganised": "reorganized",
1341
+ "reorganises": "reorganizes",
1342
+ "reorganising": "reorganizing",
1343
+ "revelled": "reveled",
1344
+ "reveller": "reveler",
1345
+ "revellers": "revelers",
1346
+ "revelling": "reveling",
1347
+ "revitalise": "revitalize",
1348
+ "revitalised": "revitalized",
1349
+ "revitalises": "revitalizes",
1350
+ "revitalising": "revitalizing",
1351
+ "revolutionise": "revolutionize",
1352
+ "revolutionised": "revolutionized",
1353
+ "revolutionises": "revolutionizes",
1354
+ "revolutionising": "revolutionizing",
1355
+ "rhapsodise": "rhapsodize",
1356
+ "rhapsodised": "rhapsodized",
1357
+ "rhapsodises": "rhapsodizes",
1358
+ "rhapsodising": "rhapsodizing",
1359
+ "rigour": "rigor",
1360
+ "rigours": "rigors",
1361
+ "ritualised": "ritualized",
1362
+ "rivalled": "rivaled",
1363
+ "rivalling": "rivaling",
1364
+ "romanticise": "romanticize",
1365
+ "romanticised": "romanticized",
1366
+ "romanticises": "romanticizes",
1367
+ "romanticising": "romanticizing",
1368
+ "rumour": "rumor",
1369
+ "rumoured": "rumored",
1370
+ "rumours": "rumors",
1371
+ "sabre": "saber",
1372
+ "sabres": "sabers",
1373
+ "saltpetre": "saltpeter",
1374
+ "sanitise": "sanitize",
1375
+ "sanitised": "sanitized",
1376
+ "sanitises": "sanitizes",
1377
+ "sanitising": "sanitizing",
1378
+ "satirise": "satirize",
1379
+ "satirised": "satirized",
1380
+ "satirises": "satirizes",
1381
+ "satirising": "satirizing",
1382
+ "saviour": "savior",
1383
+ "saviours": "saviors",
1384
+ "savour": "savor",
1385
+ "savoured": "savored",
1386
+ "savouries": "savories",
1387
+ "savouring": "savoring",
1388
+ "savours": "savors",
1389
+ "savoury": "savory",
1390
+ "scandalise": "scandalize",
1391
+ "scandalised": "scandalized",
1392
+ "scandalises": "scandalizes",
1393
+ "scandalising": "scandalizing",
1394
+ "sceptic": "skeptic",
1395
+ "sceptical": "skeptical",
1396
+ "sceptically": "skeptically",
1397
+ "scepticism": "skepticism",
1398
+ "sceptics": "skeptics",
1399
+ "sceptre": "scepter",
1400
+ "sceptres": "scepters",
1401
+ "scrutinise": "scrutinize",
1402
+ "scrutinised": "scrutinized",
1403
+ "scrutinises": "scrutinizes",
1404
+ "scrutinising": "scrutinizing",
1405
+ "secularisation": "secularization",
1406
+ "secularise": "secularize",
1407
+ "secularised": "secularized",
1408
+ "secularises": "secularizes",
1409
+ "secularising": "secularizing",
1410
+ "sensationalise": "sensationalize",
1411
+ "sensationalised": "sensationalized",
1412
+ "sensationalises": "sensationalizes",
1413
+ "sensationalising": "sensationalizing",
1414
+ "sensitise": "sensitize",
1415
+ "sensitised": "sensitized",
1416
+ "sensitises": "sensitizes",
1417
+ "sensitising": "sensitizing",
1418
+ "sentimentalise": "sentimentalize",
1419
+ "sentimentalised": "sentimentalized",
1420
+ "sentimentalises": "sentimentalizes",
1421
+ "sentimentalising": "sentimentalizing",
1422
+ "sepulchre": "sepulcher",
1423
+ "sepulchres": "sepulchers",
1424
+ "serialisation": "serialization",
1425
+ "serialisations": "serializations",
1426
+ "serialise": "serialize",
1427
+ "serialised": "serialized",
1428
+ "serialises": "serializes",
1429
+ "serialising": "serializing",
1430
+ "sermonise": "sermonize",
1431
+ "sermonised": "sermonized",
1432
+ "sermonises": "sermonizes",
1433
+ "sermonising": "sermonizing",
1434
+ "sheikh": "sheik",
1435
+ "shovelled": "shoveled",
1436
+ "shovelling": "shoveling",
1437
+ "shrivelled": "shriveled",
1438
+ "shrivelling": "shriveling",
1439
+ "signalise": "signalize",
1440
+ "signalised": "signalized",
1441
+ "signalises": "signalizes",
1442
+ "signalising": "signalizing",
1443
+ "signalled": "signaled",
1444
+ "signalling": "signaling",
1445
+ "smoulder": "smolder",
1446
+ "smouldered": "smoldered",
1447
+ "smouldering": "smoldering",
1448
+ "smoulders": "smolders",
1449
+ "snivelled": "sniveled",
1450
+ "snivelling": "sniveling",
1451
+ "snorkelled": "snorkeled",
1452
+ "snorkelling": "snorkeling",
1453
+ "snowplough": "snowplow",
1454
+ "snowploughs": "snowplow",
1455
+ "socialisation": "socialization",
1456
+ "socialise": "socialize",
1457
+ "socialised": "socialized",
1458
+ "socialises": "socializes",
1459
+ "socialising": "socializing",
1460
+ "sodomise": "sodomize",
1461
+ "sodomised": "sodomized",
1462
+ "sodomises": "sodomizes",
1463
+ "sodomising": "sodomizing",
1464
+ "solemnise": "solemnize",
1465
+ "solemnised": "solemnized",
1466
+ "solemnises": "solemnizes",
1467
+ "solemnising": "solemnizing",
1468
+ "sombre": "somber",
1469
+ "specialisation": "specialization",
1470
+ "specialisations": "specializations",
1471
+ "specialise": "specialize",
1472
+ "specialised": "specialized",
1473
+ "specialises": "specializes",
1474
+ "specialising": "specializing",
1475
+ "spectre": "specter",
1476
+ "spectres": "specters",
1477
+ "spiralled": "spiraled",
1478
+ "spiralling": "spiraling",
1479
+ "splendour": "splendor",
1480
+ "splendours": "splendors",
1481
+ "squirrelled": "squirreled",
1482
+ "squirrelling": "squirreling",
1483
+ "stabilisation": "stabilization",
1484
+ "stabilise": "stabilize",
1485
+ "stabilised": "stabilized",
1486
+ "stabiliser": "stabilizer",
1487
+ "stabilisers": "stabilizers",
1488
+ "stabilises": "stabilizes",
1489
+ "stabilising": "stabilizing",
1490
+ "standardisation": "standardization",
1491
+ "standardise": "standardize",
1492
+ "standardised": "standardized",
1493
+ "standardises": "standardizes",
1494
+ "standardising": "standardizing",
1495
+ "stencilled": "stenciled",
1496
+ "stencilling": "stenciling",
1497
+ "sterilisation": "sterilization",
1498
+ "sterilisations": "sterilizations",
1499
+ "sterilise": "sterilize",
1500
+ "sterilised": "sterilized",
1501
+ "steriliser": "sterilizer",
1502
+ "sterilisers": "sterilizers",
1503
+ "sterilises": "sterilizes",
1504
+ "sterilising": "sterilizing",
1505
+ "stigmatisation": "stigmatization",
1506
+ "stigmatise": "stigmatize",
1507
+ "stigmatised": "stigmatized",
1508
+ "stigmatises": "stigmatizes",
1509
+ "stigmatising": "stigmatizing",
1510
+ "storey": "story",
1511
+ "storeys": "stories",
1512
+ "subsidisation": "subsidization",
1513
+ "subsidise": "subsidize",
1514
+ "subsidised": "subsidized",
1515
+ "subsidiser": "subsidizer",
1516
+ "subsidisers": "subsidizers",
1517
+ "subsidises": "subsidizes",
1518
+ "subsidising": "subsidizing",
1519
+ "succour": "succor",
1520
+ "succoured": "succored",
1521
+ "succouring": "succoring",
1522
+ "succours": "succors",
1523
+ "sulphate": "sulfate",
1524
+ "sulphates": "sulfates",
1525
+ "sulphide": "sulfide",
1526
+ "sulphides": "sulfides",
1527
+ "sulphur": "sulfur",
1528
+ "sulphurous": "sulfurous",
1529
+ "summarise": "summarize",
1530
+ "summarised": "summarized",
1531
+ "summarises": "summarizes",
1532
+ "summarising": "summarizing",
1533
+ "swivelled": "swiveled",
1534
+ "swivelling": "swiveling",
1535
+ "symbolise": "symbolize",
1536
+ "symbolised": "symbolized",
1537
+ "symbolises": "symbolizes",
1538
+ "symbolising": "symbolizing",
1539
+ "sympathise": "sympathize",
1540
+ "sympathised": "sympathized",
1541
+ "sympathiser": "sympathizer",
1542
+ "sympathisers": "sympathizers",
1543
+ "sympathises": "sympathizes",
1544
+ "sympathising": "sympathizing",
1545
+ "synchronisation": "synchronization",
1546
+ "synchronise": "synchronize",
1547
+ "synchronised": "synchronized",
1548
+ "synchronises": "synchronizes",
1549
+ "synchronising": "synchronizing",
1550
+ "synthesise": "synthesize",
1551
+ "synthesised": "synthesized",
1552
+ "synthesiser": "synthesizer",
1553
+ "synthesisers": "synthesizers",
1554
+ "synthesises": "synthesizes",
1555
+ "synthesising": "synthesizing",
1556
+ "syphon": "siphon",
1557
+ "syphoned": "siphoned",
1558
+ "syphoning": "siphoning",
1559
+ "syphons": "siphons",
1560
+ "systematisation": "systematization",
1561
+ "systematise": "systematize",
1562
+ "systematised": "systematized",
1563
+ "systematises": "systematizes",
1564
+ "systematising": "systematizing",
1565
+ "tantalise": "tantalize",
1566
+ "tantalised": "tantalized",
1567
+ "tantalises": "tantalizes",
1568
+ "tantalising": "tantalizing",
1569
+ "tantalisingly": "tantalizingly",
1570
+ "tasselled": "tasseled",
1571
+ "technicolour": "technicolor",
1572
+ "temporise": "temporize",
1573
+ "temporised": "temporized",
1574
+ "temporises": "temporizes",
1575
+ "temporising": "temporizing",
1576
+ "tenderise": "tenderize",
1577
+ "tenderised": "tenderized",
1578
+ "tenderises": "tenderizes",
1579
+ "tenderising": "tenderizing",
1580
+ "terrorise": "terrorize",
1581
+ "terrorised": "terrorized",
1582
+ "terrorises": "terrorizes",
1583
+ "terrorising": "terrorizing",
1584
+ "theatre": "theater",
1585
+ "theatregoer": "theatergoer",
1586
+ "theatregoers": "theatergoers",
1587
+ "theatres": "theaters",
1588
+ "theorise": "theorize",
1589
+ "theorised": "theorized",
1590
+ "theorises": "theorizes",
1591
+ "theorising": "theorizing",
1592
+ "tonne": "ton",
1593
+ "tonnes": "tons",
1594
+ "towelled": "toweled",
1595
+ "towelling": "toweling",
1596
+ "toxaemia": "toxemia",
1597
+ "tranquillise": "tranquilize",
1598
+ "tranquillised": "tranquilized",
1599
+ "tranquilliser": "tranquilizer",
1600
+ "tranquillisers": "tranquilizers",
1601
+ "tranquillises": "tranquilizes",
1602
+ "tranquillising": "tranquilizing",
1603
+ "tranquillity": "tranquility",
1604
+ "tranquillize": "tranquilize",
1605
+ "tranquillized": "tranquilized",
1606
+ "tranquillizer": "tranquilizer",
1607
+ "tranquillizers": "tranquilizers",
1608
+ "tranquillizes": "tranquilizes",
1609
+ "tranquillizing": "tranquilizing",
1610
+ "tranquilly": "tranquility",
1611
+ "transistorised": "transistorized",
1612
+ "traumatise": "traumatize",
1613
+ "traumatised": "traumatized",
1614
+ "traumatises": "traumatizes",
1615
+ "traumatising": "traumatizing",
1616
+ "travelled": "traveled",
1617
+ "traveller": "traveler",
1618
+ "travellers": "travelers",
1619
+ "travelling": "traveling",
1620
+ "travelog": "travelogue",
1621
+ "travelogs": "travelogues",
1622
+ "trialled": "trialed",
1623
+ "trialling": "trialing",
1624
+ "tricolour": "tricolor",
1625
+ "tricolours": "tricolors",
1626
+ "trivialise": "trivialize",
1627
+ "trivialised": "trivialized",
1628
+ "trivialises": "trivializes",
1629
+ "trivialising": "trivializing",
1630
+ "tumour": "tumor",
1631
+ "tumours": "tumors",
1632
+ "tunnelled": "tunneled",
1633
+ "tunnelling": "tunneling",
1634
+ "tyrannise": "tyrannize",
1635
+ "tyrannised": "tyrannized",
1636
+ "tyrannises": "tyrannizes",
1637
+ "tyrannising": "tyrannizing",
1638
+ "tyre": "tire",
1639
+ "tyres": "tires",
1640
+ "unauthorised": "unauthorized",
1641
+ "uncivilised": "uncivilized",
1642
+ "underutilised": "underutilized",
1643
+ "unequalled": "unequaled",
1644
+ "unfavourable": "unfavorable",
1645
+ "unfavourably": "unfavorably",
1646
+ "unionisation": "unionization",
1647
+ "unionise": "unionize",
1648
+ "unionised": "unionized",
1649
+ "unionises": "unionizes",
1650
+ "unionising": "unionizing",
1651
+ "unorganised": "unorganized",
1652
+ "unravelled": "unraveled",
1653
+ "unravelling": "unraveling",
1654
+ "unrecognisable": "unrecognizable",
1655
+ "unrecognised": "unrecognized",
1656
+ "unrivalled": "unrivaled",
1657
+ "unsavoury": "unsavory",
1658
+ "untrammelled": "untrammeled",
1659
+ "urbanisation": "urbanization",
1660
+ "urbanise": "urbanize",
1661
+ "urbanised": "urbanized",
1662
+ "urbanises": "urbanizes",
1663
+ "urbanising": "urbanizing",
1664
+ "utilisable": "utilizable",
1665
+ "utilisation": "utilization",
1666
+ "utilise": "utilize",
1667
+ "utilised": "utilized",
1668
+ "utilises": "utilizes",
1669
+ "utilising": "utilizing",
1670
+ "valour": "valor",
1671
+ "vandalise": "vandalize",
1672
+ "vandalised": "vandalized",
1673
+ "vandalises": "vandalizes",
1674
+ "vandalising": "vandalizing",
1675
+ "vaporisation": "vaporization",
1676
+ "vaporise": "vaporize",
1677
+ "vaporised": "vaporized",
1678
+ "vaporises": "vaporizes",
1679
+ "vaporising": "vaporizing",
1680
+ "vapour": "vapor",
1681
+ "vapours": "vapors",
1682
+ "verbalise": "verbalize",
1683
+ "verbalised": "verbalized",
1684
+ "verbalises": "verbalizes",
1685
+ "verbalising": "verbalizing",
1686
+ "victimisation": "victimization",
1687
+ "victimise": "victimize",
1688
+ "victimised": "victimized",
1689
+ "victimises": "victimizes",
1690
+ "victimising": "victimizing",
1691
+ "videodisc": "videodisk",
1692
+ "videodiscs": "videodisks",
1693
+ "vigour": "vigor",
1694
+ "visualisation": "visualization",
1695
+ "visualisations": "visualizations",
1696
+ "visualise": "visualize",
1697
+ "visualised": "visualized",
1698
+ "visualises": "visualizes",
1699
+ "visualising": "visualizing",
1700
+ "vocalisation": "vocalization",
1701
+ "vocalisations": "vocalizations",
1702
+ "vocalise": "vocalize",
1703
+ "vocalised": "vocalized",
1704
+ "vocalises": "vocalizes",
1705
+ "vocalising": "vocalizing",
1706
+ "vulcanised": "vulcanized",
1707
+ "vulgarisation": "vulgarization",
1708
+ "vulgarise": "vulgarize",
1709
+ "vulgarised": "vulgarized",
1710
+ "vulgarises": "vulgarizes",
1711
+ "vulgarising": "vulgarizing",
1712
+ "waggon": "wagon",
1713
+ "waggons": "wagons",
1714
+ "watercolour": "watercolor",
1715
+ "watercolours": "watercolors",
1716
+ "weaselled": "weaseled",
1717
+ "weaselling": "weaseling",
1718
+ "westernisation": "westernization",
1719
+ "westernise": "westernize",
1720
+ "westernised": "westernized",
1721
+ "westernises": "westernizes",
1722
+ "westernising": "westernizing",
1723
+ "womanise": "womanize",
1724
+ "womanised": "womanized",
1725
+ "womaniser": "womanizer",
1726
+ "womanisers": "womanizers",
1727
+ "womanises": "womanizes",
1728
+ "womanising": "womanizing",
1729
+ "woollen": "woolen",
1730
+ "woollens": "woolens",
1731
+ "woollies": "woolies",
1732
+ "woolly": "wooly",
1733
+ "worshipped": "worshiped",
1734
+ "worshipper": "worshiper",
1735
+ "worshipping": "worshiping",
1736
+ "yodelled": "yodeled",
1737
+ "yodelling": "yodeling",
1738
+ "yoghourt": "yogurt",
1739
+ "yoghourts": "yogurts",
1740
+ "yoghurt": "yogurt",
1741
+ "yoghurts": "yogurts"
1742
+ }
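The mapping that closes above is the British-to-American spelling table that Whisper repositories ship for evaluation: each UK variant is rewritten to its US form before word error rate is computed, so spelling differences are not scored as transcription errors. Below is a minimal sketch of how such a table can be applied; it is not code from this commit, and the file name `normalizer.json` and the word-by-word replacement strategy are assumptions based on the usual Whisper repo layout.

```python
import json
import re

# Hypothetical usage sketch: load the spelling table added in this commit
# (file name assumed to be "normalizer.json").
with open("normalizer.json") as f:
    spelling = json.load(f)

def normalize_spelling(text: str) -> str:
    # Partition the text into word and non-word runs so punctuation and
    # whitespace pass through unchanged; the table lists inflected forms
    # explicitly, so a plain per-word lookup is enough.
    tokens = re.findall(r"\w+|\W+", text)
    return "".join(spelling.get(tok, tok) for tok in tokens)

print(normalize_spelling("the theatre organised a programme for the neighbourhood"))
# -> "the theater organized a program for the neighborhood"
```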
preprocessor_config.json ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8133f7ffdd8c1a3dbc89a8cdde7acf319bc572ccfc3efdddc9561da79d8379f
+ size 3055754841
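The three lines above are a git-lfs pointer, not the weights themselves: the actual binary (~3.06 GB per the `size` field) is addressed by its sha256 `oid` and fetched on checkout. A hedged sketch of pulling the resolved file with `huggingface_hub` follows; the repo id is a placeholder, not taken from this commit.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id -- substitute the actual model repository.
# hf_hub_download resolves the LFS pointer and returns a local path to the
# real ~3.06 GB checkpoint whose sha256 matches the oid recorded above.
local_path = hf_hub_download(repo_id="<user>/<model-repo>", filename="pytorch_model.bin")
print(local_path)
```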
runs/Dec17_13-22-39_150-136-214-225/1671283363.367702/events.out.tfevents.1671283363.150-136-214-225.126569.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb48659d861f6237bfd80d751899b9ebe658ecd26c92dd3a811cbe7e6f03714a
+ size 5865
runs/Dec17_13-22-39_150-136-214-225/events.out.tfevents.1671283363.150-136-214-225.126569.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcd512d9851fac3f52f00e37e8e12cccd76b3cbbefb0832685e623c4a6ab644d
+ size 10875
special_tokens_map.json ADDED
@@ -0,0 +1,133 @@
+ {
+ "additional_special_tokens": [
+ "<|endoftext|>",
+ "<|startoftranscript|>",
+ "<|en|>",
+ "<|zh|>",
+ "<|de|>",
+ "<|es|>",
+ "<|ru|>",
+ "<|ko|>",
+ "<|fr|>",
+ "<|ja|>",
+ "<|pt|>",
+ "<|tr|>",
+ "<|pl|>",
+ "<|ca|>",
+ "<|nl|>",
+ "<|ar|>",
+ "<|sv|>",
+ "<|it|>",
+ "<|id|>",
+ "<|hi|>",
+ "<|fi|>",
+ "<|vi|>",
+ "<|iw|>",
+ "<|uk|>",
+ "<|el|>",
+ "<|ms|>",
+ "<|cs|>",
+ "<|ro|>",
+ "<|da|>",
+ "<|hu|>",
+ "<|ta|>",
+ "<|no|>",
+ "<|th|>",
+ "<|ur|>",
+ "<|hr|>",
+ "<|bg|>",
+ "<|lt|>",
+ "<|la|>",
+ "<|mi|>",
+ "<|ml|>",
+ "<|cy|>",
+ "<|sk|>",
+ "<|te|>",
+ "<|fa|>",
+ "<|lv|>",
+ "<|bn|>",
+ "<|sr|>",
+ "<|az|>",
+ "<|sl|>",
+ "<|kn|>",
+ "<|et|>",
+ "<|mk|>",
+ "<|br|>",
+ "<|eu|>",
+ "<|is|>",
+ "<|hy|>",
+ "<|ne|>",
+ "<|mn|>",
+ "<|bs|>",
+ "<|kk|>",
+ "<|sq|>",
+ "<|sw|>",
+ "<|gl|>",
+ "<|mr|>",
+ "<|pa|>",
+ "<|si|>",
+ "<|km|>",
+ "<|sn|>",
+ "<|yo|>",
+ "<|so|>",
+ "<|af|>",
+ "<|oc|>",
+ "<|ka|>",
+ "<|be|>",
+ "<|tg|>",
+ "<|sd|>",
+ "<|gu|>",
+ "<|am|>",
+ "<|yi|>",
+ "<|lo|>",
+ "<|uz|>",
+ "<|fo|>",
+ "<|ht|>",
+ "<|ps|>",
+ "<|tk|>",
+ "<|nn|>",
+ "<|mt|>",
+ "<|sa|>",
+ "<|lb|>",
+ "<|my|>",
+ "<|bo|>",
+ "<|tl|>",
+ "<|mg|>",
+ "<|as|>",
+ "<|tt|>",
+ "<|haw|>",
+ "<|ln|>",
+ "<|ha|>",
+ "<|ba|>",
+ "<|jw|>",
+ "<|su|>",
+ "<|translate|>",
+ "<|transcribe|>",
+ "<|startoflm|>",
+ "<|startofprev|>",
+ "<|nocaptions|>",
+ "<|notimestamps|>"
+ ],
+ "bos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|endoftext|>",
+ "unk_token": {
+ "content": "",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
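`special_tokens_map.json` carries one `<|xx|>` token per Whisper language plus the task tokens; for this Indonesian fine-tune the relevant pair is `<|id|>` and `<|transcribe|>`. A minimal sketch, assuming the standard `transformers` WhisperProcessor API rather than anything in this commit, of how those tokens become the forced decoder prompt at generation time:

```python
from transformers import WhisperProcessor

# Load the processor these special tokens originate from (tokenizer_config.json
# below records name_or_path "openai/whisper-medium").
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# Pin decoding to Indonesian transcription: positions 1-3 of the decoder
# prompt are forced to <|id|>, <|transcribe|>, <|notimestamps|>.
forced_ids = processor.get_decoder_prompt_ids(language="id", task="transcribe")
print(forced_ids)
```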
tokenizer_config.json ADDED
@@ -0,0 +1,36 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "errors": "replace",
+ "model_max_length": 1024,
+ "name_or_path": "openai/whisper-medium",
+ "pad_token": null,
+ "processor_class": "WhisperProcessor",
+ "return_attention_mask": false,
+ "special_tokens_map_file": null,
+ "tokenizer_class": "WhisperTokenizer",
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
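Round-tripping the config above: `from_pretrained` reads `tokenizer_class`, the `AddedToken` entries, and `model_max_length` to rebuild the tokenizer. A small sketch, loading from the upstream checkpoint named in `name_or_path` since this repo's own id does not appear in the diff:

```python
from transformers import WhisperTokenizer

# Rebuild the tokenizer described by tokenizer_config.json; language/task
# select the <|id|> and <|transcribe|> prefix tokens when encoding targets.
tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-medium", language="id", task="transcribe"
)
ids = tokenizer("halo dunia").input_ids
print(tokenizer.decode(ids, skip_special_tokens=True))  # -> "halo dunia"
```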
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e94a978e18b24c646193fae6e142d5234f7cb62e405948f6d06319cd01882645
+ size 3579
vocab.json ADDED
The diff for this file is too large to render. See raw diff