{ "cells": [ { "cell_type": "markdown", "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6", "metadata": { "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6" }, "source": [ "# Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers" ] }, { "cell_type": "markdown", "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a", "metadata": { "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a" }, "source": [ "In this Colab, we present a step-by-step guide on how to fine-tune Whisper \n", "for any multilingual ASR dataset using Hugging Face 🤗 Transformers. This is a \n", "more \"hands-on\" version of the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). \n", "For a more in-depth explanation of Whisper, the Common Voice dataset and the theory behind fine-tuning, the reader is advised to refer to the blog post." ] }, { "cell_type": "markdown", "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e", "metadata": { "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e" }, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0", "metadata": { "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0" }, "source": [ "Whisper is a pre-trained model for automatic speech recognition (ASR) \n", "published in [September 2022](https://openai.com/blog/whisper/) by the authors \n", "Alec Radford et al. from OpenAI. Unlike many of its predecessors, such as \n", "[Wav2Vec 2.0](https://arxiv.org/abs/2006.11477), which are pre-trained \n", "on un-labelled audio data, Whisper is pre-trained on a vast quantity of \n", "**labelled** audio-transcription data, 680,000 hours to be precise. \n", "This is an order of magnitude more data than the un-labelled audio data used \n", "to train Wav2Vec 2.0 (60,000 hours). What is more, 117,000 hours of this \n", "pre-training data is multilingual ASR data. 
This results in checkpoints \n", "that can be applied to over 96 languages, many of which are considered \n", "_low-resource_.\n", "\n", "When scaled to 680,000 hours of labelled pre-training data, Whisper models \n", "demonstrate a strong ability to generalise to many datasets and domains.\n", "The pre-trained checkpoints achieve competitive results to state-of-the-art \n", "ASR systems, with near 3% word error rate (WER) on the test-clean subset of \n", "LibriSpeech ASR and a new state-of-the-art on TED-LIUM with 4.7% WER (_c.f._ \n", "Table 8 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).\n", "The extensive multilingual ASR knowledge acquired by Whisper during pre-training \n", "can be leveraged for other low-resource languages; through fine-tuning, the \n", "pre-trained checkpoints can be adapted for specific datasets and languages \n", "to further improve upon these results. We'll show just how Whisper can be fine-tuned \n", "for low-resource languages in this Colab." ] }, { "cell_type": "markdown", "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72", "metadata": { "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "21b6316e-8a55-4549-a154-66d3da2ab74a", "metadata": { "id": "21b6316e-8a55-4549-a154-66d3da2ab74a" }, "source": [ "The Whisper checkpoints come in five configurations of varying model sizes.\n", "The smallest four are trained on either English-only or multilingual data.\n", "The largest checkpoint is multilingual only. All nine of the pre-trained checkpoints \n", "are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). 
The \n", "checkpoints are summarised in the following table with links to the models on the Hub:\n", "\n", "| Size | Layers | Width | Heads | Parameters | English-only | Multilingual |\n", "|--------|--------|-------|-------|------------|------------------------------------------------------|---------------------------------------------------|\n", "| tiny | 4 | 384 | 6 | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |\n", "| base | 6 | 512 | 8 | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |\n", "| small | 12 | 768 | 12 | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |\n", "| medium | 24 | 1024 | 16 | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |\n", "| large | 32 | 1280 | 20 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |\n", "\n", "For demonstration purposes, we'll fine-tune the multilingual version of the \n", "[`\"small\"`](https://huggingface.co/openai/whisper-small) checkpoint with 244M params (~1 GB). \n", "As for our data, we'll train and evaluate our system on a low-resource language \n", "taken from the [FLEURS](https://huggingface.co/datasets/google/fleurs)\n", "dataset. We'll show that with as little as 8 hours of fine-tuning data, we can achieve \n", "strong performance in this language." ] }, { "cell_type": "markdown", "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a", "metadata": { "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a" }, "source": [ "------------------------------------------------------------------------\n", "\n", "\\\\({}^1\\\\) The name Whisper follows from the acronym “WSPSR”, which stands for “Web-scale Supervised Pre-training for Speech Recognition”."
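As a quick reference for the table above, the Hub model IDs follow a simple naming scheme: `openai/whisper-<size>`, with an `.en` suffix for the English-only variants (and no English-only variant for `large`). A minimal sketch of that scheme (the `checkpoint_id` helper is hypothetical, not part of any library):

```python
# Multilingual Whisper checkpoint IDs, as listed in the table above.
WHISPER_SIZES = ("tiny", "base", "small", "medium", "large")


def checkpoint_id(size: str, english_only: bool = False) -> str:
    """Return the Hub model ID for a Whisper size; '.en' suffix for English-only."""
    if size not in WHISPER_SIZES:
        raise ValueError(f"Unknown Whisper size: {size!r}")
    if size == "large" and english_only:
        # The largest checkpoint was released as multilingual only.
        raise ValueError("No English-only checkpoint exists for 'large'.")
    suffix = ".en" if english_only else ""
    return f"openai/whisper-{size}{suffix}"
```

For example, `checkpoint_id("small")` gives `"openai/whisper-small"`, the checkpoint we fine-tune in this notebook.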
] }, { "cell_type": "markdown", "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0", "metadata": { "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0" }, "source": [ "## Load Dataset\n", "\n", "We load the MS-MY (Malay) configuration of the FLEURS dataset, combining the train and validation sets to form our training data." ] }, { "cell_type": "code", "execution_count": 31, "id": "18406a25", "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "325cdc2642f546f68aed269575f1c975", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HTML(value='
| Step | Training Loss | Validation Loss | WER (%) |
|------|---------------|-----------------|-----------|
| 1000 | 0.001500 | 0.332360 | 15.645336 |
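The WER values reported in the training log above are percentages. WER is the word-level Levenshtein (edit) distance between the reference and predicted transcriptions, divided by the number of reference words. A minimal sketch of the computation (a hypothetical `wer` helper; in practice the notebook uses the 🤗 Evaluate WER metric):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(
                d[j] + 1,               # deletion
                d[j - 1] + 1,           # insertion
                prev + (r != h),        # substitution (cost 0 if words match)
            )
            prev = cur
    return d[len(hyp)] / len(ref)
```

Multiplying the result by 100 gives the percentage form shown above, so lower is better and 15.65 means roughly one word error per six or seven reference words.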