{ "cells": [ { "cell_type": "markdown", "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6", "metadata": { "id": "75b58048-7d14-4fc6-8085-1fc08c81b4a6" }, "source": [ "# Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers" ] }, { "cell_type": "markdown", "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a", "metadata": { "id": "fbfa8ad5-4cdc-4512-9058-836cbbf65e1a" }, "source": [ "In this Colab, we present a step-by-step guide on how to fine-tune Whisper \n", "for any multilingual ASR dataset using Hugging Face 🤗 Transformers. This is a \n", "more \"hands-on\" version of the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). \n", "For a more in-depth explanation of Whisper, the Common Voice dataset and the theory behind fine-tuning, the reader is advised to refer to the blog post." ] }, { "cell_type": "markdown", "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e", "metadata": { "id": "afe0d503-ae4e-4aa7-9af4-dbcba52db41e" }, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0", "metadata": { "id": "9ae91ed4-9c3e-4ade-938e-f4c2dcfbfdc0" }, "source": [ "Whisper is a pre-trained model for automatic speech recognition (ASR) \n", "published in [September 2022](https://openai.com/blog/whisper/) by the authors \n", "Alec Radford et al. from OpenAI. Unlike many of its predecessors, such as \n", "[Wav2Vec 2.0](https://arxiv.org/abs/2006.11477), which are pre-trained \n", "on un-labelled audio data, Whisper is pre-trained on a vast quantity of \n", "**labelled** audio-transcription data, 680,000 hours to be precise. \n", "This is an order of magnitude more data than the un-labelled audio data used \n", "to train Wav2Vec 2.0 (60,000 hours). What is more, 117,000 hours of this \n", "pre-training data is multilingual ASR data. This results in checkpoints \n", "that can be applied to over 96 languages, many of which are considered \n", "_low-resource_.\n", "\n", "When scaled to 680,000 hours of labelled pre-training data, Whisper models \n", "demonstrate a strong ability to generalise to many datasets and domains.\n", "The pre-trained checkpoints achieve competitive results to state-of-the-art \n", "ASR systems, with near 3% word error rate (WER) on the test-clean subset of \n", "LibriSpeech ASR and a new state-of-the-art on TED-LIUM with 4.7% WER (_c.f._ \n", "Table 8 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).\n", "The extensive multilingual ASR knowledge acquired by Whisper during pre-training \n", "can be leveraged for other low-resource languages; through fine-tuning, the \n", "pre-trained checkpoints can be adapted for specific datasets and languages \n", "to further improve upon these results. We'll show just how Whisper can be fine-tuned \n", "for low-resource languages in this Colab." ] }, { "cell_type": "markdown", "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72", "metadata": { "id": "e59b91d6-be24-4b5e-bb38-4977ea143a72" }, "source": [ "
\n", "\"Trulli\"\n", "
Figure 1: Whisper model. The architecture \n", "follows the standard Transformer-based encoder-decoder model. A \n", "log-Mel spectrogram is input to the encoder. The last encoder \n", "hidden states are input to the decoder via cross-attention mechanisms. The \n", "decoder autoregressively predicts text tokens, jointly conditional on the \n", "encoder hidden states and previously predicted tokens. Figure source: \n", "OpenAI Whisper Blog.
\n", "
" ] }, { "cell_type": "markdown", "id": "21b6316e-8a55-4549-a154-66d3da2ab74a", "metadata": { "id": "21b6316e-8a55-4549-a154-66d3da2ab74a" }, "source": [ "The Whisper checkpoints come in five configurations of varying model sizes.\n", "The smallest four are trained on either English-only or multilingual data.\n", "The largest checkpoint is multilingual only. All nine of the pre-trained checkpoints \n", "are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The \n", "checkpoints are summarised in the following table with links to the models on the Hub:\n", "\n", "| Size | Layers | Width | Heads | Parameters | English-only | Multilingual |\n", "|--------|--------|-------|-------|------------|------------------------------------------------------|---------------------------------------------------|\n", "| tiny | 4 | 384 | 6 | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny.) |\n", "| base | 6 | 512 | 8 | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |\n", "| small | 12 | 768 | 12 | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |\n", "| medium | 24 | 1024 | 16 | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |\n", "| large | 32 | 1280 | 20 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |\n", "\n", "For demonstration purposes, we'll fine-tune the multilingual version of the \n", "[`\"small\"`](https://huggingface.co/openai/whisper-small) checkpoint with 244M params (~= 1GB). \n", "As for our data, we'll train and evaluate our system on a low-resource language \n", "taken from the [Common Voice](https://huggingface.co/datasets/mozilla-foundation/fleurs_11_0)\n", "dataset. We'll show that with as little as 8 hours of fine-tuning data, we can achieve \n", "strong performance in this language." ] }, { "cell_type": "markdown", "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a", "metadata": { "id": "3a680dfc-cbba-4f6c-8a1f-e1a5ff3f123a" }, "source": [ "------------------------------------------------------------------------\n", "\n", "\\\\({}^1\\\\) The name Whisper follows from the acronym “WSPSR”, which stands for “Web-scale Supervised Pre-training for Speech Recognition”." ] }, { "cell_type": "markdown", "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0", "metadata": { "id": "b219c9dd-39b6-4a95-b2a1-3f547a1e7bc0" }, "source": [ "## Load Dataset\n", "Loading MS-MY Dataset from FLEURS.\n", "Combine train and validation set." 
] }, { "cell_type": "code", "execution_count": 1, "id": "a2787582-554f-44ce-9f38-4180a5ed6b44", "metadata": { "id": "a2787582-554f-44ce-9f38-4180a5ed6b44" }, "outputs": [], "source": [ "from datasets import load_dataset, DatasetDict\n", "\n", "# fleurs = DatasetDict()\n", "# fleurs[\"train\"] = load_dataset(\"google/fleurs\", \"id_id\", split=\"train+validation\", use_auth_token=True)\n", "# fleurs[\"test\"] = load_dataset(\"google/fleurs\", \"id_id\", split=\"test\", use_auth_token=True)\n", "\n", "# fleurs = fleurs.remove_columns([\"id\", \"num_samples\", \"path\", \"raw_transcription\", \"gender\", \"lang_id\", \"language\", \"lang_group_id\"])\n", "\n", "# print(fleurs)" ] }, { "cell_type": "code", "execution_count": 2, "id": "d087b451", "metadata": { "scrolled": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Found cached dataset common_voice_11_0 (/home/ubuntu/.cache/huggingface/datasets/mozilla-foundation___common_voice_11_0/yue/11.0.0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f)\n", "Found cached dataset common_voice_11_0 (/home/ubuntu/.cache/huggingface/datasets/mozilla-foundation___common_voice_11_0/yue/11.0.0/f8e47235d9b4e68fa24ed71d63266a02018ccf7194b2a8c9c598a5f3ab304d9f)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['audio', 'transcription'],\n", " num_rows: 5296\n", " })\n", " test: Dataset({\n", " features: ['audio', 'transcription'],\n", " num_rows: 2438\n", " })\n", "})\n" ] } ], "source": [ "cv = DatasetDict()\n", "cv[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"yue\", split=\"train+validation\", use_auth_token=True)\n", "cv[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"yue\", split=\"test\", use_auth_token=True)\n", "\n", "cv = cv.remove_columns([\"client_id\", \"path\", 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\n", "cv = cv.rename_column('sentence', 'transcription')\n", "print(cv)" ] }, { "cell_type": "code", "execution_count": 3, "id": "f681bfef", "metadata": {}, "outputs": [], "source": [ "# lbv = DatasetDict()\n", "# lbv[\"train\"] = load_dataset(\"indonesian-nlp/librivox-indonesia\", \"ind\", split=\"train\", use_auth_token=True)\n", "# lbv[\"test\"] = load_dataset(\"indonesian-nlp/librivox-indonesia\", \"ind\", split=\"test\", use_auth_token=True)\n", "\n", "# lbv = lbv.remove_columns([\"path\", \"language\", \"reader\"])\n", "# lbv = lbv.rename_column('sentence', 'transcription')\n", "# print(lbv)" ] }, { "cell_type": "markdown", "id": "2d63b2d2-f68a-4d74-b7f1-5127f6d16605", "metadata": { "id": "2d63b2d2-f68a-4d74-b7f1-5127f6d16605" }, "source": [ "## Prepare Feature Extractor, Tokenizer and Data" ] }, { "cell_type": "markdown", "id": "601c3099-1026-439e-93e2-5635b3ba5a73", "metadata": { "id": "601c3099-1026-439e-93e2-5635b3ba5a73" }, "source": [ "The ASR pipeline can be de-composed into three stages: \n", "1) A feature extractor which pre-processes the raw audio-inputs\n", "2) The model which performs the sequence-to-sequence mapping \n", "3) A tokenizer which post-processes the model outputs to text format\n", "\n", "In 🤗 Transformers, the Whisper model has an associated feature extractor and tokenizer, \n", "called [WhisperFeatureExtractor](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperFeatureExtractor)\n", "and 
[WhisperTokenizer](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperTokenizer) \n", "respectively.\n", "\n", "We'll go through details for setting up the feature extractor and tokenizer one-by-one!" ] }, { "cell_type": "markdown", "id": "560332eb-3558-41a1-b500-e83a9f695f84", "metadata": { "id": "560332eb-3558-41a1-b500-e83a9f695f84" }, "source": [ "### Load WhisperFeatureExtractor" ] }, { "cell_type": "markdown", "id": "32ec8068-0bd7-412d-b662-0edb9d1e7365", "metadata": { "id": "32ec8068-0bd7-412d-b662-0edb9d1e7365" }, "source": [ "The Whisper feature extractor performs two operations:\n", "1. Pads / truncates the audio inputs to 30s: any audio inputs shorter than 30s are padded to 30s with silence (zeros), and those longer than 30s are truncated to 30s\n", "2. Converts the audio inputs to _log-Mel spectrogram_ input features, a visual representation of the audio and the form of the input expected by the Whisper model" ] }, { "cell_type": "markdown", "id": "589d9ec1-d12b-4b64-93f7-04c63997da19", "metadata": { "id": "589d9ec1-d12b-4b64-93f7-04c63997da19" }, "source": [ "
\n", "\"Trulli\"\n", "
Figure 2: Conversion of sampled audio array to log-Mel spectrogram.\n", "Left: sampled 1-dimensional audio signal. Right: corresponding log-Mel spectrogram. Figure source:\n", "Google SpecAugment Blog.\n", "
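" ] }, { "cell_type": "markdown", "id": "fe-sanity-note", "metadata": {}, "source": [ "To make these two operations concrete, here is a small optional sanity check (a sketch, not part of the training pipeline): we feed a dummy 5-second waveform of silence to a Whisper feature extractor and inspect the shape of the output features." ] }, { "cell_type": "code", "execution_count": null, "id": "fe-sanity-check", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from transformers import WhisperFeatureExtractor\n", "\n", "# Optional sanity check: a 5 s dummy waveform is padded to 30 s and converted to\n", "# a log-Mel spectrogram with 80 mel bins and 3000 frames.\n", "fe = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-large-v2\")\n", "dummy_waveform = np.zeros(16000 * 5, dtype=np.float32)  # 5 s of silence at 16 kHz\n", "features = fe(dummy_waveform, sampling_rate=16000, return_tensors=\"np\").input_features\n", "print(features.shape)  # (1, 80, 3000)" ] }, { "cell_type": "markdown", "id": "fe-sanity-after", "metadata": {}, "source": [ "Every input is therefore mapped to a fixed-size log-Mel spectrogram, regardless of its original duration.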
" ] }, { "cell_type": "markdown", "id": "b2ef54d5-b946-4c1d-9fdc-adc5d01b46aa", "metadata": { "id": "b2ef54d5-b946-4c1d-9fdc-adc5d01b46aa" }, "source": [ "We'll load the feature extractor from the pre-trained checkpoint with the default values:" ] }, { "cell_type": "code", "execution_count": 4, "id": "bc77d7bb-f9e2-47f5-b663-30f7a4321ce5", "metadata": { "id": "bc77d7bb-f9e2-47f5-b663-30f7a4321ce5" }, "outputs": [], "source": [ "from transformers import WhisperFeatureExtractor\n", "\n", "feature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-large-v2\")" ] }, { "cell_type": "markdown", "id": "93748af7-b917-4ecf-a0c8-7d89077ff9cb", "metadata": { "id": "93748af7-b917-4ecf-a0c8-7d89077ff9cb" }, "source": [ "### Load WhisperTokenizer" ] }, { "cell_type": "markdown", "id": "2bc82609-a9fb-447a-a2af-99597c864029", "metadata": { "id": "2bc82609-a9fb-447a-a2af-99597c864029" }, "source": [ "The Whisper model outputs a sequence of _token ids_. The tokenizer maps each of these token ids to their corresponding text string. For Hindi, we can load the pre-trained tokenizer and use it for fine-tuning without any further modifications. We simply have to \n", "specify the target language and the task. These arguments inform the \n", "tokenizer to prefix the language and task tokens to the start of encoded \n", "label sequences:" ] }, { "cell_type": "code", "execution_count": 13, "id": "c7b07f9b-ae0e-4f89-98f0-0c50d432eab6", "metadata": { "id": "c7b07f9b-ae0e-4f89-98f0-0c50d432eab6", "outputId": "5c004b44-86e7-4e00-88be-39e0af5eed69" }, "outputs": [], "source": [ "from transformers import WhisperTokenizer\n", "\n", "tokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-large-v2\", language=\"Chinese\", task=\"transcribe\")" ] }, { "cell_type": "markdown", "id": "d2ef23f3-f4a8-483a-a2dc-080a7496cb1b", "metadata": { "id": "d2ef23f3-f4a8-483a-a2dc-080a7496cb1b" }, "source": [ "### Combine To Create A WhisperProcessor" ] }, { "cell_type": "markdown", "id": "5ff67654-5a29-4bb8-a69d-0228946c6f8d", "metadata": { "id": "5ff67654-5a29-4bb8-a69d-0228946c6f8d" }, "source": [ "To simplify using the feature extractor and tokenizer, we can _wrap_ \n", "both into a single `WhisperProcessor` class. This processor object \n", "inherits from the `WhisperFeatureExtractor` and `WhisperProcessor`, \n", "and can be used on the audio inputs and model predictions as required. \n", "In doing so, we only need to keep track of two objects during training: \n", "the `processor` and the `model`:" ] }, { "cell_type": "code", "execution_count": 14, "id": "77d9f0c5-8607-4642-a8ac-c3ab2e223ea6", "metadata": { "id": "77d9f0c5-8607-4642-a8ac-c3ab2e223ea6" }, "outputs": [], "source": [ "from transformers import WhisperProcessor\n", "\n", "processor = WhisperProcessor.from_pretrained(\"openai/whisper-large-v2\", language=\"Chinese\", task=\"transcribe\")" ] }, { "cell_type": "markdown", "id": "381acd09-0b0f-4d04-9eb3-f028ac0e5f2c", "metadata": { "id": "381acd09-0b0f-4d04-9eb3-f028ac0e5f2c" }, "source": [ "### Prepare Data" ] }, { "cell_type": "code", "execution_count": 7, "id": "c69246a2", "metadata": {}, "outputs": [], "source": [ "from datasets import Audio\n", "\n", "cv = cv.cast_column(\"audio\", Audio(sampling_rate=16000))\n", "# fleurs = fleurs.cast_column(\"audio\", Audio(sampling_rate=16000))\n", "# lbv = lbv.cast_column(\"audio\", Audio(sampling_rate=16000))" ] }, { "cell_type": "markdown", "id": "3df7378a-a4c0-45d7-8d07-defbd1062ab6", "metadata": {}, "source": [ "We'll define our pre-processing strategy. 
We advise that you **do not** lower-case the transcriptions or remove punctuation unless mixing different datasets. This will enable you to fine-tune Whisper models that can predict punctuation and casing. Later, you will see how we can evaluate the predictions without punctuation or casing, so that the models benefit from the WER improvement obtained by normalising the transcriptions while still predicting fully formatted transcriptions." ] }, { "cell_type": "code", "execution_count": 8, "id": "d041650e-1c48-4439-87b3-5b6f4a514107", "metadata": {}, "outputs": [], "source": [ "from transformers.models.whisper.english_normalizer import BasicTextNormalizer\n", "\n", "do_lower_case = False\n", "do_remove_punctuation = False\n", "\n", "normalizer = BasicTextNormalizer()" ] }, { "cell_type": "markdown", "id": "89e12c2e-2f14-479b-987b-f0c75c881095", "metadata": {}, "source": [ "Now we can write a function to prepare our data ready for the model:\n", "1. We load and resample the audio data by calling `batch[\"audio\"]`. As explained above, 🤗 Datasets performs any necessary resampling operations on the fly.\n", "2. We use the feature extractor to compute the log-Mel spectrogram input features from our 1-dimensional audio array.\n", "3. We perform any optional pre-processing (lower-case or remove punctuation).\n", "4. We encode the transcriptions to label ids through the use of the tokenizer." ] }, { "cell_type": "code", "execution_count": 9, "id": "4c79b333", "metadata": { "scrolled": false }, "outputs": [], "source": [ "from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift\n", "\n", "augment_waveform = Compose([\n", "# AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.3),\n", " TimeStretch(min_rate=0.9, max_rate=1.25, p=0.3, leave_length_unchanged=False),\n", "# PitchShift(min_semitones=-4, max_semitones=4, p=0.3),\n", " ])\n", "\n", "def augment_dataset(batch):\n", "\n", " audio = batch[\"audio\"][\"array\"]\n", " # apply augmentation\n", " augmented_audio = augment_waveform(samples=audio, sample_rate=16000)\n", "\n", " batch[\"audio\"][\"array\"] = augmented_audio\n", "\n", " return batch" ] }, { "cell_type": "code", "execution_count": 15, "id": "c085911c-a10a-41ef-8874-306e0503e9bb", "metadata": {}, "outputs": [], "source": [ "def prepare_dataset(batch):\n", " # load and (possibly) resample audio data to 16kHz\n", " audio = batch[\"audio\"]\n", "\n", " # compute log-Mel input features from input audio array \n", " batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\n", " # compute input length of audio sample in seconds\n", " batch[\"input_length\"] = len(audio[\"array\"]) / audio[\"sampling_rate\"]\n", " \n", " # optional pre-processing steps\n", " transcription = batch[\"transcription\"]\n", " if do_lower_case:\n", " transcription = transcription.lower()\n", " if do_remove_punctuation:\n", " transcription = normalizer(transcription).strip()\n", " \n", " # encode target text to label ids\n", " batch[\"labels\"] = processor.tokenizer(transcription).input_ids\n", " return batch" ] }, { "cell_type": "markdown", "id": "8c960965-9fb6-466f-9dbd-c9d43e71d9d0", "metadata": { "id": "70b319fb-2439-4ef6-a70d-a47bf41c4a13" }, "source": [ "We can apply the data preparation function to all of our training examples using dataset's `.map` method. The argument `num_proc` specifies how many CPU cores to use. Setting `num_proc` > 1 will enable multiprocessing. 
If the `.map` method hangs with multiprocessing, set `num_proc=1` and process the dataset sequentially." ] }, { "cell_type": "code", "execution_count": 11, "id": "db271164", "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f9683eb693c449c19719183b46b54fbb", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/5296 [00:00 Dict[str, torch.Tensor]:\n", " # split inputs and labels since they have to be of different lengths and need different padding methods\n", " # first treat the audio inputs by simply returning torch tensors\n", " input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\n", " batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\n", "\n", " # get the tokenized label sequences\n", " label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\n", " # pad the labels to max length\n", " labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\n", "\n", " # replace padding with -100 to ignore loss correctly\n", " labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\n", "\n", " # if bos token is appended in previous tokenization step,\n", " # cut bos token here as it's append later anyways\n", " if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\n", " labels = labels[:, 1:]\n", "\n", " batch[\"labels\"] = labels\n", "\n", " return batch" ] }, { "cell_type": "markdown", "id": "3cae7dbf-8a50-456e-a3a8-7fd005390f86", "metadata": { "id": "3cae7dbf-8a50-456e-a3a8-7fd005390f86" }, "source": [ "Let's initialise the data collator we've just defined:" ] }, { "cell_type": "code", "execution_count": 20, "id": "fc834702-c0d3-4a96-b101-7b87be32bf42", "metadata": { "id": "fc834702-c0d3-4a96-b101-7b87be32bf42" }, "outputs": [], "source": [ "data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)" ] }, { "cell_type": "markdown", "id": "d62bb2ab-750a-45e7-82e9-61d6f4805698", "metadata": { "id": "d62bb2ab-750a-45e7-82e9-61d6f4805698" }, "source": [ "### Evaluation Metrics" ] }, { "cell_type": "markdown", "id": "66fee1a7-a44c-461e-b047-c3917221572e", "metadata": { "id": "66fee1a7-a44c-461e-b047-c3917221572e" }, "source": [ "We'll use the word error rate (WER) metric, the 'de-facto' metric for assessing \n", "ASR systems. For more information, refer to the WER [docs](https://huggingface.co/metrics/wer). We'll load the WER metric from 🤗 Evaluate:" ] }, { "cell_type": "code", "execution_count": 21, "id": "b22b4011-f31f-4b57-b684-c52332f92890", "metadata": { "id": "b22b4011-f31f-4b57-b684-c52332f92890" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "2da06d3741bf4cee961e3352d32a2515", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading builder script: 0%| | 0.00/5.60k [00:00\n", " \n", " \n", " [ 11/1000 01:03 < 1:55:52, 0.14 it/s, Epoch 0.06/7]\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Step | Training Loss | Validation Loss

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "trainer.train()" ] }, { "cell_type": "markdown", "id": "810ced54-7187-4a06-b2fe-ba6dcca94dc3", "metadata": { "id": "810ced54-7187-4a06-b2fe-ba6dcca94dc3" }, "source": [ "We can label our checkpoint with the `whisper-event` tag on push by setting the appropriate key-word arguments (kwargs):" ] }, { "cell_type": "code", "execution_count": null, "id": "c704f91e-241b-48c9-b8e0-f0da396a9663", "metadata": { "id": "c704f91e-241b-48c9-b8e0-f0da396a9663" }, "outputs": [], "source": [ "kwargs = {\n", " \"dataset_tags\": \"mozilla-foundation/common_voice_11_0\",\n", " \"dataset\": \"mozilla-foundation/common_voice_11_0\", # a 'pretty' name for the training dataset\n", " \"language\": \"yue\",\n", " \"model_name\": \"Whisper Large V2 - Cantonese - Augmented\", # a 'pretty' name for your model\n", " \"finetuned_from\": \"openai/whisper-large-v2\",\n", " \"tasks\": \"automatic-speech-recognition\",\n", " \"tags\": \"whisper-event\",\n", "}" ] }, { "cell_type": "markdown", "id": "090d676a-f944-4297-a938-a40eda0b2b68", "metadata": { "id": "090d676a-f944-4297-a938-a40eda0b2b68" }, "source": [ "The training results can now be uploaded to the Hub. To do so, execute the `push_to_hub` command and save the preprocessor object we created:" ] }, { "cell_type": "code", "execution_count": null, "id": "d7030622-caf7-4039-939b-6195cdaa2585", "metadata": { "id": "d7030622-caf7-4039-939b-6195cdaa2585" }, "outputs": [], "source": [ "trainer.push_to_hub(**kwargs)" ] }, { "cell_type": "code", "execution_count": null, "id": "e19f35cf", "metadata": {}, "outputs": [], "source": [ "fleurs_results = trainer.evaluate(fleurs['test'])\n", "print(fleurs_results)\n", "\n", "cv_results = trainer.evaluate(cv['test'])\n", "print(cv_results)\n", "\n", "lbv_results = trainer.evaluate(lbv['test'])\n", "print(lbv_results)" ] }, { "cell_type": "code", "execution_count": null, "id": "1c1e53d0", "metadata": {}, "outputs": [], "source": [ "evaluate.push_to_hub(\n", " model_id='Scrya/whisper-medium-id',\n", " metric_value=round(fleurs_results['eval_wer'], 2),\n", " metric_type=\"wer\",\n", " metric_name=\"WER\",\n", " dataset_name='google/fleurs',\n", " dataset_type='google/fleurs',\n", " dataset_split='test',\n", " dataset_config='id_id',\n", " task_type=\"automatic-speech-recognition\",\n", " task_name=\"Automatic Speech Recognition\",\n", " overwrite=True\n", " )\n", "\n", "evaluate.push_to_hub(\n", " model_id='Scrya/whisper-medium-id',\n", " metric_value=round(fleurs_results['eval_cer'], 2),\n", " metric_type=\"cer\",\n", " metric_name=\"CER\",\n", " dataset_name='google/fleurs',\n", " dataset_type='google/fleurs',\n", " dataset_split='test',\n", " dataset_config='id_id',\n", " task_type=\"automatic-speech-recognition\",\n", " task_name=\"Automatic Speech Recognition\",\n", " overwrite=True\n", " )\n", "\n", "evaluate.push_to_hub(\n", " model_id='Scrya/whisper-medium-id',\n", " metric_value=round(cv_results['eval_wer'], 2),\n", " metric_type=\"wer\",\n", " metric_name=\"WER\",\n", " dataset_name='mozilla-foundation/common_voice_11_0',\n", " dataset_type='mozilla-foundation/common_voice_11_0',\n", " dataset_split='test',\n", " dataset_config='id',\n", " task_type=\"automatic-speech-recognition\",\n", " task_name=\"Automatic Speech Recognition\",\n", " overwrite=True\n", " )\n", "\n", "evaluate.push_to_hub(\n", " model_id='Scrya/whisper-medium-id',\n", " metric_value=round(cv_results['eval_cer'], 2),\n", " metric_type=\"cer\",\n", " 
metric_name=\"CER\",\n", " dataset_name='mozilla-foundation/common_voice_11_0',\n", " dataset_type='mozilla-foundation/common_voice_11_0',\n", " dataset_split='test',\n", " dataset_config='id',\n", " task_type=\"automatic-speech-recognition\",\n", " task_name=\"Automatic Speech Recognition\",\n", " overwrite=True\n", " )\n", "\n", "evaluate.push_to_hub(\n", " model_id='Scrya/whisper-medium-id',\n", " metric_value=round(lbv_results['eval_wer'], 2),\n", " metric_type=\"wer\",\n", " metric_name=\"WER\",\n", " dataset_name='indonesian-nlp/librivox-indonesia',\n", " dataset_type='indonesian-nlp/librivox-indonesia',\n", " dataset_split='test',\n", " dataset_config='ind',\n", " task_type=\"automatic-speech-recognition\",\n", " task_name=\"Automatic Speech Recognition\",\n", " overwrite=True\n", " )\n", "\n", "evaluate.push_to_hub(\n", " model_id='Scrya/whisper-medium-id',\n", " metric_value=round(lbv_results['eval_cer'], 2),\n", " metric_type=\"cer\",\n", " metric_name=\"CER\",\n", " dataset_name='indonesian-nlp/librivox-indonesia',\n", " dataset_type='indonesian-nlp/librivox-indonesia',\n", " dataset_split='test',\n", " dataset_config='ind',\n", " task_type=\"automatic-speech-recognition\",\n", " task_name=\"Automatic Speech Recognition\",\n", " overwrite=True\n", " )" ] }, { "cell_type": "markdown", "id": "ca743fbd-602c-48d4-ba8d-a2fe60af64ba", "metadata": { "id": "ca743fbd-602c-48d4-ba8d-a2fe60af64ba" }, "source": [ "## Closing Remarks" ] }, { "cell_type": "markdown", "id": "7f737783-2870-4e35-aa11-86a42d7d997a", "metadata": { "id": "7f737783-2870-4e35-aa11-86a42d7d997a" }, "source": [ "In this blog, we covered a step-by-step guide on fine-tuning Whisper for multilingual ASR \n", "using 🤗 Datasets, Transformers and the Hugging Face Hub. For more details on the Whisper model, the Common Voice dataset and the theory behind fine-tuning, refere to the accompanying [blog post](https://huggingface.co/blog/fine-tune-whisper). If you're interested in fine-tuning other \n", "Transformers models, both for English and multilingual ASR, be sure to check out the \n", "examples scripts at [examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)." ] } ], "metadata": { "colab": { "include_colab_link": true, "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }