{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine-Tuning Whisper using LoRA\n",
    "\n",
    "In this tutorial, we demonstrate how to fine-tune a [Whisper](https://arxiv.org/abs/2212.04356) model using `adapters`. We will add [LoRA](https://docs.adapterhub.ml/methods#lora) to Whisper and incorporate a sequence-to-sequence head on top of the model for audio transcription. \n",
    "\n",
    "Our tutorial builds on [this Whisper training guide](https://huggingface.co/blog/fine-tune-whisper), which provides a detailed step-by-step walkthrough of full-model fine-tuning with Whisper. In this notebook, however, we focus on the minimal changes required to swap traditional full-model fine-tuning for parameter-efficient fine-tuning with `adapters`.\n",
    "\n",
    "For more information on the Whisper Model, please visit the [Hugging Face model card](https://huggingface.co/openai/whisper-large-v3) or see the original [OpenAI Blog Post](https://openai.com/index/whisper/)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Installation\n",
    "\n",
    "Before we can get started, we need to ensure the proper packages are installed. Here's a breakdown of what we need:\n",
    "\n",
    "- `adapters` and `accelerate` for efficient fine-tuning and training optimization\n",
    "- `librosa` and `datasets[audio]` for audio processing and data handling\n",
    "- `evaluate` and `jiwer` for metric computation and model evaluation"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:25:53.253177Z",
     "start_time": "2024-08-09T12:25:45.683646Z"
    }
   },
   "source": [
    "# quote the version specifier and the extras bracket so the shell does not mangle them\n",
    "!pip install -qq jiwer \"evaluate>=0.30\" librosa\n",
    "!pip install -qq -U adapters \"datasets[audio]\" accelerate"
   ],
   "outputs": [],
   "execution_count": 1
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:25:53.259803Z",
     "start_time": "2024-08-09T12:25:53.255746Z"
    }
   },
   "source": [
    "import os\n",
    "#os.environ[\"CUDA_VISIBLE_DEVICES\"] = \",\".join([\"0\", \"1\", \"2\", \"3\"])  # use this line if you have multiple GPUs available, e.g. 4 GPUs\n",
    "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\""
   ],
   "outputs": [],
   "execution_count": 2
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dataset\n",
    "\n",
    "In this tutorial, we will be using the `mozilla-foundation/common_voice_11_0` dataset, created by the Mozilla Foundation. This dataset is a comprehensive collection of voice recordings in multiple languages, making it ideal for training and fine-tuning speech recognition models.\n",
    "\n",
    "For comparability with the original guide, we fine-tune Whisper on Hindi, a low-resource language, but you can swap in a language of your choice. The dataset supports numerous languages; a list of all available ones can be found [here](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0#languages), on the dataset page on Hugging Face.\n",
    "If you choose a high-resource language, make sure you have enough memory and disk space available.\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:25:53.268600Z",
     "start_time": "2024-08-09T12:25:53.261627Z"
    }
   },
   "source": [
    "language = \"hindi\"\n",
    "language_abbr = \"hi\"\n",
    "task = \"transcribe\"\n",
    "dataset_name = \"mozilla-foundation/common_voice_11_0\""
   ],
   "outputs": [],
   "execution_count": 3
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Loading the Dataset\n",
    "\n",
    "We load the dataset and split it into train and test sets. Since Hindi is low-resource, we combine the train and validation splits for training. \n",
    "We then remove some of the columns as they are not needed for training."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:25:58.152155Z",
     "start_time": "2024-08-09T12:25:53.271295Z"
    }
   },
   "source": [
    "from datasets import load_dataset, DatasetDict\n",
    "\n",
    "common_voice = DatasetDict()\n",
    "\n",
    "common_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", language_abbr, split=\"train+validation\")\n",
    "common_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", language_abbr, split=\"test\")\n",
    "\n",
    "print(common_voice)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "DatasetDict({\n",
      "    train: Dataset({\n",
      "        features: ['client_id', 'path', 'audio', 'sentence', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'],\n",
      "        num_rows: 6540\n",
      "    })\n",
      "    test: Dataset({\n",
      "        features: ['client_id', 'path', 'audio', 'sentence', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'],\n",
      "        num_rows: 2894\n",
      "    })\n",
      "})\n"
     ]
    }
   ],
   "execution_count": 4
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:25:59.309477Z",
     "start_time": "2024-08-09T12:25:58.154146Z"
    }
   },
   "source": [
    "common_voice = common_voice.remove_columns(\n",
    "    [\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"path\", \"segment\", \"up_votes\"]\n",
    ")\n",
    "\n",
    "print(common_voice[\"train\"][0])"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'audio': {'path': '/home/imhof/.cache/huggingface/datasets/downloads/extracted/8fcfd9e391a57582a1ced30aaf3434aa04b2903f54adfa3c588ef97e404d8bd9/hi_train_0/common_voice_hi_26008353.mp3', 'array': array([ 5.81611368e-26, -1.48634016e-25, -9.37040538e-26, ...,\n",
      "        1.06425901e-07,  4.46416450e-08,  2.61450239e-09]), 'sampling_rate': 48000}, 'sentence': 'हमने उसका जन्मदिन मनाया।'}\n"
     ]
    }
   ],
   "execution_count": 5
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Preprocessing\n",
    "\n",
    "For preprocessing audio data we require:\n",
    "- a feature extractor which pre-processes the raw audio-inputs\n",
    "- a tokenizer which post-processes the model outputs to text format\n",
    "\n",
    "Hugging Face `transformers` offers a dedicated class for each; alternatively, you can use the `WhisperProcessor`, which wraps both into a single class. \n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.086234Z",
     "start_time": "2024-08-09T12:25:59.311087Z"
    }
   },
   "source": [
    "from transformers import WhisperFeatureExtractor\n",
    "from transformers import WhisperTokenizer\n",
    "from transformers import WhisperProcessor\n",
    "\n",
    "model_name_or_path = \"openai/whisper-small\"\n",
    "\n",
    "feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path)\n",
    "tokenizer = WhisperTokenizer.from_pretrained(model_name_or_path, language=language, task=task)\n",
    "processor = WhisperProcessor.from_pretrained(model_name_or_path, language=language, task=task)"
   ],
   "outputs": [],
   "execution_count": 6
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we need to address a sampling rate mismatch. The Common Voice dataset typically provides audio sampled at 48 kHz, but Whisper's feature extractor expects a sampling rate of 16 kHz. To resolve this, we need to *downsample* our audio data."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.096860Z",
     "start_time": "2024-08-09T12:26:02.087803Z"
    }
   },
   "source": [
    "# downsample the audio column to 16 kHz (resampling happens lazily, when a sample is accessed)\n",
    "from datasets import Audio\n",
    "\n",
    "common_voice = common_voice.cast_column(\"audio\", Audio(sampling_rate=16000))"
   ],
   "outputs": [],
   "execution_count": 7
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.111920Z",
     "start_time": "2024-08-09T12:26:02.098488Z"
    }
   },
   "source": [
    "print(common_voice[\"train\"][0])"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'audio': {'path': '/home/imhof/.cache/huggingface/datasets/downloads/extracted/8fcfd9e391a57582a1ced30aaf3434aa04b2903f54adfa3c588ef97e404d8bd9/hi_train_0/common_voice_hi_26008353.mp3', 'array': array([ 3.81639165e-17,  2.42861287e-17, -1.73472348e-17, ...,\n",
      "       -1.30981789e-07,  2.63096808e-07,  4.77157300e-08]), 'sampling_rate': 16000}, 'sentence': 'हमने उसका जन्मदिन मनाया।'}\n"
     ]
    }
   ],
   "execution_count": 8
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can write a `prepare_dataset` function that readies our data for the model in three steps:\n",
    "\n",
    "1) Load the audio data for each sample in the batch (accessing the audio column triggers the lazy resampling)\n",
    "2) Use the feature extractor to compute log-Mel input features from the 1-dimensional audio array\n",
    "3) Encode the transcriptions into label ids with the tokenizer"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.117173Z",
     "start_time": "2024-08-09T12:26:02.113222Z"
    }
   },
   "source": [
    "def prepare_dataset(batch):\n",
    "    # load and resample audio data from 48 to 16kHz\n",
    "    audio = batch[\"audio\"]\n",
    "\n",
    "    # compute log-Mel input features from input audio array\n",
    "    batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\n",
    "\n",
    "    # encode target text to label ids\n",
    "    batch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\n",
    "    return batch"
   ],
   "outputs": [],
   "execution_count": 9
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We then use the `map()` method to apply the preparation function to every sample in both the train and test splits."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.130524Z",
     "start_time": "2024-08-09T12:26:02.120882Z"
    }
   },
   "source": [
    "common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=1)"
   ],
   "outputs": [],
   "execution_count": 10
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define a DataCollator\n",
    "\n",
    "We now define a data collator class that is responsible for batching our training data.\n",
    "\n",
    "The input features and the tokenized labels require different padding strategies: the log-Mel input features are already of fixed size and are simply converted to tensors, while the label sequences are padded to the longest sequence in the batch. We then replace the label padding tokens with -100 so that they are ignored when computing the loss."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.138258Z",
     "start_time": "2024-08-09T12:26:02.132043Z"
    }
   },
   "source": [
    "import torch\n",
    "\n",
    "from dataclasses import dataclass\n",
    "from typing import Any, Dict, List, Union\n",
    "\n",
    "\n",
    "@dataclass\n",
    "class DataCollatorSpeechSeq2SeqWithPadding:\n",
    "    processor: Any\n",
    "\n",
    "    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\n",
    "        # split inputs and labels since they have to be of different lengths and need different padding methods\n",
    "        # first treat the audio inputs by simply returning torch tensors\n",
    "        input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\n",
    "        batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\n",
    "\n",
    "        # get the tokenized label sequences\n",
    "        label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\n",
    "        # pad the labels to max length\n",
    "        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\n",
    "\n",
    "        # replace padding with -100 to ignore loss correctly\n",
    "        labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\n",
    "\n",
    "        # if a bos token was added in the previous tokenization step,\n",
    "        # cut it here, as it is added again later anyway\n",
    "        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\n",
    "            labels = labels[:, 1:]\n",
    "\n",
    "        batch[\"labels\"] = labels\n",
    "\n",
    "        return batch"
   ],
   "outputs": [],
   "execution_count": 11
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We then initialize our DataCollator so we can apply it to our dataset during the training process."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:02.147493Z",
     "start_time": "2024-08-09T12:26:02.140019Z"
    }
   },
   "source": [
    "data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)"
   ],
   "outputs": [],
   "execution_count": 12
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluation Metrics\n",
    "\n",
    "We'll use word error rate (WER), the standard metric for evaluating automatic speech recognition models. \n",
    "\n",
    "You can find more information about this metric [here](https://huggingface.co/metrics/wer)."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:03.236708Z",
     "start_time": "2024-08-09T12:26:02.148975Z"
    }
   },
   "source": [
    "import evaluate\n",
    "\n",
    "metric = evaluate.load(\"wer\")\n",
    "\n",
    "def compute_metrics(pred):\n",
    "    pred_ids = pred.predictions\n",
    "    label_ids = pred.label_ids\n",
    "\n",
    "    # replace -100 with the pad_token_id\n",
    "    label_ids[label_ids == -100] = tokenizer.pad_token_id\n",
    "\n",
    "    # we do not want to group tokens when computing the metrics\n",
    "    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\n",
    "    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)\n",
    "\n",
    "    wer = 100 * metric.compute(predictions=pred_str, references=label_str)\n",
    "\n",
    "    return {\"wer\": wer}"
   ],
   "outputs": [],
   "execution_count": 13
  },
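  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build some intuition for the metric: WER is the word-level edit distance between prediction and reference, divided by the number of reference words. The following toy sketch (our own minimal implementation, not the one used by `evaluate`) illustrates the idea."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "def toy_wer(reference: str, prediction: str) -> float:\n",
    "    \"\"\"Toy WER: word-level edit distance divided by the number of reference words.\"\"\"\n",
    "    r, p = reference.split(), prediction.split()\n",
    "    # classic dynamic-programming edit distance, computed over words instead of characters\n",
    "    d = [[0] * (len(p) + 1) for _ in range(len(r) + 1)]\n",
    "    for i in range(len(r) + 1):\n",
    "        d[i][0] = i\n",
    "    for j in range(len(p) + 1):\n",
    "        d[0][j] = j\n",
    "    for i in range(1, len(r) + 1):\n",
    "        for j in range(1, len(p) + 1):\n",
    "            cost = 0 if r[i - 1] == p[j - 1] else 1\n",
    "            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)\n",
    "    return d[len(r)][len(p)] / len(r)\n",
    "\n",
    "print(toy_wer(\"the cat sat down\", \"the cat sat\"))  # one deletion over four reference words -> 0.25"
   ],
   "outputs": [],
   "execution_count": null
  },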
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initialize the adapter model\n",
    "\n",
    "Here we set up the model using the `WhisperAdapterModel` class from `adapters`. We only swap out the usual `WhisperForConditionalGeneration` class from Hugging Face `transformers` and gain all the parameter-efficient fine-tuning capabilities in one line of code.\n",
    "\n",
    "We can still download the pre-trained checkpoint from the Hugging Face Hub via `from_pretrained()`.\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:04.857147Z",
     "start_time": "2024-08-09T12:26:03.238282Z"
    }
   },
   "source": [
    "from adapters import WhisperAdapterModel\n",
    "model_name_or_path = \"openai/whisper-small\"\n",
    "\n",
    "model = WhisperAdapterModel.from_pretrained(\n",
    "    model_name_or_path,\n",
    ")"
   ],
   "outputs": [],
   "execution_count": 14
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "AdapterModels are typically instantiated without predefined heads, unlike the static-head model classes in Hugging Face `transformers`. This design allows heads to be added and removed flexibly as needed. However, when loading from a pre-trained checkpoint that contains head weights, the `adapters` library automatically converts these weights into a \"default\" head and adds it to the model. This is why the `heads` attribute of our model is not empty.\n",
    "\n",
    "While we could have added and trained a new head alongside the LoRA adapter, we will utilize the existing \"default\" head and train only the newly initialized adapter."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:04.865355Z",
     "start_time": "2024-08-09T12:26:04.858842Z"
    }
   },
   "source": [
    "model.heads"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ModuleDict(\n",
       "  (default): Seq2SeqLMHead(\n",
       "    (0): Linear(in_features=768, out_features=51865, bias=False)\n",
       "  )\n",
       ")"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 15
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:04.874210Z",
     "start_time": "2024-08-09T12:26:04.867104Z"
    }
   },
   "source": [
    "model.active_head"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'default'"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 16
  },
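  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note: if you preferred to train a freshly initialized head instead of reusing the converted \"default\" head, a minimal sketch with the `adapters` head API would look like this (the head name `transcription` is arbitrary):\n",
    "\n",
    "```python\n",
    "model.add_seq2seq_lm_head(\"transcription\")  # add a new sequence-to-sequence LM head\n",
    "model.active_head = \"transcription\"         # route predictions through it\n",
    "```\n",
    "\n",
    "In this tutorial, we simply keep the \"default\" head."
   ]
  },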
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we add a new LoRA adapter, which we will fine-tune instead of the `Whisper` model parameters.\n",
    "Additionally, we call `train_adapter()` to freeze the base model weights and mark the adapter weights as trainable, so the `AdapterTrainer` knows which parameters to update during training."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:04.917873Z",
     "start_time": "2024-08-09T12:26:04.875711Z"
    }
   },
   "source": [
    "import adapters\n",
    "\n",
    "name = \"whisper_LoRA\"\n",
    "\n",
    "model.add_adapter(name, config=\"lora\")\n",
    "model.train_adapter(name)\n",
    "\n",
    "print(model.adapter_summary())"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================================================================\n",
      "Name                     Architecture         #Param      %Param  Active   Train\n",
      "--------------------------------------------------------------------------------\n",
      "whisper_LoRA             lora                884,736       0.366       1       1\n",
      "--------------------------------------------------------------------------------\n",
      "Full model                               241,734,912     100.000               0\n",
      "================================================================================\n"
     ]
    }
   ],
   "execution_count": 17
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### Model Training\n",
    "\n",
    "We now define the training arguments using the `Seq2SeqTrainingArguments` class from `transformers`. This class is similar to `TrainingArguments`, but is specifically designed for sequence-to-sequence tasks like audio transcription.\n",
    "\n",
    "The `Seq2SeqTrainingArguments` class has a special parameter called `predict_with_generate`, which, if set to `True`, enables using the `generate()` method in the evaluation loop to produce predictions. \n",
    "Unfortunately, we are currently not able to use this parameter due to signature mismatches of the `forward()` method of the `WhisperAdapterModel`, therefore we fall back to normal training, which also works.\n",
    "(This notebook is based on the current state of the `adapters` library, version 1.0.0,  and will be updated when a corresponding fix is implemented.)"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:04.973661Z",
     "start_time": "2024-08-09T12:26:04.919248Z"
    }
   },
   "source": [
    "from transformers import Seq2SeqTrainingArguments\n",
    "\n",
    "training_args = Seq2SeqTrainingArguments(\n",
    "    output_dir=\"whisper_finetuning\",  # change to a directory name of your choice\n",
    "    per_device_train_batch_size=8,\n",
    "    gradient_accumulation_steps=1,  # increase by 2x for every 2x decrease in batch size\n",
    "    learning_rate=1e-3,\n",
    "    warmup_steps=50,\n",
    "    #predict_with_generate=True,  # currently not supported\n",
    "    num_train_epochs=3, # edit this based on the number of epochs you would like to train\n",
    "    eval_strategy=\"epoch\",\n",
    "    fp16=True,\n",
    "    per_device_eval_batch_size=8,\n",
    "    logging_steps=25,\n",
    ")"
   ],
   "outputs": [],
   "execution_count": 18
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can initialize the `Seq2SeqAdapterTrainer` class from `adapters` to train our model. We pass in the model, tokenizer, data collator, training arguments, and the training and evaluation datasets. The `Seq2SeqAdapterTrainer` class is specifically designed for sequence-to-sequence tasks like audio transcription. Alternatively, you could use the standard `AdapterTrainer` class for other tasks.\n",
    "\n",
    "And that's it! We are ready to start training our model."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T12:26:05.421510Z",
     "start_time": "2024-08-09T12:26:04.975147Z"
    }
   },
   "source": [
    "from adapters import Seq2SeqAdapterTrainer\n",
    "\n",
    "trainer = Seq2SeqAdapterTrainer(\n",
    "    model=model,\n",
    "    tokenizer=tokenizer,\n",
    "    data_collator=data_collator,\n",
    "    train_dataset=common_voice[\"train\"],\n",
    "    eval_dataset=common_voice[\"test\"],\n",
    "    #compute_metrics=compute_metrics,  # currently not supported\n",
    "    args=training_args,\n",
    ")\n",
    "\n",
    "model.config.use_cache = False  # silence the warnings. Please re-enable for inference!"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/accelerate/accelerator.py:488: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.\n",
      "  self.scaler = torch.cuda.amp.GradScaler(**kwargs)\n"
     ]
    }
   ],
   "execution_count": 19
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Before we start training, let's run an initial evaluation loop to establish the model's baseline performance on Hindi.\n",
    "Because we currently cannot use the `generate()` method inside the trainer, we write a custom evaluation function that computes the WER score on the test set."
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T13:11:41.618333Z",
     "start_time": "2024-08-09T12:26:05.423357Z"
    }
   },
   "cell_type": "code",
   "source": [
    "#initial_wer = trainer.evaluate()  # currently not supported\n",
    "\n",
    "from torch.utils.data import DataLoader\n",
    "from tqdm import tqdm\n",
    "import numpy as np\n",
    "import gc\n",
    "\n",
    "def eval_loop():\n",
    "    eval_dataloader = DataLoader(common_voice[\"test\"], batch_size=16, collate_fn=data_collator)\n",
    "    \n",
    "    model.eval()\n",
    "    for step, batch in enumerate(tqdm(eval_dataloader)):\n",
    "        with torch.amp.autocast(\"cuda\"):  # torch.cuda.amp.autocast() is deprecated\n",
    "            with torch.no_grad():\n",
    "                generated_tokens = (\n",
    "                    model.generate(\n",
    "                        input_features=batch[\"input_features\"].to(\"cuda\"),\n",
    "                        decoder_input_ids=batch[\"labels\"][:, :4].to(\"cuda\"),\n",
    "                        max_new_tokens=255,\n",
    "                    )\n",
    "                    .cpu()\n",
    "                    .numpy()\n",
    "                )\n",
    "                labels = batch[\"labels\"].cpu().numpy()\n",
    "                labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\n",
    "                decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)\n",
    "                decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n",
    "                metric.add_batch(\n",
    "                    predictions=decoded_preds,\n",
    "                    references=decoded_labels,\n",
    "                )\n",
    "        del generated_tokens, labels, batch\n",
    "        gc.collect()\n",
    "    wer = 100 * metric.compute()\n",
    "    return wer\n",
    "\n",
    "initial_wer = eval_loop()\n",
    "print(f\"The initial wer score is: {initial_wer}\")\n"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  0%|          | 0/362 [00:00<?, ?it/s]/tmp/ipykernel_1147766/974044494.py:13: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
      "  with torch.cuda.amp.autocast():\n",
      "The attention mask is not set and cannot be inferred from input because pad token is same as eos token.As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/transformers/generation/utils.py:951: FutureWarning: You have explicitly specified `forced_decoder_ids`. This functionality has been deprecated and will throw an error in v4.40. Please remove the `forced_decoder_ids` argument in favour of `input_ids` or `decoder_input_ids` respectively.\n",
      "  warnings.warn(\n",
      "100%|██████████| 362/362 [45:36<00:00,  7.56s/it] "
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The initial wer score is: 92.50825361889444\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "execution_count": 20
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "OK, now we are ready to start training! Just run the cell below.\n",
    "Afterwards, we will compare the model's performance with the initial WER score."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:25:04.923020Z",
     "start_time": "2024-08-09T13:11:41.620045Z"
    }
   },
   "source": "trainer.train()",
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py:79: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
      "  with torch.cuda.device(device), torch.cuda.stream(stream), autocast(enabled=autocast_enabled):\n",
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
      "  warnings.warn('Was asked to gather along dimension 0, but all '\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ],
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='615' max='615' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [615/615 1:13:13, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>0.308700</td>\n",
       "      <td>0.381261</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>0.224500</td>\n",
       "      <td>0.343649</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.175200</td>\n",
       "      <td>0.336221</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py:79: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
      "  with torch.cuda.device(device), torch.cuda.stream(stream), autocast(enabled=autocast_enabled):\n",
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
      "  warnings.warn('Was asked to gather along dimension 0, but all '\n",
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py:79: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
      "  with torch.cuda.device(device), torch.cuda.stream(stream), autocast(enabled=autocast_enabled):\n",
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
      "  warnings.warn('Was asked to gather along dimension 0, but all '\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=615, training_loss=0.3448017062210455, metrics={'train_runtime': 4402.9285, 'train_samples_per_second': 4.456, 'train_steps_per_second': 0.14, 'total_flos': 5.6870418235392e+18, 'train_loss': 0.3448017062210455, 'epoch': 3.0})"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 21
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Training will take approximately 1 to 4 hours, depending on your GPU and the chosen batch size. If you encounter a CUDA out-of-memory error, the batch size is too large for your GPU; reduce `per_device_train_batch_size` in the training arguments."
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Let's now evaluate the model on the test set again and compute the WER score."
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:54:27.778404Z",
     "start_time": "2024-08-09T14:25:04.924785Z"
    }
   },
   "cell_type": "code",
   "source": [
    "post_training_wer = eval_loop()\n",
    "print(f\"The wer score after training is: {post_training_wer}\")"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  0%|          | 0/362 [00:00<?, ?it/s]/tmp/ipykernel_1147766/974044494.py:13: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
      "  with torch.cuda.amp.autocast():\n",
      "/opt/workdir/miniconda3/envs/adapters/lib/python3.11/site-packages/transformers/generation/utils.py:951: FutureWarning: You have explicitly specified `forced_decoder_ids`. This functionality has been deprecated and will throw an error in v4.40. Please remove the `forced_decoder_ids` argument in favour of `input_ids` or `decoder_input_ids` respectively.\n",
      "  warnings.warn(\n",
      "100%|██████████| 362/362 [29:22<00:00,  4.87s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The wer score after training is: 40.22263607889613\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "execution_count": 22
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "After training, our WER score is 40.2, which is a significant improvement over the initial WER score of 92.5. \n",
    "Compared to full finetuning in the original notebook, we did not reach the same performance with a WER score of 32.1, \n",
    "BUT:\n",
    "- We only utilized 0,36% of the parameters compared to full model finetuning\n",
    "- We only trained for 1.5 hours compared to 8 hours in the original notebook when finetuning the full model\n",
    "\n",
    "Which makes this a great result for parameter-efficient fine-tuning with adapters!\n",
    "\n",
    "Feel free to experiment with the training arguments, e.g., the learning rate or the number of epochs and see if you can improve the model's performance further.\n",
    "With more training time you will most likely reach a performance comparable to the original notebook."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": "If you want to save your model on the Hugging Face Hub you need to first sign in via the cell below."
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:54:27.796515Z",
     "start_time": "2024-08-09T14:54:27.780616Z"
    }
   },
   "source": [
    "from huggingface_hub import notebook_login\n",
    "\n",
    "notebook_login()"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
      ],
      "application/vnd.jupyter.widget-view+json": {
       "version_major": 2,
       "version_minor": 0,
       "model_id": "73b820dc87e648d3a20dd45185e70540"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "execution_count": 23
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:54:28.333571Z",
     "start_time": "2024-08-09T14:54:27.798084Z"
    }
   },
   "source": [
    "model.push_adapter_to_hub(\n",
    "    \"whisper\",\n",
    "    \"whisper_adapter\",\n",
    "    datasets_tag=\"mozilla-foundation/common_voice_11_0\"\n",
    ")"
   ],
   "outputs": [],
   "execution_count": 24
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": "And you can always save your model locally on your computer by using the `save_adapter` function."
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:56:52.064043Z",
     "start_time": "2024-08-09T14:56:52.018804Z"
    }
   },
   "source": [
     "# Define the directory to save the adapter\n",
     "save_directory = \"./my_model_directory\"\n",
     "# Save the adapter (`name` is the adapter name defined earlier in the notebook)\n",
     "model.save_adapter(save_directory, name)"
   ],
   "outputs": [],
   "execution_count": 25
  },
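  {
   "cell_type": "markdown",
   "metadata": {},
   "source": "To restore the adapter in a later session, you can load it back from the same directory with `load_adapter` and activate it. This is a minimal sketch; it assumes the adapter was saved to the directory used above:"
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Sketch: reload the saved adapter and make it active again.\n",
    "# Assumes the adapter was saved to \"./my_model_directory\" as above.\n",
    "adapter_name = model.load_adapter(\"./my_model_directory\")\n",
    "model.set_active_adapters(adapter_name)"
   ],
   "outputs": [],
   "execution_count": null
  },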
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using AutomaticSpeechRecognitionPipeline\n",
    "\n",
    "Now that we have successfully fine-tuned our Whisper model using LoRA, it's time to put it to the test! In this section, we'll demonstrate how to use the `AutomaticSpeechRecognitionPipeline` to transcribe audio files into text using our newly trained model.\n",
    "\n",
    "First, let's create the speech recognition pipeline."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:56:54.718994Z",
     "start_time": "2024-08-09T14:56:54.705946Z"
    }
   },
   "source": [
    "from transformers import AutomaticSpeechRecognitionPipeline\n",
    "\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "\n",
    "# Create the pipeline\n",
    "pipe = AutomaticSpeechRecognitionPipeline(\n",
    "    model=model,\n",
    "    tokenizer=processor.tokenizer,\n",
    "    feature_extractor=processor.feature_extractor,\n",
    "    device=device\n",
    ")"
   ],
   "outputs": [],
   "execution_count": 26
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "Now we'll create an inference function that takes a raw audio sample and the pipeline, and returns the audio transcription."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:56:55.937525Z",
     "start_time": "2024-08-09T14:56:55.930846Z"
    }
   },
   "source": [
    "import numpy as np\n",
    "from scipy import signal\n",
    "import torch\n",
    "\n",
    "def transcribe_audio_sample(audio_sample, pipe, processor, language, task):\n",
    "    \"\"\"\n",
    "    Transcribe a single unprocessed audio sample from the Common Voice dataset.\n",
    "    \n",
    "    :param audio_sample: A single audio sample from the Common Voice dataset\n",
    "    :param pipe: Pre-initialized AutomaticSpeechRecognitionPipeline\n",
    "    :param processor: Pre-initialized WhisperProcessor\n",
    "    :param language: Language code for transcription\n",
    "    :param task: Task for the model (e.g., \"transcribe\")\n",
    "    :return: Transcribed text\n",
    "    \"\"\"\n",
    "    \n",
    "    # Function to resample audio using scipy\n",
    "    def resample_audio(audio_array, orig_sr, target_sr):\n",
    "        resampled = signal.resample(audio_array, int(len(audio_array) * target_sr / orig_sr))\n",
    "        return resampled\n",
    "    \n",
    "    # Resample the audio\n",
    "    original_sr = audio_sample['audio']['sampling_rate']\n",
    "    target_sr = processor.feature_extractor.sampling_rate  # Whisper expects 16kHz\n",
    "    \n",
    "    if original_sr != target_sr:\n",
    "        resampled_audio = resample_audio(audio_sample['audio']['array'], original_sr, target_sr)\n",
    "    else:\n",
    "        resampled_audio = audio_sample['audio']['array']\n",
    "    \n",
    "    # Get forced decoder IDs - Used to ensure that the model generates output in a specific language and for a specific task\n",
    "    forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task)\n",
    "    \n",
    "    # Transcribe\n",
    "    with torch.cuda.amp.autocast():\n",
    "        result = pipe(\n",
    "            resampled_audio,\n",
    "            generate_kwargs={\"forced_decoder_ids\": forced_decoder_ids},\n",
    "            max_new_tokens=256\n",
    "        )[\"text\"]\n",
    "    \n",
    "    return result"
   ],
   "outputs": [],
   "execution_count": 27
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": "Next, we need some data to test our model. For simplicity, let's just reuse the test split of Common Voice."
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:56:58.907428Z",
     "start_time": "2024-08-09T14:56:57.166610Z"
    }
   },
   "source": [
    "from datasets import Audio, DatasetDict, load_dataset\n",
    "\n",
    "# Redownload the test split \n",
    "inference_test = DatasetDict()\n",
    "inference_test[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", language_abbr, split=\"test\")\n",
    "\n",
    "# Select a subset of samples\n",
    "inf_set = inference_test[\"test\"].select(range(0,32))"
   ],
   "outputs": [],
   "execution_count": 28
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "Finally, let's transcribe a sample and print the result!"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-08-09T14:57:00.641561Z",
     "start_time": "2024-08-09T14:56:58.984637Z"
    }
   },
   "source": [
    "transcription = transcribe_audio_sample(inf_set[2], pipe, processor, language, task)\n",
    "print(f\"Transcription: {transcription}\")"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/tmp/ipykernel_1147766/518660873.py:35: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
      "  with torch.cuda.amp.autocast():\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Transcription: वीराट कोली के लिए बेहत्रिन माउका\n"
     ]
    }
   ],
   "execution_count": 29
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
