{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n",
    "\n",
    "Instructions for setting up Colab are as follows:\n",
    "1. Open a new Python 3 notebook.\n",
    "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n",
    "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n",
    "4. Run this cell to set up dependencies.\n",
    "5. Restart the runtime (Runtime -> Restart Runtime) for any upgraded packages to take effect\n",
    "\n",
    "\n",
    "NOTE: User is responsible for checking the content of datasets and the applicable licenses and determining if suitable for the intended use.\n",
    "\"\"\"\n",
    "import os\n",
    "\n",
    "# Install dependencies\n",
    "!apt-get install sox libsndfile1 ffmpeg\n",
    "\n",
    "# setting up a workspace folder where all downloaded content will be held\n",
    "# change it to whatever location is convenient and remove it after you're done with this tutorial\n",
    "WORKSPACE_DIR = os.path.abspath('confidence-ensembles-tutorial')\n",
    "os.makedirs(WORKSPACE_DIR, exist_ok=True)\n",
    "\n",
    "# need to locate NeMo repository\n",
    "# either provide a path to local NeMo repository with NeMo already installed or git clone\n",
    "\n",
    "# option #1: local path to NeMo repo with NeMo already installed\n",
    "NEMO_DIR = os.path.dirname(os.path.dirname(os.path.abspath('')))\n",
    "\n",
    "# option #2: download NeMo repo\n",
    "if 'google.colab' in str(get_ipython()) or not os.path.exists(os.path.join(NEMO_DIR, \"nemo\")):\n",
    "    BRANCH = 'main'\n",
    "    !git clone -b $BRANCH https://github.com/NVIDIA/NeMo $WORKSPACE_DIR/NeMo\n",
    "    NEMO_DIR = os.path.join(WORKSPACE_DIR, 'NeMo')\n",
    "\n",
    "# installing nemo (from source code)\n",
    "!cd $NEMO_DIR && ./reinstall.sh non-dev\n",
    "\n",
    "# clone SDP and install requirements\n",
    "!git clone https://github.com/NVIDIA/NeMo-speech-data-processor $WORKSPACE_DIR/NeMo-speech-data-processor\n",
    "!pip install -r $WORKSPACE_DIR/NeMo-speech-data-processor/requirements/main.txt\n",
    "\n",
    "\"\"\"\n",
    "Remember to restart the runtime for the kernel to pick up any upgraded packages.\n",
    "Alternatively, you can uncomment the exit() below to crash and restart the kernel, in the case\n",
    "that you want to use the \"Run All Cells\" (or similar) option.\n",
    "\"\"\"\n",
    "# exit()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Confidence-based Ensembles of End-to-End ASR Models\n",
    "\n",
    "In this tutorial we discuss how to use confidence-based ensembles to improve different aspects of ASR models.\n",
    "\n",
    "We are only going to cover basics in this tutorial, so make sure to check out our [paper](https://arxiv.org/abs/2306.15824) to learn more details!\n",
    "\n",
    "Before we learn **what** a confidence-based ensemble is, let's discuss **why** you might want to use one. The high-level motivation behind this method is that many \"expert\" ASR models are publicly available. These models are often specialized to a certain language, accent or domain and might not perform well outside of it. But what if you need to cover multiple such target domains and no single model works well on all of them? This is exactly when you should try confidence-based ensembles! In our paper we show two applications of this general idea:\n",
    "\n",
    "1. If you need to support multilingual ASR but don't have a single model that covers all your languages, you basically have two choices. You can either run a separate [language-identification](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/langid_ambernet) (LID) block first to pick the ASR model for the detected language, or you can run all models in parallel and use confidence to select which output to use. In the paper we show that the second method generally works better and can even be combined with an LID model for the best results.\n",
    "2. If you have a generic ASR model as well as a [finetuned version](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/ASR_CTC_Language_Finetuning.ipynb) that works much better on a target domain, the finetuned model will likely degrade on the \"base\" domain. What if you need to support both cases in a single application and have no easy way to know which domain the input comes from? To solve this, you can use confidence ensembles to pick the right output automatically.\n",
    "\n",
    "Let's also briefly talk about some limitations of the confidence-based ensembles.\n",
    "\n",
    "1. Confidence-based ensembles are not well suited for latency-critical applications as they require a few seconds of audio to select the most confident model.\n",
    "2. The runtime cost grows linearly with each added model, which limits the practically useful ensemble size.\n",
    "3. Given enough compute and data, it is likely possible to build specialized models that would outperform confidence-based ensembles on most tasks.\n",
    "\n",
    "To sum up: if you're combining a small number of models (e.g., up to 5), can afford a few seconds of additional latency, and don't have the resources to build a specialized model, confidence-based ensembles might be a good fit and you should try them out! Many ASR models that you can combine into an ensemble are available in the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/models) as well as other model hubs, such as [Hugging Face](https://huggingface.co/nvidia).\n",
    "\n",
    "In the next few cells we will cover what a confidence-based ensemble is and some best practices of using these models. Each cell is mostly self-contained, so feel free to skip around or jump directly to the code part if you want to see usage examples right away."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What is a confidence-based ensemble?\n",
    "\n",
    "You're probably familiar with more traditional [ensembles of machine learning models](https://en.wikipedia.org/wiki/Ensemble_learning). Confidence ensembles are a less popular approach where we only use the output of a single model, the one deemed best for the current input. A typical way to pick the \"best\" output is to select the model with the highest confidence score, which provides an estimate of how likely the output is to be correct. Here is a schematic illustration of the model.\n",
    "\n",
    "<img src=\"https://github.com/NVIDIA/NeMo/releases/download/v1.19.0/conf-ensembles-overview.png\" alt=\"Confidence-ensemble schematic representation\" width=\"600\"/>\n",
    "\n",
    "As you can see, to define a confidence ensemble, we need to specify three things:\n",
    "\n",
    "1. Which models are part of the ensemble.\n",
    "2. How we estimate each model's confidence.\n",
    "3. How we \"calibrate\" confidence values via a model selection block.\n",
    "\n",
    "Let's discuss each of these three items below.\n",
    "\n",
    "### Which models to use?\n",
    "\n",
    "A short answer: you can use any ASR models. E.g., you can combine several CTC models, or Transducer models, or even mix and match.\n",
    "\n",
    "A more detailed answer is that the performance of the confidence ensemble is upper-bounded by the performance of the best model on each input example. You will therefore benefit most if each of your models works much better than the others on some part of the input: this yields larger gains over any single model and also makes correct model identification easier.\n",
    "\n",
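    "For intuition, this upper bound can be computed from per-utterance WERs of each model. A minimal sketch (the `wers` array is made-up data, not results from this tutorial):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# rows = models, columns = utterances: per-utterance WER of each model\n",
    "wers = np.array([[0.05, 0.40, 0.10],\n",
    "                 [0.30, 0.08, 0.12]])\n",
    "\n",
    "best_single = wers.mean(axis=1).min()  # best model transcribes everything\n",
    "oracle = wers.min(axis=0).mean()       # perfect per-utterance model selection\n",
    "print(best_single, oracle)             # ~0.167 vs ~0.077\n",
    "```\n",
    "\n",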
    "### How to estimate a model's confidence?\n",
    "\n",
    "Good news: we have a whole separate [tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/ASR_Confidence_Estimation.ipynb) on this topic! Go through it if you want to know all the details about the different ways to estimate the confidence of NeMo ASR models. There are many confidence measures and aggregation functions, and for the absolute best performance you will need to run a grid search to pick the best confidence estimation method for your specific models and data.\n",
    "\n",
    "That being said, we found that there exists a set of confidence parameters that works pretty well across a large set of models and datasets. These are the defaults in NeMo, so you might not need to worry about running the search. If you do want to maximize performance by tuning the confidence parameters, you only need to add [a few extra config lines](#Building-and-evaluating-ensemble-(tuned-parameters)).\n",
    "\n",
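    "To give a flavor of what a confidence measure looks like, here is a minimal sketch of one simple option (maximum-probability confidence with mean aggregation over frames). This is only an illustration; the NeMo estimators offer many more measures and aggregations.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def max_prob_confidence(log_probs: np.ndarray) -> float:\n",
    "    \"\"\"log_probs: [time, vocab] frame-level log-probabilities.\"\"\"\n",
    "    frame_conf = np.exp(log_probs).max(axis=-1)  # per-frame max probability\n",
    "    return float(frame_conf.mean())              # aggregate with the mean\n",
    "\n",
    "# toy example: 2 frames, 3-symbol vocabulary\n",
    "log_probs = np.log([[0.8, 0.1, 0.1], [0.6, 0.3, 0.1]])\n",
    "print(max_prob_confidence(log_probs))  # ~0.7\n",
    "```\n",
    "\n",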
    "### How to calibrate confidence values?\n",
    "\n",
    "Let's now talk about the \"model selection block\". First of all, you don't need to know the details to use confidence ensembles: calibration is always performed automatically when you build the model. But if you want to learn more, read on!\n",
    "\n",
    "First, let's discuss why we need a separate \"model selection block\" to pick the most confident model. If we had access to perfect confidence scores, exactly equal to the probability of the model's output being correct, we wouldn't need this block: we could simply take the model with the maximum confidence score. But in practice, models tend to be over- or under-confident, which means their confidence scores need to be calibrated to be comparable. E.g., one model might mostly produce scores from 0 to 0.8, while another tends to produce scores from 0 to 0.5, even though they have the same average accuracy. So we want to multiply the first model's scores by 1.25 and the second model's scores by 2.0 to put them on the same \"scale\".\n",
    "\n",
    "More generally, the goal of the model selection block is to pick the right model for each input. So it needs to solve a standard classification task, where the set of all models' confidence scores is the input and the index of the \"most confident\" model is the output. Since this is a standard classification problem in a low-dimensional space, we found that a logistic regression (LR) model is sufficient to solve it with high accuracy. We assume that for each model there exists a small set (e.g., 100-1000 examples) of input utterances that the model performs best on. E.g., if you build a multilingual ensemble, this set will come from the language the model is trained to recognize. We use this samples → model correspondence as the ground truth for training the LR.\n",
    "\n",
    "> **_note:_**  If you don't have a clear \"audio → best recognition model\" correspondence, you can still build it artificially, as long as you also have ground-truth text labels. Just take a larger set of inputs, run all models on them and compute WER. This will tell you which model works best for which audio. But note that if all your models perform very similarly, the gains from confidence ensembling will also be minimal!\n",
    "\n",
    "Even though logistic regression is a simple model operating in a low-dimensional space, we found that it is sometimes still beneficial to tune its hyperparameters, especially if your input data is imbalanced (e.g., you have more ground-truth samples for some models than for others). This tuning is very cheap, so it will be performed automatically, as long as you [specify a validation set in the config](#Building-and-evaluating-ensemble-(tuned-parameters))."
   ]
  },
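  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the model selection block concrete, here is a minimal sketch of how such a logistic regression could be trained with scikit-learn. The confidence scores below are randomly generated for illustration only; the actual implementation used by NeMo lives in `scripts/confidence_ensembles/build_ensemble.py`.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# confidence scores of 2 models on 200 utterances: [num_utterances, num_models]\n",
    "# model 0 is the \"right\" model for the first half, model 1 for the second half\n",
    "scores = np.vstack([\n",
    "    np.column_stack([rng.uniform(0.6, 0.9, 100), rng.uniform(0.1, 0.4, 100)]),\n",
    "    np.column_stack([rng.uniform(0.1, 0.4, 100), rng.uniform(0.6, 0.9, 100)]),\n",
    "])\n",
    "labels = np.array([0] * 100 + [1] * 100)  # ground-truth best-model index\n",
    "\n",
    "# the LR jointly \"calibrates\" the scores and learns to pick the best model\n",
    "selector = LogisticRegression().fit(scores, labels)\n",
    "\n",
    "# at inference time: run all models, collect their confidences, pick a model\n",
    "picked_model = selector.predict([[0.8, 0.2]])[0]  # -> 0\n",
    "```"
   ]
  },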
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to use confidence-based ensembles in NeMo?\n",
    "\n",
    "The following cells contain code examples of how to use confidence ensembles in NeMo. We will build a confidence ensemble of two models: a generic ASR model trained on a large set of audio and a version of the same model finetuned to recognize an [Irish English accent](https://openslr.org/83/).\n",
    "\n",
    "To do this, we will go through the following steps:\n",
    "\n",
    "1. Download and process the Irish accent data using NVIDIA's [Speech Data Processor](https://github.com/NVIDIA/NeMo-speech-data-processor).\n",
    "2. Finetune the [Conformer Large CTC LS model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_conformer_ctc_large_ls) on this data. All steps work exactly the same for Transducer models as well.\n",
    "3. Evaluate performance of the original and finetuned models on the Irish accent data and on LibriSpeech.\n",
    "4. Build a confidence-based ensemble (with default parameters) of these two models and check how it compares with each of the models.\n",
    "5. Tune the confidence hyperparameters of the ensemble and check how the performance changes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Downloading and preparing Irish accent data using [Speech Data Processor](https://github.com/NVIDIA/NeMo-speech-data-processor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# let's start by downloading and processing the Irish accent data with SDP\n",
    "# Check out https://github.com/NVIDIA/NeMo-speech-data-processor to learn more details\n",
    "\n",
    "# run the Irish accent preparation config (will download and process data for us)\n",
    "cmd = (\n",
    "    f\"cd {WORKSPACE_DIR}/NeMo-speech-data-processor && \"\n",
    "    \"python main.py --config-path=dataset_configs/english/slr83 --config-name=config.yaml \"\n",
    "    f\"workspace_dir={WORKSPACE_DIR}/slr83-data dialect=irish_english_male data_split={{data_split}}\"\n",
    ")\n",
    "for data_split in ['train', 'dev', 'test']:\n",
    "    print(f\"****************** Preparing Irish accent data (split={data_split}) ******************\\n\\n\")\n",
    "    cur_cmd = cmd.format(data_split=data_split)\n",
    "    !$cur_cmd\n",
    "    \n",
    "# you can inspect https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/dataset_configs/english/slr83/config.yaml\n",
    "# to see what processing was done. \n",
    "# You can also check the generated NeMo manifests inside 'slr83-data' folder \n",
    "# that are ready for training and evaluation \n",
    "\n",
    "!ls $WORKSPACE_DIR/slr83-data/irish_english_male"
   ]
  },
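  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each line of a NeMo manifest is a standalone JSON object with (at minimum) the audio path, duration and transcription. Here is a minimal sketch of parsing one line; the field values are illustrative, not taken from the actual generated manifests.\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# an example of what a single manifest line looks like\n",
    "line = '{\"audio_filepath\": \"/data/audio/utt1.wav\", \"duration\": 3.2, \"text\": \"an example transcription\"}'\n",
    "entry = json.loads(line)\n",
    "print(entry[\"text\"], entry[\"duration\"])  # an example transcription 3.2\n",
    "```"
   ]
  },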
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Finetuning the generic model on the accent data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# before running training, let's open up a tensorboard pane to see the progress\n",
    "# you might need to install tensorboard and tensorboard jupyter extension if you get errors\n",
    "# you can totally skip this cell, since the logs will also be streamed to stdout\n",
    "%load_ext tensorboard\n",
    "%tensorboard --logdir $WORKSPACE_DIR/irish_finetuning --bind_all "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# now let's finetune the generic model on this data. \n",
    "# We will only run finetuning for 5 epochs (results can be improved by training for longer)\n",
    "# check out https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/ASR_CTC_Language_Finetuning.ipynb\n",
    "# to learn more about finetuning NeMo ASR models\n",
    "from omegaconf import open_dict, OmegaConf\n",
    "from pytorch_lightning import Trainer\n",
    "\n",
    "from nemo.collections.asr.models.ctc_bpe_models import EncDecCTCModelBPE\n",
    "import nemo.utils.exp_manager as exp_manager\n",
    "\n",
    "\n",
    "# feel free to play around with parameters here (e.g., increase bs/devices to match your GPUs)\n",
    "# but note that you might need to tune LR a bit to get good results\n",
    "\n",
    "\n",
    "trainer = Trainer(\n",
    "    devices=1,  # to have the same results on single/multi-gpu systems\n",
    "    max_epochs=5,  # we typically want to finetune for 50-100 epochs, but 5 is enough for the tutorial\n",
    "    # just some reasonable defaults\n",
    "    accelerator='auto',\n",
    "    accumulate_grad_batches=1,\n",
    "    enable_checkpointing=False,\n",
    "    logger=False,\n",
    "    log_every_n_steps=100,\n",
    ")  \n",
    "model = EncDecCTCModelBPE.from_pretrained(\"stt_en_conformer_ctc_large_ls\", trainer=trainer)\n",
    "\n",
    "# updating data/optimization to support finetuning\n",
    "with open_dict(model.cfg):\n",
    "    # setting up data manifests and lowering batch size in case we deal with low-memory GPUs\n",
    "    model.cfg.train_ds.manifest_filepath = f\"{WORKSPACE_DIR}/slr83-data/irish_english_male/train_manifest.json\"\n",
    "    model.cfg.train_ds.batch_size = 4\n",
    "    model.cfg.train_ds.is_tarred = False\n",
    "    model.cfg.validation_ds.manifest_filepath = f\"{WORKSPACE_DIR}/slr83-data/irish_english_male/dev_manifest.json\"\n",
    "    model.cfg.validation_ds.batch_size = 4\n",
    "\n",
    "    model.cfg.optim.lr = 0.02  # 100 times lower to facilitate finetuning\n",
    "    model.cfg.optim.sched.warmup_steps = 0  # no warmup\n",
    "\n",
    "# updating the model according to the new parameters\n",
    "model.setup_training_data(model.cfg.train_ds)\n",
    "model.setup_multiple_validation_data(model.cfg.validation_ds)\n",
    "model.setup_optimization(model.cfg.optim)\n",
    "\n",
    "# controlling where the model is saved and asking to save best WER model\n",
    "exp_manager_config = exp_manager.ExpManagerConfig(\n",
    "    exp_dir=f'{WORKSPACE_DIR}/irish_finetuning',\n",
    "    checkpoint_callback_params=exp_manager.CallbackParams(\n",
    "        monitor=\"val_wer\",\n",
    "        mode=\"min\",\n",
    "        always_save_nemo=True,\n",
    "        save_best_model=True,\n",
    "    ),\n",
    ")\n",
    "exp_manager.exp_manager(trainer, OmegaConf.structured(exp_manager_config))\n",
    "            \n",
    "# launching finetuning\n",
    "trainer.fit(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluating both models to compare performance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# let's evaluate the performance of the original and finetuned models on the test set of the Irish accent data\n",
    "# as well as on LibriSpeech test-other (a proxy for the generic ASR domain). We expect the finetuned model to be\n",
    "# significantly better on the Irish data and significantly worse on LibriSpeech\n",
    "\n",
    "# running the script to download LibriSpeech data\n",
    "os.makedirs(os.path.join(WORKSPACE_DIR, \"librispeech\"), exist_ok=True)\n",
    "!cd $NEMO_DIR && python scripts/dataset_processing/get_librispeech_data.py \\\n",
    "                        --data_root=$WORKSPACE_DIR/librispeech --data_set=test_other,dev_other\n",
    "\n",
    "\n",
    "# running evaluation with the generic model on LS. Typically this would be run as a script from the command line, but we want to\n",
    "# capture WER numbers for display later, so let's import and run the evaluation function here\n",
    "\n",
    "# adding script folder to python path to be able to import it\n",
    "import glob\n",
    "import sys\n",
    "import pandas as pd\n",
    "\n",
    "sys.path.insert(0, os.path.join(NEMO_DIR, \"examples\", \"asr\"))\n",
    "from speech_to_text_eval import EvaluationConfig, main as run_eval\n",
    "\n",
    "wer_results = {\n",
    "    'generic': [],  # LS, Irish\n",
    "    'finetuned': [],\n",
    "}\n",
    "\n",
    "# running evaluation with generic model\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"librispeech\", \"test_other.json\"),\n",
    "    pretrained_name=\"stt_en_conformer_ctc_large_ls\",\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['generic'].append(eval_cfg.metric_value)\n",
    "\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"slr83-data\", \"irish_english_male\", \"test_manifest.json\"),\n",
    "    pretrained_name=\"stt_en_conformer_ctc_large_ls\",\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['generic'].append(eval_cfg.metric_value)\n",
    "\n",
    "\n",
    "# running evaluation with finetuned model\n",
    "finetuned_model_path = glob.glob(os.path.join(WORKSPACE_DIR, \"irish_finetuning\", \"**\", \"*.nemo\"), recursive=True)[0]\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"librispeech\", \"test_other.json\"),\n",
    "    model_path=finetuned_model_path,\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['finetuned'].append(eval_cfg.metric_value)\n",
    "\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"slr83-data\", \"irish_english_male\", \"test_manifest.json\"),\n",
    "    model_path=finetuned_model_path,\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['finetuned'].append(eval_cfg.metric_value)\n",
    "\n",
    "# you should be able to see that the generic model is much better\n",
    "# on LibriSpeech and much worse on the accent data\n",
    "print(\"\\n*************************** Results ***************************\\n\")\n",
    "pd.DataFrame(wer_results, index=['LibriSpeech', 'Irish Accent']).transpose()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building and evaluating ensemble (default parameters)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# now let's finally combine the two models in the confidence-based ensemble!\n",
    "# first, we are going to use default parameters (no tuning)\n",
    "cmd = (\n",
    "    f\"cd {NEMO_DIR} && python scripts/confidence_ensembles/build_ensemble.py \"\n",
    "    # an example config is good enough for our purposes\n",
    "    f\"--config-path={NEMO_DIR}/scripts/confidence_ensembles --config-name=ensemble_config.yaml \"\n",
    "    # specifying model and corresponding dataset (to be used as ground-truth for logistic regression training)\n",
    "    \"ensemble.0.model=stt_en_conformer_ctc_large_ls \"\n",
    "    # by default it subsamples to a max of 1000 samples, so it's not going to use the full data\n",
    "    # note that for librispeech we are using the dev data - this is just to avoid downloading the training set\n",
    "    # in a real application it's perfectly fine (and simpler) to use the training data here\n",
    "    f\"ensemble.0.training_manifest={WORKSPACE_DIR}/librispeech/dev_other.json \"\n",
    "    # same for the second model/dataset\n",
    "    f\"ensemble.1.model={finetuned_model_path} \"\n",
    "    f\"ensemble.1.training_manifest={WORKSPACE_DIR}/slr83-data/irish_english_male/train_manifest.json \"\n",
    "    # setting up the final checkpoint location and lower batch size to save GPU memory\n",
    "    f\"output_path={WORKSPACE_DIR}/confidence_ensemble_default.nemo \"\n",
    "    \"transcription.batch_size=4 \"\n",
    ")\n",
    "\n",
    "# building the ensemble\n",
    "!$cmd\n",
    "\n",
    "# running evaluation on LibriSpeech and Irish accent data\n",
    "# you will see that the transcription is run 2 times, since we need to run both models to get confidence scores\n",
    "wer_results['ensemble (default)'] = []\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"librispeech\", \"test_other.json\"),\n",
    "    model_path=os.path.join(WORKSPACE_DIR, 'confidence_ensemble_default.nemo'),\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['ensemble (default)'].append(eval_cfg.metric_value)\n",
    "\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"slr83-data\", \"irish_english_male\", \"test_manifest.json\"),\n",
    "    model_path=os.path.join(WORKSPACE_DIR, 'confidence_ensemble_default.nemo'),\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['ensemble (default)'].append(eval_cfg.metric_value)\n",
    "\n",
    "# you should be able to see that the ensemble with default parameters is already \n",
    "# working very well. It might even be slightly better than the best model,\n",
    "    # because it can sometimes \"incorrectly\" pick the generic model on Irish data\n",
    "# when it's actually giving lower WER than the finetuned model (and same for LibriSpeech).\n",
    "print(\"\\n*************************** Results ***************************\\n\")\n",
    "pd.DataFrame(wer_results, index=['LibriSpeech', 'Irish Accent']).transpose()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building and evaluating ensemble (tuned parameters)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# now, we are going to allow tuning of the confidence and LR parameters to see how this affects results\n",
    "# this cell is quite similar to the previous one - the only difference is in the parameters\n",
    "# of the command line used to build the ensemble.\n",
    "\n",
    "# for LibriSpeech, since we already used validation for training the logistic regression \n",
    "# (to avoid downloading actual training data), we will create a new manifest with \n",
    "# just 100 samples for training and another 100 for validation\n",
    "!head -n 100 {WORKSPACE_DIR}/librispeech/dev_other.json > {WORKSPACE_DIR}/librispeech/dev_other_train100.json\n",
    "!tail -n 100 {WORKSPACE_DIR}/librispeech/dev_other.json > {WORKSPACE_DIR}/librispeech/dev_other_dev100.json\n",
    "\n",
    "# we keep everything exactly the same, but specify a few additional config settings\n",
    "cmd = (\n",
    "    f\"cd {NEMO_DIR} && python scripts/confidence_ensembles/build_ensemble.py \"\n",
    "    f\"--config-path={NEMO_DIR}/scripts/confidence_ensembles --config-name=ensemble_config.yaml \"\n",
    "    \"ensemble.0.model=stt_en_conformer_ctc_large_ls \"\n",
    "    f\"ensemble.0.training_manifest={WORKSPACE_DIR}/librispeech/dev_other_train100.json \"\n",
    "    f\"ensemble.1.model={finetuned_model_path} \"\n",
    "    f\"ensemble.1.training_manifest={WORKSPACE_DIR}/slr83-data/irish_english_male/train_manifest.json \"\n",
    "    # let's specify to just use 100 samples here as well to make tuning faster\n",
    "    # 100 is usually more than enough (remember that we are just fitting 2 parameters in the logistic regression)\n",
    "    # but default is 1000 just in case\n",
    "    \"ensemble.1.max_training_samples=100 \"\n",
    "    # the tuning will take a bit more memory, so let's use bs=2 this time\n",
    "    \"transcription.batch_size=2 \"\n",
    "    # requesting to tune the confidence\n",
    "    # you can also specify exactly what grid-search to run here,\n",
    "    # but we'll just use the default (it's reasonably large)\n",
    "    \"tune_confidence=True \"\n",
    "    # need to provide the validation sets for the tuning\n",
    "    f\"ensemble.0.dev_manifest={WORKSPACE_DIR}/librispeech/dev_other_dev100.json \"\n",
    "    f\"ensemble.1.dev_manifest={WORKSPACE_DIR}/slr83-data/irish_english_male/dev_manifest.json \"\n",
    "    f\"output_path={WORKSPACE_DIR}/confidence_ensemble_tuned.nemo \"\n",
    ")\n",
    "\n",
    "# building the ensemble. You should see that confidence computation step is \n",
    "# taking quite a bit longer - this is where the grid search happens\n",
    "!$cmd\n",
    "\n",
    "# running evaluation on LibriSpeech and Irish accent data\n",
    "# you will see that the transcription is run 2 times, since we need to run both models to get confidence scores\n",
    "wer_results['ensemble (tuned)'] = []\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"librispeech\", \"test_other.json\"),\n",
    "    model_path=os.path.join(WORKSPACE_DIR, 'confidence_ensemble_tuned.nemo'),\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['ensemble (tuned)'].append(eval_cfg.metric_value)\n",
    "\n",
    "eval_cfg = run_eval(EvaluationConfig(\n",
    "    dataset_manifest=os.path.join(WORKSPACE_DIR, \"slr83-data\", \"irish_english_male\", \"test_manifest.json\"),\n",
    "    model_path=os.path.join(WORKSPACE_DIR, 'confidence_ensemble_tuned.nemo'),\n",
    "    batch_size=4,\n",
    "    output_filename=os.path.join(WORKSPACE_DIR, \"eval_results.json\"),\n",
    "))\n",
    "wer_results['ensemble (tuned)'].append(eval_cfg.metric_value)\n",
    "\n",
    "# the tuned ensemble should be a bit better than default (but not too much)\n",
    "# note that there is a bit of randomness in the finetuning and our dev set is quite small\n",
    "# so it's possible that the tuned model can be similar to default or even slightly worse\n",
    "    # for real applications it's recommended to use a larger dev set,\n",
    "# but tuning will take longer in this case\n",
    "print(\"\\n*************************** Results ***************************\\n\")\n",
    "pd.DataFrame(wer_results, index=['LibriSpeech', 'Irish Accent']).transpose()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
