{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "df7PEVIJ5eWB"
   },
   "source": [
    "#### WHAT ARE WE BUILDING TODAY?  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "J3OvtPxn3G0P"
   },
   "source": [
    "In this notebook, we'll build an interesting end-to-end application using **Whisper, NeMo MSDD, and LanceDB**: a speaker-mapped transcription pipeline for an audio file.\n",
    "\n",
    "We'll generate a transcription with Whisper, perform diarization to identify the number of speakers and map them to timestamps, and then use LanceDB to match those speakers against a database of known speakers and recover their correct names.  \n",
    "\n",
    "This notebook should give you a kickstart in developing an end-to-end product and exploring how these technologies can be combined into innovative solutions. If you build something with it, share it on social media and tag me and LanceDB in your post.  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1tACIbzDXIJY"
   },
   "source": [
    "![Speaker_Mapped.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LdcbQs2lNy7f"
   },
   "source": [
    "#### How to use this notebook?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qzy7bCdFPg7I"
   },
   "source": [
    "I found a few resources on the internet that cover the end-to-end processing across the multiple stages of this project. I'll be reusing parts of them in this notebook while building the project, with all credits to the original creators.\n",
    "\n",
    "Notebook - https://shorturl.at/37hfR\n",
    "\n",
    "Blog - https://ufarooqi.com/blog/speaker-diarization-for-whisper-transcripts/\n",
    "\n",
    "Sharing them because they are worth a read. Once you've gone through the concepts, you'll be able to use this notebook much more effectively to build speaker-mapped transcription with LanceDB.\n",
    "\n",
    "![image.png]()\n",
    "\n",
    "1. First, we'll produce a naive transcription using Whisper and identify the issues with it.\n",
    "\n",
    "2. Then we'll connect LanceDB with Azure Blob Storage so we can use it in this application.\n",
    "\n",
    "3. Once both of those steps are done, we'll build the project and create a speaker-mapped transcription for an audio file."
   ]
  },
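  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the plan concrete, the three steps above can be sketched as one small pipeline. This is only an illustrative outline: `transcribe`, `diarize`, and `match_speakers` are hypothetical placeholder names for the stages we build later in this notebook, not real library APIs.\n",
    "\n",
    "```python\n",
    "# Hypothetical outline of the pipeline (placeholder functions, built step by step below)\n",
    "segments = transcribe(\"audio.wav\")            # Whisper: text + timestamps\n",
    "turns = diarize(\"audio.wav\")                  # NeMo MSDD: speaker labels + timestamps\n",
    "named_transcript = match_speakers(turns, db)  # LanceDB: map speakers to known names\n",
    "```"
   ]
  },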
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_zRyFufFOhor"
   },
   "source": [
    "#### Install Necessary Libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "collapsed": true,
    "id": "D2g4Oug9DtXV",
    "outputId": "5d09b8ca-1eaa-492c-f5b2-0cc551212581"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting nemo-toolkit>=2.dev (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nemo_toolkit-2.2.0rc2-py3-none-any.whl.metadata (76 kB)\n",
      "\u001b[?25l     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/76.4 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.4/76.4 kB\u001b[0m \u001b[31m2.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hRequirement already satisfied: huggingface_hub>=0.24 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (0.28.1)\n",
      "Requirement already satisfied: numba in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (0.61.0)\n",
      "Requirement already satisfied: numpy>=1.22 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.26.4)\n",
      "Collecting onnx>=1.7.0 (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading onnx-1.17.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB)\n",
      "Collecting protobuf==3.20.3 (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading protobuf-3.20.3-py2.py3-none-any.whl.metadata (720 bytes)\n",
      "Requirement already satisfied: python-dateutil in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (2.8.2)\n",
      "Collecting ruamel.yaml (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading ruamel.yaml-0.18.10-py3-none-any.whl.metadata (23 kB)\n",
      "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.6.1)\n",
      "Requirement already satisfied: setuptools>=70.0.0 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (75.1.0)\n",
      "Requirement already satisfied: tensorboard in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (2.18.0)\n",
      "Requirement already satisfied: text-unidecode in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.3)\n",
      "Requirement already satisfied: torch in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (2.5.1+cu124)\n",
      "Requirement already satisfied: tqdm>=4.41.0 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (4.67.1)\n",
      "Collecting wget (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading wget-3.2.zip (10 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: wrapt in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.17.2)\n",
      "Collecting braceexpand (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading braceexpand-0.1.7-py2.py3-none-any.whl.metadata (3.0 kB)\n",
      "Requirement already satisfied: editdistance in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.8.1)\n",
      "Requirement already satisfied: einops in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.8.1)\n",
      "Collecting g2p_en (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading g2p_en-2.1.0-py3-none-any.whl.metadata (4.5 kB)\n",
      "Collecting jiwer (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading jiwer-3.1.0-py3-none-any.whl.metadata (2.6 kB)\n",
      "Collecting kaldi-python-io (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading kaldi-python-io-1.2.2.tar.gz (8.8 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Collecting kaldiio (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading kaldiio-2.18.0-py3-none-any.whl.metadata (13 kB)\n",
      "Collecting lhotse>=1.26.0 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading lhotse-1.29.0-py3-none-any.whl.metadata (17 kB)\n",
      "Requirement already satisfied: librosa>=0.10.2 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.10.2.post1)\n",
      "Collecting marshmallow (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading marshmallow-3.26.1-py3-none-any.whl.metadata (7.3 kB)\n",
      "Collecting optuna (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading optuna-4.2.1-py3-none-any.whl.metadata (17 kB)\n",
      "Requirement already satisfied: packaging in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (24.2)\n",
      "Collecting pyannote.core (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading pyannote.core-5.0.0-py3-none-any.whl.metadata (1.4 kB)\n",
      "Collecting pyannote.metrics (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading pyannote.metrics-3.2.1-py3-none-any.whl.metadata (1.3 kB)\n",
      "Collecting pydub (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading pydub-0.25.1-py2.py3-none-any.whl.metadata (1.4 kB)\n",
      "Collecting pyloudnorm (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading pyloudnorm-0.1.1-py3-none-any.whl.metadata (5.6 kB)\n",
      "Collecting resampy (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading resampy-0.4.3-py3-none-any.whl.metadata (3.0 kB)\n",
      "Requirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (1.13.1)\n",
      "Requirement already satisfied: soundfile in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.13.1)\n",
      "Collecting sox (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading sox-1.5.0.tar.gz (63 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m63.9/63.9 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Collecting texterrors (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading texterrors-0.5.1.tar.gz (23 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: cloudpickle in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (3.1.1)\n",
      "Collecting fiddle (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading fiddle-0.3.0-py3-none-any.whl.metadata (2.3 kB)\n",
      "Collecting hydra-core<=1.3.2,>1.3 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading hydra_core-1.3.2-py3-none-any.whl.metadata (5.5 kB)\n",
      "Collecting lightning<=2.4.0,>2.2.1 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading lightning-2.4.0-py3-none-any.whl.metadata (38 kB)\n",
      "Collecting omegaconf<=2.3 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading omegaconf-2.3.0-py3-none-any.whl.metadata (3.9 kB)\n",
      "Requirement already satisfied: peft in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.14.0)\n",
      "Collecting torchmetrics>=0.11.0 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading torchmetrics-1.6.1-py3-none-any.whl.metadata (21 kB)\n",
      "Requirement already satisfied: transformers>=4.45.0 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (4.48.3)\n",
      "Requirement already satisfied: wandb in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.19.6)\n",
      "Collecting webdataset>=0.2.86 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading webdataset-0.2.111-py3-none-any.whl.metadata (15 kB)\n",
      "Collecting datasets (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading datasets-3.3.2-py3-none-any.whl.metadata (19 kB)\n",
      "Requirement already satisfied: inflect in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (7.5.0)\n",
      "Collecting mediapy==1.1.6 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading mediapy-1.1.6-py3-none-any.whl.metadata (4.8 kB)\n",
      "Requirement already satisfied: pandas in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (2.2.2)\n",
      "Collecting sacremoses>=0.0.43 (from nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading sacremoses-0.1.1-py3-none-any.whl.metadata (8.3 kB)\n",
      "Requirement already satisfied: sentencepiece<1.0.0 in /usr/local/lib/python3.11/dist-packages (from nemo-toolkit[asr]>=2.dev) (0.2.0)\n",
      "Requirement already satisfied: ipython in /usr/local/lib/python3.11/dist-packages (from mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (7.34.0)\n",
      "Requirement already satisfied: matplotlib in /usr/local/lib/python3.11/dist-packages (from mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (3.10.0)\n",
      "Requirement already satisfied: Pillow in /usr/local/lib/python3.11/dist-packages (from mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (11.1.0)\n",
      "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from huggingface_hub>=0.24->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.17.0)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.11/dist-packages (from huggingface_hub>=0.24->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (2024.10.0)\n",
      "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.11/dist-packages (from huggingface_hub>=0.24->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (6.0.2)\n",
      "Requirement already satisfied: requests in /usr/local/lib/python3.11/dist-packages (from huggingface_hub>=0.24->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (2.32.3)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.11/dist-packages (from huggingface_hub>=0.24->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (4.12.2)\n",
      "Collecting antlr4-python3-runtime==4.9.* (from hydra-core<=1.3.2,>1.3->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading antlr4-python3-runtime-4.9.3.tar.gz (117 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m117.0/117.0 kB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: audioread>=2.1.9 in /usr/local/lib/python3.11/dist-packages (from lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev) (3.0.1)\n",
      "Requirement already satisfied: click>=7.1.1 in /usr/local/lib/python3.11/dist-packages (from lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev) (8.1.8)\n",
      "Collecting cytoolz>=0.10.1 (from lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading cytoolz-1.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.6 kB)\n",
      "Collecting intervaltree>=3.1.0 (from lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading intervaltree-3.1.0.tar.gz (32 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: tabulate>=0.8.1 in /usr/local/lib/python3.11/dist-packages (from lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev) (0.9.0)\n",
      "Collecting lilcom>=1.1.0 (from lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading lilcom-1.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)\n",
      "Requirement already satisfied: joblib>=0.14 in /usr/local/lib/python3.11/dist-packages (from librosa>=0.10.2->nemo-toolkit[asr]>=2.dev) (1.4.2)\n",
      "Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.11/dist-packages (from librosa>=0.10.2->nemo-toolkit[asr]>=2.dev) (4.4.2)\n",
      "Requirement already satisfied: pooch>=1.1 in /usr/local/lib/python3.11/dist-packages (from librosa>=0.10.2->nemo-toolkit[asr]>=2.dev) (1.8.2)\n",
      "Requirement already satisfied: soxr>=0.3.2 in /usr/local/lib/python3.11/dist-packages (from librosa>=0.10.2->nemo-toolkit[asr]>=2.dev) (0.5.0.post1)\n",
      "Requirement already satisfied: lazy-loader>=0.1 in /usr/local/lib/python3.11/dist-packages (from librosa>=0.10.2->nemo-toolkit[asr]>=2.dev) (0.4)\n",
      "Requirement already satisfied: msgpack>=1.0 in /usr/local/lib/python3.11/dist-packages (from librosa>=0.10.2->nemo-toolkit[asr]>=2.dev) (1.1.0)\n",
      "Collecting lightning-utilities<2.0,>=0.10.0 (from lightning<=2.4.0,>2.2.1->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading lightning_utilities-0.12.0-py3-none-any.whl.metadata (5.6 kB)\n",
      "Collecting pytorch-lightning (from lightning<=2.4.0,>2.2.1->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading pytorch_lightning-2.5.0.post0-py3-none-any.whl.metadata (21 kB)\n",
      "Requirement already satisfied: llvmlite<0.45,>=0.44.0dev0 in /usr/local/lib/python3.11/dist-packages (from numba->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (0.44.0)\n",
      "Requirement already satisfied: regex in /usr/local/lib/python3.11/dist-packages (from sacremoses>=0.0.43->nemo-toolkit[asr]>=2.dev) (2024.11.6)\n",
      "Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.5.0)\n",
      "Requirement already satisfied: cffi>=1.0 in /usr/local/lib/python3.11/dist-packages (from soundfile->nemo-toolkit[asr]>=2.dev) (1.17.1)\n",
      "Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.4.2)\n",
      "Requirement already satisfied: jinja2 in /usr/local/lib/python3.11/dist-packages (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.1.5)\n",
      "Collecting nvidia-cuda-nvrtc-cu12==12.4.127 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cuda-runtime-cu12==12.4.127 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cuda-cupti-cu12==12.4.127 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Collecting nvidia-cudnn-cu12==9.1.0.70 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Collecting nvidia-cublas-cu12==12.4.5.8 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cufft-cu12==11.2.1.3 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-curand-cu12==10.3.5.147 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cusolver-cu12==11.6.1.9 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Collecting nvidia-cusparse-cu12==12.3.1.170 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Requirement already satisfied: nvidia-nccl-cu12==2.21.5 in /usr/local/lib/python3.11/dist-packages (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (2.21.5)\n",
      "Requirement already satisfied: nvidia-nvtx-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (12.4.127)\n",
      "Collecting nvidia-nvjitlink-cu12==12.4.127 (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Requirement already satisfied: triton==3.1.0 in /usr/local/lib/python3.11/dist-packages (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.1.0)\n",
      "Requirement already satisfied: sympy==1.13.1 in /usr/local/lib/python3.11/dist-packages (from torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.13.1)\n",
      "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy==1.13.1->torch->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.3.0)\n",
      "Requirement already satisfied: tokenizers<0.22,>=0.21 in /usr/local/lib/python3.11/dist-packages (from transformers>=4.45.0->nemo-toolkit[asr]>=2.dev) (0.21.0)\n",
      "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.11/dist-packages (from transformers>=4.45.0->nemo-toolkit[asr]>=2.dev) (0.5.2)\n",
      "Requirement already satisfied: pyarrow>=15.0.0 in /usr/local/lib/python3.11/dist-packages (from datasets->nemo-toolkit[asr]>=2.dev) (17.0.0)\n",
      "Collecting dill<0.3.9,>=0.3.0 (from datasets->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading dill-0.3.8-py3-none-any.whl.metadata (10 kB)\n",
      "Collecting xxhash (from datasets->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading xxhash-3.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)\n",
      "Collecting multiprocess<0.70.17 (from datasets->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading multiprocess-0.70.16-py311-none-any.whl.metadata (7.2 kB)\n",
      "Requirement already satisfied: aiohttp in /usr/local/lib/python3.11/dist-packages (from datasets->nemo-toolkit[asr]>=2.dev) (3.11.12)\n",
      "Requirement already satisfied: absl-py in /usr/local/lib/python3.11/dist-packages (from fiddle->nemo-toolkit[asr]>=2.dev) (1.4.0)\n",
      "Requirement already satisfied: graphviz in /usr/local/lib/python3.11/dist-packages (from fiddle->nemo-toolkit[asr]>=2.dev) (0.20.3)\n",
      "Collecting libcst (from fiddle->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading libcst-1.6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (17 kB)\n",
      "Requirement already satisfied: nltk>=3.2.4 in /usr/local/lib/python3.11/dist-packages (from g2p_en->nemo-toolkit[asr]>=2.dev) (3.9.1)\n",
      "Collecting distance>=0.1.3 (from g2p_en->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading Distance-0.1.3.tar.gz (180 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m180.3/180.3 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: more_itertools>=8.5.0 in /usr/local/lib/python3.11/dist-packages (from inflect->nemo-toolkit[asr]>=2.dev) (10.6.0)\n",
      "Requirement already satisfied: typeguard>=4.0.1 in /usr/local/lib/python3.11/dist-packages (from inflect->nemo-toolkit[asr]>=2.dev) (4.4.2)\n",
      "Collecting rapidfuzz>=3.9.7 (from jiwer->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading rapidfuzz-3.12.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)\n",
      "Collecting alembic>=1.5.0 (from optuna->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading alembic-1.14.1-py3-none-any.whl.metadata (7.4 kB)\n",
      "Collecting colorlog (from optuna->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading colorlog-6.9.0-py3-none-any.whl.metadata (10 kB)\n",
      "Requirement already satisfied: sqlalchemy>=1.4.2 in /usr/local/lib/python3.11/dist-packages (from optuna->nemo-toolkit[asr]>=2.dev) (2.0.38)\n",
      "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.11/dist-packages (from pandas->nemo-toolkit[asr]>=2.dev) (2025.1)\n",
      "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.11/dist-packages (from pandas->nemo-toolkit[asr]>=2.dev) (2025.1)\n",
      "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.11/dist-packages (from python-dateutil->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.17.0)\n",
      "Requirement already satisfied: psutil in /usr/local/lib/python3.11/dist-packages (from peft->nemo-toolkit[asr]>=2.dev) (5.9.5)\n",
      "Requirement already satisfied: accelerate>=0.21.0 in /usr/local/lib/python3.11/dist-packages (from peft->nemo-toolkit[asr]>=2.dev) (1.3.0)\n",
      "Collecting sortedcontainers>=2.0.4 (from pyannote.core->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata (10 kB)\n",
      "Collecting pyannote.database>=4.0.1 (from pyannote.metrics->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading pyannote.database-5.1.3-py3-none-any.whl.metadata (1.1 kB)\n",
      "Collecting docopt>=0.6.2 (from pyannote.metrics->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading docopt-0.6.2.tar.gz (25 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Requirement already satisfied: future>=0.16.0 in /usr/local/lib/python3.11/dist-packages (from pyloudnorm->nemo-toolkit[asr]>=2.dev) (1.0.0)\n",
      "Collecting ruamel.yaml.clib>=0.2.7 (from ruamel.yaml->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.7 kB)\n",
      "Requirement already satisfied: grpcio>=1.48.2 in /usr/local/lib/python3.11/dist-packages (from tensorboard->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (1.70.0)\n",
      "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.11/dist-packages (from tensorboard->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.7)\n",
      "Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /usr/local/lib/python3.11/dist-packages (from tensorboard->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (0.7.2)\n",
      "Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.11/dist-packages (from tensorboard->nemo-toolkit>=2.dev->nemo-toolkit[asr]>=2.dev) (3.1.3)\n",
      "Collecting pybind11 (from texterrors->nemo-toolkit[asr]>=2.dev)\n",
      "  Using cached pybind11-2.13.6-py3-none-any.whl.metadata (9.5 kB)\n",
      "Collecting plac (from texterrors->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading plac-1.4.3-py2.py3-none-any.whl.metadata (5.9 kB)\n",
      "Collecting loguru (from texterrors->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading loguru-0.7.3-py3-none-any.whl.metadata (22 kB)\n",
      "Requirement already satisfied: termcolor in /usr/local/lib/python3.11/dist-packages (from texterrors->nemo-toolkit[asr]>=2.dev) (2.5.0)\n",
      "Collecting Levenshtein (from texterrors->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading levenshtein-0.26.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.2 kB)\n",
      "Requirement already satisfied: docker-pycreds>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from wandb->nemo-toolkit[asr]>=2.dev) (0.4.0)\n",
      "Requirement already satisfied: gitpython!=3.1.29,>=1.0.0 in /usr/local/lib/python3.11/dist-packages (from wandb->nemo-toolkit[asr]>=2.dev) (3.1.44)\n",
      "Requirement already satisfied: platformdirs in /usr/local/lib/python3.11/dist-packages (from wandb->nemo-toolkit[asr]>=2.dev) (4.3.6)\n",
      "Requirement already satisfied: pydantic<3,>=2.6 in /usr/local/lib/python3.11/dist-packages (from wandb->nemo-toolkit[asr]>=2.dev) (2.10.6)\n",
      "Requirement already satisfied: sentry-sdk>=2.0.0 in /usr/local/lib/python3.11/dist-packages (from wandb->nemo-toolkit[asr]>=2.dev) (2.22.0)\n",
      "Requirement already satisfied: setproctitle in /usr/local/lib/python3.11/dist-packages (from wandb->nemo-toolkit[asr]>=2.dev) (1.3.4)\n",
      "Collecting Mako (from alembic>=1.5.0->optuna->nemo-toolkit[asr]>=2.dev)\n",
      "  Downloading Mako-1.3.9-py3-none-any.whl.metadata (2.9 kB)\n",
      "Requirement already satisfied: pycparser in /usr/local/lib/python3.11/dist-packages (from cffi>=1.0->soundfile->nemo-toolkit[asr]>=2.dev) (2.22)\n",
      "Requirement already satisfied: toolz>=0.8.0 in /usr/local/lib/python3.11/dist-packages (from cytoolz>=0.10.1->lhotse>=1.26.0->nemo-toolkit[asr]>=2.dev) (0.12.1)\n",
      "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (2.4.6)\n",
      "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (1.3.2)\n",
      "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (25.1.0)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (1.5.0)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (6.1.0)\n",
      "Requirement already satisfied: propcache>=0.2.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (0.2.1)\n",
      "Requirement already satisfied: yarl<2.0,>=1.17.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->datasets->nemo-toolkit[asr]>=2.dev) (1.18.3)\n",
      "Requirement already satisfied: gitdb<5,>=4.0.1 in /usr/local/lib/python3.11/dist-packages (from gitpython!=3.1.29,>=1.0.0->wandb->nemo-toolkit[asr]>=2.dev) (4.0.12)\n",
      "Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib->mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (1.3.1)\n",
      "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.11/dist-packages (from matplotlib->mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (0.12.1)\n",
      "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.11/dist-packages (from matplotlib->mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (4.56.0)\n",
      "Requirement already satisfied: kiwisolver>=1.3.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib->mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (1.4.8)\n",
      "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib->mediapy==1.1.6->nemo-toolkit[asr]>=2.dev) (3.2.1)\n",
      "Requirement already satisfied: typer>=0.12.1 in /usr/local/lib/python3.11/dist-packages (from pyannote.database>=4.0.1->pyannote.metrics->nemo-toolkit[asr]>=2.dev) (0.15.1)\n",
      "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<3,>=2.6->wandb->nemo-toolkit[asr]>=2.dev) (0.7.0)\n",
      "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
      "grpcio-status 1.62.3 requires protobuf>=4.21.6, but you have protobuf 3.20.3 which is incompatible.\n",
      "tensorflow-metadata 1.16.1 requires protobuf<6.0.0dev,>=4.25.2; python_version >= \"3.11\", but you have protobuf 3.20.3 which is incompatible.\u001b[0m\u001b[31m\n",
      "\u001b[0mSuccessfully installed Levenshtein-0.26.1 Mako-1.3.9 alembic-1.14.1 antlr4-python3-runtime-4.9.3 braceexpand-0.1.7 colorlog-6.9.0 cytoolz-1.0.1 datasets-3.3.2 dill-0.3.8 distance-0.1.3 docopt-0.6.2 fiddle-0.3.0 g2p_en-2.1.0 hydra-core-1.3.2 intervaltree-3.1.0 jedi-0.19.2 jiwer-3.1.0 kaldi-python-io-1.2.2 kaldiio-2.18.0 lhotse-1.29.0 libcst-1.6.0 lightning-2.4.0 lightning-utilities-0.12.0 lilcom-1.8.0 loguru-0.7.3 marshmallow-3.26.1 mediapy-1.1.6 multiprocess-0.70.16 nemo-toolkit-2.2.0rc2 nvidia-cublas-cu12-12.4.5.8 nvidia-cuda-cupti-cu12-12.4.127 nvidia-cuda-nvrtc-cu12-12.4.127 nvidia-cuda-runtime-cu12-12.4.127 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.2.1.3 nvidia-curand-cu12-10.3.5.147 nvidia-cusolver-cu12-11.6.1.9 nvidia-cusparse-cu12-12.3.1.170 nvidia-nvjitlink-cu12-12.4.127 omegaconf-2.3.0 onnx-1.17.0 optuna-4.2.1 plac-1.4.3 protobuf-3.20.3 pyannote.core-5.0.0 pyannote.database-5.1.3 pyannote.metrics-3.2.1 pybind11-2.13.6 pydub-0.25.1 pyloudnorm-0.1.1 pytorch-lightning-2.5.0.post0 rapidfuzz-3.12.1 resampy-0.4.3 ruamel.yaml-0.18.10 ruamel.yaml.clib-0.2.12 sacremoses-0.1.1 sortedcontainers-2.4.0 sox-1.5.0 texterrors-0.5.1 torchmetrics-1.6.1 webdataset-0.2.111 wget-3.2 xxhash-3.5.0\n"
     ]
    },
    {
     "data": {
      "application/vnd.colab-display-data+json": {
       "id": "d1ef4d2b829a4b2e8cc6ca5e06aeafdf",
       "pip_warning": {
        "packages": [
         "google",
         "pydevd_plugins"
        ]
       }
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting git+https://github.com/MahmoudAshraf97/ctc-forced-aligner.git\n",
      "Successfully built ctc-forced-aligner\n",
      "Installing collected packages: Unidecode, ctc-forced-aligner\n",
      "Successfully installed Unidecode-1.3.8 ctc-forced-aligner-0.3.0\n"
     ]
    }
   ],
   "source": [
    "# Note: this process is memory-intensive. For experimentation, you can test transcription and diarization separately; if you run them together along with embedding creation, make sure your Colab runtime has enough compute. A local setup also works; it'll be slower, but it'll work :)\n",
    "\n",
    "# comment/uncomment these commands and install as required.\n",
    "# for audio embeddings\n",
    "\n",
    "!pip install torchaudio speechbrain numpy\n",
    "!pip install faster_whisper\n",
    "\n",
    "# for lancedb connection\n",
    "!pip install adlfs lancedb\n",
    "\n",
    "# for transcription and diarization\n",
    "!pip install \"faster-whisper>=1.1.0\" ctranslate2==4.4.0\n",
    "!pip install \"nemo-toolkit[asr]>=2.dev\"\n",
    "\n",
    "# Not strictly required, but useful for improving results and for forced alignment to produce timestamped transcription.\n",
    "# !pip install git+https://github.com/MahmoudAshraf97/demucs.git\n",
    "!pip install git+https://github.com/MahmoudAshraf97/ctc-forced-aligner.git"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "id": "T3zTtnTX60Px"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import wget\n",
    "from omegaconf import OmegaConf\n",
    "import json\n",
    "import shutil\n",
    "import torch\n",
    "import torchaudio\n",
    "from nemo.collections.asr.models.msdd_models import NeuralDiarizer\n",
    "\n",
    "# from deepmultilingualpunctuation import PunctuationModel\n",
    "import re\n",
    "import logging\n",
    "import nltk\n",
    "import faster_whisper\n",
    "\n",
    "from ctc_forced_aligner import (\n",
    "    load_alignment_model,\n",
    "    generate_emissions,\n",
    "    preprocess_text,\n",
    "    get_alignments,\n",
    "    get_spans,\n",
    "    postprocess_results,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "preK8hURemS0"
   },
   "source": [
    "#### Download Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "T1i86tniewyj"
   },
   "source": [
    "You can choose to either download these sample audio files or upload your own audio samples for testing. I'll be using some of my own audio samples for this project."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xUbTvDbqhtwp",
    "outputId": "19b26ea3-129e-4a8a-9ca7-20f00d6619a6"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloaded: input_audio_arjun.mp3\n",
      "Downloaded: input_audio_hamdeep.m4a\n",
      "Downloaded: input_audio_shresth.m4a\n",
      "All files downloaded successfully!\n",
      "Downloaded: languages_pair.json\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "# List of audio files to download.\n",
    "# This could be a collection of all known speakers, plus the meeting recording if you are testing on one.\n",
    "\n",
    "audio_files = [\n",
    "    \"input_audio_arjun.mp3\",\n",
    "    \"input_audio_hamdeep.m4a\",\n",
    "    \"input_audio_shresth.m4a\",  # Add more files as needed\n",
    "]\n",
    "\n",
    "# Base URL for the raw files on GitHub. We are downloading this from LanceDB's Vector Recipes repo.\n",
    "base_url = \"https://raw.githubusercontent.com/lancedb/vectordb-recipes/main/examples/Speaker_Mapped_Transcription/Data/\"\n",
    "\n",
    "# Create a directory to store the files\n",
    "os.makedirs(\"audio_files\", exist_ok=True)\n",
    "\n",
    "# Download each file\n",
    "for file in audio_files:\n",
    "    file_url = base_url + file\n",
    "    output_path = f\"audio_files/{file}\"\n",
    "    os.system(f\"wget -q {file_url} -O {output_path}\")\n",
    "    print(f\"Downloaded: {file}\")\n",
    "print(\"All files downloaded successfully!\")\n",
    "\n",
    "# Base URL for the raw files on GitHub.\n",
    "base_url = \"https://raw.githubusercontent.com/lancedb/vectordb-recipes/main/examples/Speaker_Mapped_Transcription/\"\n",
    "lang_json_file = \"languages_pair.json\"\n",
    "lang_file_path = base_url + lang_json_file\n",
    "output_path = \"languages_pair.json\"\n",
    "os.system(f\"wget -q {lang_file_path} -O {output_path}\")\n",
    "print(f\"Downloaded: {lang_json_file}\")\n",
    "# Download the language-pair JSON used during Whisper transcription"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "im77vxBa5HU-"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Load the JSON file\n",
    "with open(\"languages_pair.json\", \"r\") as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# This language-pair data is used later in the helper functions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9f9DLDLZNqiq"
   },
   "source": [
    "#### Naive Transcription without Speaker Information"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 319,
     "referenced_widgets": [
      "ea8439c5188f416caf9a934aa462defb",
      "4a46258d0d0647e0b51d81bd9b23b41d",
      "4ac618e4ddec46d2ae4ecc5e2c66861a",
      "bad9dc86f11741af8959f56a91e7cd45",
      "4f1153cbab084c3395147929d86d117b",
      "408588a570fa40e7864239a742eaf6d7",
      "7f075617d0604a5d95fd7141ab6df845",
      "fc0479d6a6474420b4abdbd4e042c383",
      "a6220f9a9b304b0fa90885ec0026dd27",
      "8de4a423497d42c7a3a7a6874290f1a7",
      "b01f3999089e4b1799ae133f4df183ab",
      "574cea82db974bd29acbd924918bd406",
      "2d53041fdad24b559be398d81098b9f2",
      "a5e71c61827942e3aaea4709f25a6add",
      "113466a71f934e5e9f4c5c33387902d9",
      "48d47d8d8d5e4a2face5772f767fb5ab",
      "af8f4714f6254709be5a893e4912b00d",
      "4f183c4efc76429bbcdfc7a9c8791acb",
      "02e45ea943334115b5d5ec62f1fd3ea5",
      "d9051cb1760e49fab1e18e7bc20e3fab",
      "ce95614ec23b42248e02c3b53a3bdcc3",
      "44989240cca84d3690b223cd9e88f249",
      "8e6cf866c2fc4c77b451dbd65815ba5e",
      "d13fa8aa38504fd19c8daab54e344d0d",
      "1e2c1b6f8b1c4fd8b04a64c4e28265e1",
      "4623b958e97b4ea19612b04cf7d8297f",
      "2fc00f09f63c4bf9bbc255d06a3d23df",
      "edac29196dd24a8194c5f9e7cb8e8903",
      "6dcc075f600249e285a723d8be5af4b3",
      "e36dbf44a653438ba535671dbd30e619",
      "0a0b055247f246f9b926364d891af7e6",
      "c8fc1d03de134e5ba975a2a852f87de9",
      "58682a52cd0b4e948b07685931942676",
      "fd37b7cff0c0435f9f1e7c538bb5c895",
      "1f0921b9ac144b46bac82d6a108d9877",
      "da984d4e307041d1b585a2e4f001e8d6",
      "d32add1999cf4c83867d787d4840b9a3",
      "257f23fab918408b931e93f745667271",
      "6c31baadac954ae5805dbe13466080a0",
      "5e4df3fae3054c04b51a637b13f53e08",
      "2be0376f99ac4df9b628297c07fe3f9d",
      "d22f10dc17784b68a3f64c71aa10d347",
      "f9c1a48c17c4415fa4530114fd9698c0",
      "a98c6902109e49fd85cc450b98ec0405",
      "1b9ac37c1e6a4c779a554e257c05e1ae",
      "5216dadc01d54716abcc1138e772e0ac",
      "5e2f679570e14f8fa7f31608685d7f6c",
      "d8a21e172d2d4a1ead110fe9a74f3b9d",
      "2a381ba0823d4affa77a8ea80fcfdc98",
      "df2e127ff9f84054b3c475051fb33458",
      "826cde2353fb437c8d42af6be2ba7283",
      "ffd12c5b269c4849a336319d47637f51",
      "eeb570f4836d4303b0424567ed14dc9e",
      "2c1640becbbc40dbacefa8086b9f5774",
      "754cad0fc5ca481c912f5f6622024e39"
     ]
    },
    "id": "reR_m1G3k5Az",
    "outputId": "4d33fffb-84aa-47c6-8163-b5aa2fd18a43"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ea8439c5188f416caf9a934aa462defb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "574cea82db974bd29acbd924918bd406",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/2.80k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8e6cf866c2fc4c77b451dbd65815ba5e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.bin:   0%|          | 0.00/3.09G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "fd37b7cff0c0435f9f1e7c538bb5c895",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocabulary.txt:   0%|          | 0.00/460k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "1b9ac37c1e6a4c779a554e257c05e1ae",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/2.20M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Timestamped Transcript:\n",
      "[{'start': 1.08, 'end': 1.86, 'text': \"I'm\"}, {'start': 1.86, 'end': 2.36, 'text': 'recording'}, {'start': 2.36, 'end': 2.6, 'text': 'this'}, {'start': 2.6, 'end': 3.0, 'text': 'audio'}, {'start': 3.0, 'end': 3.52, 'text': 'to'}, {'start': 3.52, 'end': 3.96, 'text': 'compare'}, {'start': 3.96, 'end': 4.22, 'text': 'this'}, {'start': 4.22, 'end': 4.36, 'text': 'to'}, {'start': 4.36, 'end': 4.52, 'text': 'my'}, {'start': 4.52, 'end': 5.16, 'text': 'initial'}, {'start': 5.16, 'end': 5.6, 'text': 'audio'}, {'start': 5.6, 'end': 5.86, 'text': 'that'}, {'start': 5.86, 'end': 6.04, 'text': 'I'}, {'start': 6.04, 'end': 6.42, 'text': 'passed.'}, {'start': 7.12, 'end': 7.32, 'text': 'So'}, {'start': 7.32, 'end': 8.02, 'text': 'in'}, {'start': 8.02, 'end': 8.22, 'text': 'that'}, {'start': 8.22, 'end': 8.52, 'text': 'audio,'}, {'start': 8.68, 'end': 8.76, 'text': 'I'}, {'start': 8.76, 'end': 9.2, 'text': 'mentioned'}, {'start': 9.2, 'end': 9.62, 'text': 'how'}, {'start': 9.62, 'end': 10.58, 'text': \"I'm\"}, {'start': 10.58, 'end': 10.84, 'text': 'building'}, {'start': 10.84, 'end': 11.34, 'text': 'an'}, {'start': 11.34, 'end': 11.78, 'text': 'application'}, {'start': 11.78, 'end': 12.14, 'text': 'which'}, {'start': 12.14, 'end': 13.12, 'text': 'will'}, {'start': 13.12, 'end': 13.5, 'text': 'map'}, {'start': 13.5, 'end': 14.06, 'text': 'speakers'}, {'start': 14.06, 'end': 14.42, 'text': 'based'}, {'start': 14.42, 'end': 14.64, 'text': 'on'}, {'start': 14.64, 'end': 14.78, 'text': 'their'}, {'start': 14.78, 'end': 15.08, 'text': 'voices'}, {'start': 15.08, 'end': 15.36, 'text': 'in'}, {'start': 15.36, 'end': 15.5, 'text': 'the'}, {'start': 15.5, 'end': 15.98, 'text': 'transcription.'}, {'start': 16.78, 'end': 16.96, 'text': 'And'}, {'start': 16.96, 'end': 17.36, 'text': 'also'}, {'start': 17.36, 'end': 17.56, 'text': 'I'}, {'start': 17.56, 'end': 17.98, 'text': 'discussed'}, {'start': 17.98, 'end': 18.46, 'text': 'about'}, {'start': 18.46, 'end': 19.38, 'text': 'how'}, 
{'start': 19.38, 'end': 19.66, 'text': \"I'll\"}, {'start': 19.66, 'end': 19.88, 'text': 'be'}, {'start': 19.88, 'end': 20.4, 'text': 'visiting'}, {'start': 20.4, 'end': 21.62, 'text': 'an'}, {'start': 21.62, 'end': 22.14, 'text': 'event'}, {'start': 22.14, 'end': 22.6, 'text': 'tomorrow.'}, {'start': 23.25, 'end': 23.64, 'text': 'This'}, {'start': 23.64, 'end': 23.88, 'text': 'event'}, {'start': 23.88, 'end': 24.08, 'text': 'is'}, {'start': 24.08, 'end': 24.38, 'text': 'hosted'}, {'start': 24.38, 'end': 24.68, 'text': 'by'}, {'start': 24.68, 'end': 25.04, 'text': 'TensorFlow'}, {'start': 25.04, 'end': 25.62, 'text': 'Group'}, {'start': 25.62, 'end': 26.18, 'text': 'Ghaziabad'}, {'start': 26.18, 'end': 26.42, 'text': 'and'}, {'start': 26.42, 'end': 26.64, 'text': \"it's\"}, {'start': 26.64, 'end': 26.84, 'text': 'called'}, {'start': 26.84, 'end': 27.26, 'text': 'ML'}, {'start': 27.26, 'end': 27.8, 'text': 'Saturday.'}, {'start': 28.6, 'end': 28.9, 'text': 'So'}, {'start': 28.9, 'end': 29.26, 'text': 'it'}, {'start': 29.26, 'end': 29.44, 'text': 'is'}, {'start': 29.44, 'end': 29.56, 'text': 'on'}, {'start': 29.56, 'end': 29.96, 'text': 'Saturday,'}, {'start': 30.16, 'end': 30.32, 'text': \"that's\"}, {'start': 30.32, 'end': 30.42, 'text': 'why'}, {'start': 30.42, 'end': 30.52, 'text': 'it'}, {'start': 30.52, 'end': 30.74, 'text': 'is'}, {'start': 30.74, 'end': 31.18, 'text': 'named'}, {'start': 31.18, 'end': 31.46, 'text': 'as'}, {'start': 31.46, 'end': 31.72, 'text': 'ML'}, {'start': 31.72, 'end': 32.14, 'text': 'Saturday'}, {'start': 32.14, 'end': 32.46, 'text': 'where'}, {'start': 32.46, 'end': 32.62, 'text': 'we'}, {'start': 32.62, 'end': 33.22, 'text': 'have'}, {'start': 33.22, 'end': 33.6, 'text': 'some'}, {'start': 33.6, 'end': 34.18, 'text': 'professionals'}, {'start': 34.18, 'end': 34.68, 'text': 'coming'}, {'start': 34.68, 'end': 34.94, 'text': 'in'}, {'start': 34.94, 'end': 35.14, 'text': 'from'}, {'start': 35.14, 'end': 35.4, 'text': 'machine'}, 
{'start': 35.4, 'end': 35.7, 'text': 'learning'}, {'start': 35.7, 'end': 36.1, 'text': 'domain.'}, {'start': 36.88, 'end': 37.48, 'text': 'See'}, {'start': 37.48, 'end': 37.64, 'text': 'you'}, {'start': 37.64, 'end': 37.88, 'text': 'at'}, {'start': 37.88, 'end': 38.3, 'text': '12'}, {'start': 38.3, 'end': 38.76, 'text': 'tomorrow,'}, {'start': 39.22, 'end': 39.46, 'text': 'thank'}, {'start': 39.46, 'end': 39.62, 'text': 'you.'}]\n",
      "\n",
      "Plain Text Transcript:\n",
      "I'm recording this audio to compare this to my initial audio that I passed. So in that audio, I mentioned how I'm building an application which will map speakers based on their voices in the transcription. And also I discussed about how I'll be visiting an event tomorrow. This event is hosted by TensorFlow Group Ghaziabad and it's called ML Saturday. So it is on Saturday, that's why it is named as ML Saturday where we have some professionals coming in from machine learning domain. See you at 12 tomorrow, thank you.\n"
     ]
    }
   ],
   "source": [
    "from faster_whisper import WhisperModel\n",
    "\n",
    "\n",
    "def transcribe_audio(audio_path, model_size=\"large-v2\"):\n",
    "    \"\"\"\n",
    "    Transcribes an audio file using Faster Whisper and returns:\n",
    "    1. A timestamped transcript\n",
    "    2. A plain text transcript\n",
    "    \"\"\"\n",
    "    # Load Whisper model (Set `compute_type=\"float16\"` for GPU acceleration)\n",
    "    model = WhisperModel(\n",
    "        model_size, compute_type=\"float32\"\n",
    "    )  # Use \"float16\" if you have a compatible GPU\n",
    "\n",
    "    # Transcribe audio\n",
    "    segments, _ = model.transcribe(audio_path, word_timestamps=True)\n",
    "\n",
    "    timestamped_transcript = []\n",
    "    plain_text_transcript = \"\"\n",
    "\n",
    "    for segment in segments:\n",
    "        for word in segment.words:\n",
    "            start = round(word.start, 2)\n",
    "            end = round(word.end, 2)\n",
    "            text = word.word.strip()\n",
    "\n",
    "            # Append to timestamped transcript\n",
    "            timestamped_transcript.append({\"start\": start, \"end\": end, \"text\": text})\n",
    "\n",
    "            # Append to plain text transcript\n",
    "            plain_text_transcript += text + \" \"\n",
    "\n",
    "    return timestamped_transcript, plain_text_transcript.strip()\n",
    "\n",
    "\n",
    "# Example Usage\n",
    "audio_file = \"/content/audio_files/input_audio_shresth.m4a\"\n",
    "timestamped, plain_text = transcribe_audio(audio_file)\n",
    "\n",
    "print(\"\\nTimestamped Transcript:\")\n",
    "print(timestamped)\n",
    "\n",
    "print(\"\\nPlain Text Transcript:\")\n",
    "print(plain_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7QI1oOIz_Qs9"
   },
   "source": [
    "While this gives good transcription results, it is incomplete: we don't know who is speaking these words. Don't worry, we'll fix that. In the next step, we'll see how to connect LanceDB with Azure so that we can use it during development, and then we'll jump into building our solution."
   ]
  },
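  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, here is a minimal sketch of how the word timestamps above can later be merged with diarization output: each word is assigned to whichever speaker segment contains its midpoint. The `assign_speakers` helper and the segment format are illustrative assumptions, not part of the pipeline yet:\n",
    "\n",
    "```python\n",
    "def assign_speakers(words, segments):\n",
    "    # words: [{'start', 'end', 'text'}]; segments: [{'start', 'end', 'speaker'}]\n",
    "    labeled = []\n",
    "    for w in words:\n",
    "        mid = (w['start'] + w['end']) / 2  # midpoint of the spoken word\n",
    "        speaker = next(\n",
    "            (s['speaker'] for s in segments if s['start'] <= mid <= s['end']),\n",
    "            'unknown',\n",
    "        )\n",
    "        labeled.append({**w, 'speaker': speaker})\n",
    "    return labeled\n",
    "```"
   ]
  },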
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "3AJF0_yikp2S"
   },
   "source": [
    "#### How to use LanceDB with Azure Blob?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "id": "MGR_kzgWSeWx"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"AZURE_STORAGE_ACCOUNT_NAME\"] = \"<your_storage_account_name>\"\n",
    "os.environ[\"AZURE_STORAGE_ACCOUNT_KEY\"] = \"<your_account_access_key>\"\n",
    "# Note that other supported parameters can be set as environment variables in the same way."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "fkpPN40JMLMc",
    "outputId": "4f35d35e-0526-4f62-9b3f-2f522b86f019"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Connected to LanceDB on Azure Blob Storage!\n"
     ]
    }
   ],
   "source": [
    "import lancedb\n",
    "\n",
    "AZURE_BLOB_CONTAINER = \"externaldata\"\n",
    "\n",
    "# Define the LanceDB path in Azure Blob\n",
    "lance_db_path = f\"abfs://{AZURE_BLOB_CONTAINER}/lancedb/\"\n",
    "\n",
    "# Connect to LanceDB with Azure Blob Storage as backend\n",
    "# db = await lancedb.connect_async(f\"az://{AZURE_BLOB_CONTAINER}/lancedb/\")\n",
    "\n",
    "db = lancedb.connect(f\"az://{AZURE_BLOB_CONTAINER}/lancedb/\")\n",
    "# Check connection\n",
    "print(\"Connected to LanceDB on Azure Blob Storage!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "id": "DLSA4XvDKttT"
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "table_name = \"testing\"\n",
    "\n",
    "df = pd.DataFrame(\n",
    "    {\n",
    "        \"text\": [\"Hello, my name is Shresth\"],  # one row, so the text goes in a one-element list\n",
    "        \"vector\": [\n",
    "            [23, 45, 6, 7, 8, 8, 8, 923, 3, 3, 3, 3]\n",
    "        ],  # one flat 12-dimensional vector per row\n",
    "    }\n",
    ")\n",
    "\n",
    "db.create_table(table_name, data=df, mode=\"overwrite\")\n",
    "# The table with the same name is overwritten each time this cell is rerun with new data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "id": "8YO7O8EYTaBG"
   },
   "outputs": [],
   "source": [
    "# Open an existing table from the container and inspect its contents\n",
    "table = db.open_table(\"eng_chunks\")\n",
    "print(table)\n",
    "\n",
    "df = table.to_pandas()\n",
    "print(df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4LHxyopA_ILH"
   },
   "source": [
    "I think now we are ready to build our application."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "65VqGHH4k1GF"
   },
   "source": [
    "#### Speaker Mapping using Whisper, Nemo-MSDD and LanceDB\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6ao7m7LV8f99"
   },
   "source": [
    "##### Create a database of known speakers. You need multiple audio files with correct speaker names at this step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "5uwH0bMjDtPK"
   },
   "outputs": [],
   "source": [
    "import torchaudio\n",
    "import torch\n",
    "import speechbrain\n",
    "from speechbrain.inference import SpeakerRecognition\n",
    "import lancedb\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import pyarrow as pa"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 301,
     "referenced_widgets": [
      "564c62933e514bc3b1b0cd637955e834",
      "6ae50d0679e14e04a9aa0976b4f5cbf2",
      "01ddbc76e3aa4ac6b339ffe619c7ebf6",
      "7660a0d46ec5432fb441b0298ea7d57c",
      "2826cabeea0a45cf82c73af6b8058c1a",
      "4eebdf5238924e4ca6f96bb3d62ae8ff",
      "421ccc8aaaef4d2588e119afb48d5250",
      "4cc708f967544af9b9b69c3822b40deb",
      "db2925d7a7ee4031acbeabe5dbd89644",
      "441c67c04ea04acb905430c3e7db4cc8",
      "87d0987794f44b33bb0d57bf1b752835",
      "429f8b285c9c4377a97b352776c89e2a",
      "02560736a2404264a95daceec6b57fba",
      "d93bb10f701c48eaaa4c02ec47dba3bc",
      "8925ce0aa70c4fbfa6933ed5909050b2",
      "c51ac07ab4c844438cb3853520ce1e2a",
      "63d6838a68374c20935901b8affaaa05",
      "3946890d1af14d8cbba33f2d377b074a",
      "3f525d949cfd447a9f615bb8654a0b6c",
      "4adb957bf2464a46a2d3741ea02cbe39",
      "f753c16add784a4183313c8751dc2e8a",
      "5e7aa8469fe1439ca54c7e57c59f4dc0",
      "f70c139408c14a07ac3a55efed1fb568",
      "5396d1cedf0e4f849e807e6a57853c56",
      "3e9962f3076448fcb545d910cc96fe39",
      "8a305c948b6c43d38f5926fbf4e16926",
      "4b45f154eaee4f58ba8e17ffc51af520",
      "998be91c0d9c47c986d0b002b81dd747",
      "9b186b4dcce4472eb6a9b801d089f7b4",
      "1123b887c02a4979859e9886a5471b0f",
      "4689f879d10346fdae4e4b1dc03c7677",
      "b9a8e6b80ddc456a84570d6151e50600",
      "6848fce09ba2416bab3451a087ddcf26",
      "5e6137901bde49e0af0093e0df24131f",
      "de5c3b41a2484e1e8b4277dd838f24a6",
      "6127efebdf084bed99067ee186250417",
      "ead4489d7e52473bb0ce27eb80c3b52a",
      "f86d9e1addb44445b17aeeb7abb04fa8",
      "3b61f36a6a064e8d816cc14139faab5b",
      "d2068e2784a74c16985c6cfbbb5beb6d",
      "9e282f9f6a344fa8908f4db1ca9f1d06",
      "85727b39b0ce4946b83a835af2960bd7",
      "b3d8bb1010b842b8b60f7a22ff0f3c88",
      "34377b8bc73647aea9c395cba2a054bf",
      "8dd5351dc94f42ae95f99ce90ce22a95",
      "5dc49a8f0439473c8077be2646901610",
      "f55a8abac8f44c03acc2942b3910ad07",
      "87ba279885b3463fa37ae9fe310ccc15",
      "667ce59ba933429684b17791cce1b1b4",
      "06b81a2b882b4b249d4328f8f6b36a14",
      "baacb1d563f141d995d4621a123395ef",
      "be30659b8acc4540b88688e53f39e90d",
      "6612425bc2954ab9bc813c650cff2edd",
      "3a67bd315cd840799093f3cc9c1fedc1",
      "3d390e028ce34c2a9f70714545c7a6ba"
     ]
    },
    "id": "Y7LnmG2mEln8",
    "outputId": "87d0907e-04b7-4290-e561-312756380a51"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "564c62933e514bc3b1b0cd637955e834",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "hyperparams.yaml:   0%|          | 0.00/1.92k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.11/dist-packages/speechbrain/utils/autocast.py:68: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.\n",
      "  wrapped_fwd = torch.cuda.amp.custom_fwd(fwd, cast_inputs=cast_inputs)\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "429f8b285c9c4377a97b352776c89e2a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "embedding_model.ckpt:   0%|          | 0.00/83.3M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f70c139408c14a07ac3a55efed1fb568",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "mean_var_norm_emb.ckpt:   0%|          | 0.00/1.92k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5e6137901bde49e0af0093e0df24131f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "classifier.ckpt:   0%|          | 0.00/5.53M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8dd5351dc94f42ae95f99ce90ce22a95",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "label_encoder.txt:   0%|          | 0.00/129k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.11/dist-packages/speechbrain/utils/checkpoints.py:200: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
      "  state_dict = torch.load(path, map_location=device)\n",
      "/usr/local/lib/python3.11/dist-packages/speechbrain/processing/features.py:1311: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
      "  stats = torch.load(path, map_location=device)\n"
     ]
    }
   ],
   "source": [
    "# Load the Speaker Recognition model\n",
    "model = SpeakerRecognition.from_hparams(\n",
    "    source=\"speechbrain/spkrec-ecapa-voxceleb\", savedir=\"tmp_model\"\n",
    ")"
   ]
  },
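  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the loaded model can also compare two clips directly via SpeechBrain's `verify_files`. A minimal sketch, using the samples downloaded earlier:\n",
    "\n",
    "```python\n",
    "score, same = model.verify_files(\n",
    "    'audio_files/input_audio_arjun.mp3',\n",
    "    'audio_files/input_audio_shresth.m4a',\n",
    ")\n",
    "print(score, same)  # similarity score and a same-speaker prediction\n",
    "```"
   ]
  },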
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "id": "-sKnNdFEE62y"
   },
   "outputs": [],
   "source": [
    "# def get_embedding(audio_path):\n",
    "#     \"\"\"Extracts speaker embedding from an audio file\"\"\"\n",
    "#     signal, fs = torchaudio.load(audio_path)\n",
    "\n",
    "\n",
    "#     embedding = model.encode_batch(signal).squeeze().detach().cpu().numpy()\n",
    "#     return embedding.tolist()  # Convert to list for Lancedb storage\n",
    "\n",
    "\n",
    "def get_embedding(audio_path):\n",
    "    \"\"\"Extracts speaker embedding from an audio file\"\"\"\n",
    "    signal, fs = torchaudio.load(audio_path)\n",
    "\n",
    "    # Convert stereo to mono (if needed)\n",
    "    if signal.shape[0] > 1:\n",
    "        signal = torch.mean(signal, dim=0, keepdim=True)  # Average both channels\n",
    "\n",
    "    embedding = model.encode_batch(signal).squeeze().detach().cpu().numpy()\n",
    "    return embedding.flatten().tolist()  # Convert to list for Lancedb storage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "4ATrviGGxpyS"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"AZURE_STORAGE_ACCOUNT_NAME\"] = \"<your_storage_account_name>\"\n",
    "os.environ[\"AZURE_STORAGE_ACCOUNT_KEY\"] = \"<your_account_access_key>\"\n",
    "# Note that other supported parameters can be set as environment variables in the same way."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "sez8i9vvvf0Q"
   },
   "outputs": [],
   "source": [
    "# making blob connection\n",
    "\n",
    "AZURE_BLOB_CONTAINER = \"externaldata\"\n",
    "\n",
    "# Define the LanceDB path in Azure Blob\n",
    "lance_db_path = f\"abfs://{AZURE_BLOB_CONTAINER}/lancedb/\"\n",
    "\n",
    "# Connect to LanceDB with Azure Blob Storage as backend\n",
    "# db = await lancedb.connect_async(f\"az://{AZURE_BLOB_CONTAINER}/lancedb/\")\n",
    "\n",
    "db = lancedb.connect(f\"az://{AZURE_BLOB_CONTAINER}/lancedb/\")\n",
    "# Check connection\n",
    "print(\"Connected to LanceDB on Azure Blob Storage!\")\n",
    "\n",
    "# db = lancedb.connect(\"./speaker_db\")  # Creates/opens LanceDB directory on colab"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "id": "NdZppvTq1KvU"
   },
   "outputs": [],
   "source": [
    "# Define schema with audio storage\n",
    "schema = pa.schema(\n",
    "    [\n",
    "        (\"name\", pa.string()),\n",
    "        (\"embedding\", pa.list_(pa.float32(), 192)),  # 192-dimensional embedding\n",
    "        (\"audio\", pa.binary()),  # Store raw audio bytes\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "def load_audio_as_bytes(audio_path):\n",
    "    \"\"\"Reads an audio file and converts it to bytes\"\"\"\n",
    "    with open(audio_path, \"rb\") as f:\n",
    "        return f.read()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "id": "Fx4__Jhk07nw"
   },
   "outputs": [],
   "source": [
    "# Create table with the correct schema\n",
    "table = db.create_table(\"speakers\", schema=schema, mode=\"overwrite\")\n",
    "# Sample known speakers\n",
    "\n",
    "known_speakers = {\n",
    "    \"Shresth\": \"/content/audio_files/input_audio_shresth.m4a\",\n",
    "    \"Hamdeep\": \"/content/audio_files/input_audio_hamdeep.m4a\",\n",
    "    \"Arjun\": \"/content/audio_files/input_audio_arjun.mp3\",\n",
    "}\n",
    "\n",
    "# Store known speaker embeddings in LanceDB\n",
    "data = []\n",
    "for name, file in known_speakers.items():\n",
    "    embedding = get_embedding(file)\n",
    "    audio_bytes = load_audio_as_bytes(\n",
    "        file\n",
    "    )  # Convert audio to bytes; you could store a base64 string instead\n",
    "\n",
    "    data.append(\n",
    "        {\n",
    "            \"name\": name,\n",
    "            \"embedding\": embedding,\n",
    "            \"audio\": audio_bytes,  # Store audio in LanceDB\n",
    "        }\n",
    "    )\n",
    "\n",
    "table.add(data)"
   ]
  },
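  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The matching step later in this notebook boils down to nearest-neighbour search over these embeddings; LanceDB performs the equivalent lookup via something like `table.search(query_embedding).limit(1)`. As a minimal, self-contained sketch of the idea (using tiny made-up vectors instead of real 192-dimensional TitaNet embeddings, and a hypothetical 0.7 threshold), cosine-similarity matching looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    \"\"\"Cosine similarity between two 1-D vectors.\"\"\"\n",
    "    a, b = np.asarray(a, dtype=np.float32), np.asarray(b, dtype=np.float32)\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "\n",
    "def best_matching_speaker(query_embedding, known, threshold=0.7):\n",
    "    \"\"\"Return the known speaker closest to the query embedding,\n",
    "    or 'Unknown' if no similarity clears the threshold.\"\"\"\n",
    "    scores = {name: cosine_similarity(query_embedding, emb) for name, emb in known.items()}\n",
    "    name, score = max(scores.items(), key=lambda kv: kv[1])\n",
    "    return name if score >= threshold else \"Unknown\"\n",
    "\n",
    "\n",
    "# Toy embeddings standing in for the stored speaker vectors\n",
    "toy_known = {\"Shresth\": [1.0, 0.0, 0.0, 0.0], \"Arjun\": [0.0, 1.0, 0.0, 0.0]}\n",
    "print(best_matching_speaker([0.9, 0.1, 0.0, 0.0], toy_known))  # Shresth\n",
    "print(best_matching_speaker([0.0, 0.0, 1.0, 0.0], toy_known))  # Unknown"
   ]
  },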
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "WtYBgvs_gGwV",
    "outputId": "ecc8dc0c-4a28-4e3f-fa62-dbf88a717e27"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n",
      "192\n",
      "192\n",
      "192\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "print(len(data))\n",
    "print(len(data[0][\"embedding\"]))\n",
    "print(len(data[1][\"embedding\"]))\n",
    "print(len(data[2][\"embedding\"]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "id": "xOJgQmvhF4Fe"
   },
   "outputs": [],
   "source": [
    "# Reading audio bytes back from LanceDB and saving them as an audio file.\n",
    "# Fetch first speaker's data for reference.\n",
    "\n",
    "# row = table.search().limit(1).to_list()[0]\n",
    "\n",
    "# # Save audio back\n",
    "# with open(f\"{row['name']}.m4a\", \"wb\") as f:\n",
    "#     f.write(row[\"audio\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZN4xoruo81Eb"
   },
   "source": [
    "##### Step 2 - Set up base parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "gqgUj7YL9Kj4"
   },
   "outputs": [],
   "source": [
    "# Path to the audio file. You can pass your own meeting recording here if working on that use case.\n",
    "audio_path = \"/content/audio_files/input_audio_arjun.mp3\"\n",
    "\n",
    "# Stemming separates music from speech, which improves diarization quality but uses a lot of RAM.\n",
    "# We'll keep it False due to limited RAM and run directly on the audio; for better output quality, set it to True.\n",
    "\n",
    "enable_stemming = False\n",
    "\n",
    "# (choose from 'tiny.en', 'tiny', 'base.en', 'base', 'small.en', 'small', 'medium.en', 'medium', 'large-v1', 'large-v2', 'large-v3', 'large'). large-v2 performs decently; you can switch to larger models for better results.\n",
    "whisper_model_name = \"large-v2\"\n",
    "\n",
    "# Replaces numerical digits with their spoken form, which increases diarization accuracy. Not mandatory, but you can experiment with these parameters for better results.\n",
    "suppress_numerals = True\n",
    "\n",
    "batch_size = 8\n",
    "\n",
    "language = None  # autodetect language\n",
    "\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b_zFAgL1_2Rk"
   },
   "source": [
    "##### Helper Functions from referenced notebook."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "eI-LCpdCAE_s"
   },
   "source": [
    "You don't need to read through all of them; we'll only use the ones required in this notebook. To create TXT or SRT files after mapping, refer to the other notebook and integrate the rest of its code at the end, after creating the speaker mapping."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "JgP7NuEn--i6"
   },
   "outputs": [],
   "source": [
    "punct_model_langs = [\n",
    "    \"en\",\n",
    "    \"fr\",\n",
    "    \"de\",\n",
    "    \"es\",\n",
    "    \"it\",\n",
    "    \"nl\",\n",
    "    \"pt\",\n",
    "    \"bg\",\n",
    "    \"pl\",\n",
    "    \"cs\",\n",
    "    \"sk\",\n",
    "    \"sl\",\n",
    "]\n",
    "\n",
    "# Extract the language dictionaries from the data object loaded earlier in the notebook\n",
    "LANGUAGES = data[\"LANGUAGES\"]\n",
    "# print(LANGUAGES)\n",
    "\n",
    "\n",
    "TO_LANGUAGE_CODE = data[\"TO_LANGUAGE_CODE\"]\n",
    "# print(TO_LANGUAGE_CODE)\n",
    "\n",
    "\n",
    "langs_to_iso = {\n",
    "    \"af\": \"afr\",\n",
    "    \"am\": \"amh\",\n",
    "    \"ar\": \"ara\",\n",
    "    \"as\": \"asm\",\n",
    "    \"az\": \"aze\",\n",
    "    \"ba\": \"bak\",\n",
    "    \"be\": \"bel\",\n",
    "    \"bg\": \"bul\",\n",
    "    \"bn\": \"ben\",\n",
    "    \"bo\": \"tib\",\n",
    "    \"br\": \"bre\",\n",
    "    \"bs\": \"bos\",\n",
    "    \"ca\": \"cat\",\n",
    "    \"cs\": \"cze\",\n",
    "    \"cy\": \"wel\",\n",
    "    \"da\": \"dan\",\n",
    "    \"de\": \"ger\",\n",
    "    \"el\": \"gre\",\n",
    "    \"en\": \"eng\",\n",
    "    \"es\": \"spa\",\n",
    "    \"et\": \"est\",\n",
    "    \"eu\": \"baq\",\n",
    "    \"fa\": \"per\",\n",
    "    \"fi\": \"fin\",\n",
    "    \"fo\": \"fao\",\n",
    "    \"fr\": \"fre\",\n",
    "    \"gl\": \"glg\",\n",
    "    \"gu\": \"guj\",\n",
    "    \"ha\": \"hau\",\n",
    "    \"haw\": \"haw\",\n",
    "    \"he\": \"heb\",\n",
    "    \"hi\": \"hin\",\n",
    "    \"hr\": \"hrv\",\n",
    "    \"ht\": \"hat\",\n",
    "    \"hu\": \"hun\",\n",
    "    \"hy\": \"arm\",\n",
    "    \"id\": \"ind\",\n",
    "    \"is\": \"ice\",\n",
    "    \"it\": \"ita\",\n",
    "    \"ja\": \"jpn\",\n",
    "    \"jw\": \"jav\",\n",
    "    \"ka\": \"geo\",\n",
    "    \"kk\": \"kaz\",\n",
    "    \"km\": \"khm\",\n",
    "    \"kn\": \"kan\",\n",
    "    \"ko\": \"kor\",\n",
    "    \"la\": \"lat\",\n",
    "    \"lb\": \"ltz\",\n",
    "    \"ln\": \"lin\",\n",
    "    \"lo\": \"lao\",\n",
    "    \"lt\": \"lit\",\n",
    "    \"lv\": \"lav\",\n",
    "    \"mg\": \"mlg\",\n",
    "    \"mi\": \"mao\",\n",
    "    \"mk\": \"mac\",\n",
    "    \"ml\": \"mal\",\n",
    "    \"mn\": \"mon\",\n",
    "    \"mr\": \"mar\",\n",
    "    \"ms\": \"may\",\n",
    "    \"mt\": \"mlt\",\n",
    "    \"my\": \"bur\",\n",
    "    \"ne\": \"nep\",\n",
    "    \"nl\": \"dut\",\n",
    "    \"nn\": \"nno\",\n",
    "    \"no\": \"nor\",\n",
    "    \"oc\": \"oci\",\n",
    "    \"pa\": \"pan\",\n",
    "    \"pl\": \"pol\",\n",
    "    \"ps\": \"pus\",\n",
    "    \"pt\": \"por\",\n",
    "    \"ro\": \"rum\",\n",
    "    \"ru\": \"rus\",\n",
    "    \"sa\": \"san\",\n",
    "    \"sd\": \"snd\",\n",
    "    \"si\": \"sin\",\n",
    "    \"sk\": \"slo\",\n",
    "    \"sl\": \"slv\",\n",
    "    \"sn\": \"sna\",\n",
    "    \"so\": \"som\",\n",
    "    \"sq\": \"alb\",\n",
    "    \"sr\": \"srp\",\n",
    "    \"su\": \"sun\",\n",
    "    \"sv\": \"swe\",\n",
    "    \"sw\": \"swa\",\n",
    "    \"ta\": \"tam\",\n",
    "    \"te\": \"tel\",\n",
    "    \"tg\": \"tgk\",\n",
    "    \"th\": \"tha\",\n",
    "    \"tk\": \"tuk\",\n",
    "    \"tl\": \"tgl\",\n",
    "    \"tr\": \"tur\",\n",
    "    \"tt\": \"tat\",\n",
    "    \"uk\": \"ukr\",\n",
    "    \"ur\": \"urd\",\n",
    "    \"uz\": \"uzb\",\n",
    "    \"vi\": \"vie\",\n",
    "    \"yi\": \"yid\",\n",
    "    \"yo\": \"yor\",\n",
    "    \"yue\": \"yue\",\n",
    "    \"zh\": \"chi\",\n",
    "}\n",
    "\n",
    "whisper_langs = sorted(LANGUAGES.keys()) + sorted(\n",
    "    [k.title() for k in TO_LANGUAGE_CODE.keys()]\n",
    ")\n",
    "\n",
    "\n",
    "def create_config(output_dir):\n",
    "    DOMAIN_TYPE = \"telephonic\"  # Can be meeting, telephonic, or general based on domain type of the audio file.\n",
    "    CONFIG_FILE_NAME = f\"diar_infer_{DOMAIN_TYPE}.yaml\"\n",
    "    CONFIG_URL = f\"https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/inference/{CONFIG_FILE_NAME}\"\n",
    "    MODEL_CONFIG = os.path.join(output_dir, CONFIG_FILE_NAME)\n",
    "    if not os.path.exists(MODEL_CONFIG):\n",
    "        MODEL_CONFIG = wget.download(CONFIG_URL, output_dir)\n",
    "\n",
    "    config = OmegaConf.load(MODEL_CONFIG)\n",
    "\n",
    "    data_dir = os.path.join(output_dir, \"data\")\n",
    "    os.makedirs(data_dir, exist_ok=True)\n",
    "\n",
    "    meta = {\n",
    "        \"audio_filepath\": os.path.join(output_dir, \"mono_file.wav\"),\n",
    "        \"offset\": 0,\n",
    "        \"duration\": None,\n",
    "        \"label\": \"infer\",\n",
    "        \"text\": \"-\",\n",
    "        \"rttm_filepath\": None,\n",
    "        \"uem_filepath\": None,\n",
    "    }\n",
    "    with open(os.path.join(data_dir, \"input_manifest.json\"), \"w\") as fp:\n",
    "        json.dump(meta, fp)\n",
    "        fp.write(\"\\n\")\n",
    "\n",
    "    pretrained_vad = \"vad_multilingual_marblenet\"\n",
    "    pretrained_speaker_model = \"titanet_large\"\n",
    "    config.num_workers = 0  # Workaround for multiprocessing hanging with ipython issue\n",
    "    config.diarizer.manifest_filepath = os.path.join(data_dir, \"input_manifest.json\")\n",
    "    config.diarizer.out_dir = (\n",
    "        output_dir  # Directory to store intermediate files and prediction outputs\n",
    "    )\n",
    "\n",
    "    config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model\n",
    "    config.diarizer.oracle_vad = (\n",
    "        False  # compute VAD provided with model_path to vad config\n",
    "    )\n",
    "    config.diarizer.clustering.parameters.oracle_num_speakers = False\n",
    "\n",
    "    # Here, we use our in-house pretrained NeMo VAD model\n",
    "    config.diarizer.vad.model_path = pretrained_vad\n",
    "    config.diarizer.vad.parameters.onset = 0.8\n",
    "    config.diarizer.vad.parameters.offset = 0.6\n",
    "    config.diarizer.vad.parameters.pad_offset = -0.05\n",
    "    config.diarizer.msdd_model.model_path = (\n",
    "        \"diar_msdd_telephonic\"  # Telephonic speaker diarization model\n",
    "    )\n",
    "\n",
    "    return config\n",
    "\n",
    "\n",
    "def get_word_ts_anchor(s, e, option=\"start\"):\n",
    "    if option == \"end\":\n",
    "        return e\n",
    "    elif option == \"mid\":\n",
    "        return (s + e) / 2\n",
    "    return s\n",
    "\n",
    "\n",
    "def get_words_speaker_mapping(wrd_ts, spk_ts, word_anchor_option=\"start\"):\n",
    "    s, e, sp = spk_ts[0]\n",
    "    wrd_pos, turn_idx = 0, 0\n",
    "    wrd_spk_mapping = []\n",
    "    for wrd_dict in wrd_ts:\n",
    "        ws, we, wrd = (\n",
    "            int(wrd_dict[\"start\"] * 1000),\n",
    "            int(wrd_dict[\"end\"] * 1000),\n",
    "            wrd_dict[\"text\"],\n",
    "        )\n",
    "        wrd_pos = get_word_ts_anchor(ws, we, word_anchor_option)\n",
    "        while wrd_pos > float(e):\n",
    "            turn_idx += 1\n",
    "            turn_idx = min(turn_idx, len(spk_ts) - 1)\n",
    "            s, e, sp = spk_ts[turn_idx]\n",
    "            if turn_idx == len(spk_ts) - 1:\n",
    "                e = get_word_ts_anchor(ws, we, option=\"end\")\n",
    "        wrd_spk_mapping.append(\n",
    "            {\"word\": wrd, \"start_time\": ws, \"end_time\": we, \"speaker\": sp}\n",
    "        )\n",
    "    return wrd_spk_mapping\n",
    "\n",
    "\n",
    "sentence_ending_punctuations = \".?!\"\n",
    "\n",
    "\n",
    "def get_first_word_idx_of_sentence(word_idx, word_list, speaker_list, max_words):\n",
    "    is_word_sentence_end = (\n",
    "        lambda x: x >= 0 and word_list[x][-1] in sentence_ending_punctuations\n",
    "    )\n",
    "    left_idx = word_idx\n",
    "    while (\n",
    "        left_idx > 0\n",
    "        and word_idx - left_idx < max_words\n",
    "        and speaker_list[left_idx - 1] == speaker_list[left_idx]\n",
    "        and not is_word_sentence_end(left_idx - 1)\n",
    "    ):\n",
    "        left_idx -= 1\n",
    "\n",
    "    return left_idx if left_idx == 0 or is_word_sentence_end(left_idx - 1) else -1\n",
    "\n",
    "\n",
    "def get_last_word_idx_of_sentence(word_idx, word_list, max_words):\n",
    "    is_word_sentence_end = (\n",
    "        lambda x: x >= 0 and word_list[x][-1] in sentence_ending_punctuations\n",
    "    )\n",
    "    right_idx = word_idx\n",
    "    while (\n",
    "        right_idx < len(word_list) - 1\n",
    "        and right_idx - word_idx < max_words\n",
    "        and not is_word_sentence_end(right_idx)\n",
    "    ):\n",
    "        right_idx += 1\n",
    "\n",
    "    return (\n",
    "        right_idx\n",
    "        if right_idx == len(word_list) - 1 or is_word_sentence_end(right_idx)\n",
    "        else -1\n",
    "    )\n",
    "\n",
    "\n",
    "def get_realigned_ws_mapping_with_punctuation(\n",
    "    word_speaker_mapping, max_words_in_sentence=50\n",
    "):\n",
    "    is_word_sentence_end = (\n",
    "        lambda x: x >= 0\n",
    "        and word_speaker_mapping[x][\"word\"][-1] in sentence_ending_punctuations\n",
    "    )\n",
    "    wsp_len = len(word_speaker_mapping)\n",
    "\n",
    "    words_list, speaker_list = [], []\n",
    "    for k, line_dict in enumerate(word_speaker_mapping):\n",
    "        word, speaker = line_dict[\"word\"], line_dict[\"speaker\"]\n",
    "        words_list.append(word)\n",
    "        speaker_list.append(speaker)\n",
    "\n",
    "    k = 0\n",
    "    while k < len(word_speaker_mapping):\n",
    "        line_dict = word_speaker_mapping[k]\n",
    "        if (\n",
    "            k < wsp_len - 1\n",
    "            and speaker_list[k] != speaker_list[k + 1]\n",
    "            and not is_word_sentence_end(k)\n",
    "        ):\n",
    "            left_idx = get_first_word_idx_of_sentence(\n",
    "                k, words_list, speaker_list, max_words_in_sentence\n",
    "            )\n",
    "            right_idx = (\n",
    "                get_last_word_idx_of_sentence(\n",
    "                    k, words_list, max_words_in_sentence - k + left_idx - 1\n",
    "                )\n",
    "                if left_idx > -1\n",
    "                else -1\n",
    "            )\n",
    "            if min(left_idx, right_idx) == -1:\n",
    "                k += 1\n",
    "                continue\n",
    "\n",
    "            spk_labels = speaker_list[left_idx : right_idx + 1]\n",
    "            mod_speaker = max(set(spk_labels), key=spk_labels.count)\n",
    "            if spk_labels.count(mod_speaker) < len(spk_labels) // 2:\n",
    "                k += 1\n",
    "                continue\n",
    "\n",
    "            speaker_list[left_idx : right_idx + 1] = [mod_speaker] * (\n",
    "                right_idx - left_idx + 1\n",
    "            )\n",
    "            k = right_idx\n",
    "\n",
    "        k += 1\n",
    "\n",
    "    k, realigned_list = 0, []\n",
    "    while k < len(word_speaker_mapping):\n",
    "        line_dict = word_speaker_mapping[k].copy()\n",
    "        line_dict[\"speaker\"] = speaker_list[k]\n",
    "        realigned_list.append(line_dict)\n",
    "        k += 1\n",
    "\n",
    "    return realigned_list\n",
    "\n",
    "\n",
    "def get_sentences_speaker_mapping(word_speaker_mapping, spk_ts):\n",
    "    sentence_checker = nltk.tokenize.PunktSentenceTokenizer().text_contains_sentbreak\n",
    "    s, e, spk = spk_ts[0]\n",
    "    prev_spk = spk\n",
    "\n",
    "    snts = []\n",
    "    snt = {\"speaker\": f\"Speaker {spk}\", \"start_time\": s, \"end_time\": e, \"text\": \"\"}\n",
    "\n",
    "    for wrd_dict in word_speaker_mapping:\n",
    "        wrd, spk = wrd_dict[\"word\"], wrd_dict[\"speaker\"]\n",
    "        s, e = wrd_dict[\"start_time\"], wrd_dict[\"end_time\"]\n",
    "        if spk != prev_spk or sentence_checker(snt[\"text\"] + \" \" + wrd):\n",
    "            snts.append(snt)\n",
    "            snt = {\n",
    "                \"speaker\": f\"Speaker {spk}\",\n",
    "                \"start_time\": s,\n",
    "                \"end_time\": e,\n",
    "                \"text\": \"\",\n",
    "            }\n",
    "        else:\n",
    "            snt[\"end_time\"] = e\n",
    "        snt[\"text\"] += wrd + \" \"\n",
    "        prev_spk = spk\n",
    "\n",
    "    snts.append(snt)\n",
    "    return snts\n",
    "\n",
    "\n",
    "def get_speaker_aware_transcript(sentences_speaker_mapping, f):\n",
    "    previous_speaker = sentences_speaker_mapping[0][\"speaker\"]\n",
    "    f.write(f\"{previous_speaker}: \")\n",
    "\n",
    "    for sentence_dict in sentences_speaker_mapping:\n",
    "        speaker = sentence_dict[\"speaker\"]\n",
    "        sentence = sentence_dict[\"text\"]\n",
    "\n",
    "        # If this speaker doesn't match the previous one, start a new paragraph\n",
    "        if speaker != previous_speaker:\n",
    "            f.write(f\"\\n\\n{speaker}: \")\n",
    "            previous_speaker = speaker\n",
    "\n",
    "        # No matter what, write the current sentence\n",
    "        f.write(sentence + \" \")\n",
    "\n",
    "\n",
    "def format_timestamp(\n",
    "    milliseconds: float, always_include_hours: bool = False, decimal_marker: str = \".\"\n",
    "):\n",
    "    assert milliseconds >= 0, \"non-negative timestamp expected\"\n",
    "\n",
    "    hours = milliseconds // 3_600_000\n",
    "    milliseconds -= hours * 3_600_000\n",
    "\n",
    "    minutes = milliseconds // 60_000\n",
    "    milliseconds -= minutes * 60_000\n",
    "\n",
    "    seconds = milliseconds // 1_000\n",
    "    milliseconds -= seconds * 1_000\n",
    "\n",
    "    hours_marker = f\"{hours:02d}:\" if always_include_hours or hours > 0 else \"\"\n",
    "    return (\n",
    "        f\"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}\"\n",
    "    )\n",
    "\n",
    "\n",
    "def write_srt(transcript, file):\n",
    "    \"\"\"\n",
    "    Write a transcript to a file in SRT format.\n",
    "    \"\"\"\n",
    "    for i, segment in enumerate(transcript, start=1):\n",
    "        # write srt lines\n",
    "        print(\n",
    "            f\"{i}\\n\"\n",
    "            f\"{format_timestamp(segment['start_time'], always_include_hours=True, decimal_marker=',')} --> \"\n",
    "            f\"{format_timestamp(segment['end_time'], always_include_hours=True, decimal_marker=',')}\\n\"\n",
    "            f\"{segment['speaker']}: {segment['text'].strip().replace('-->', '->')}\\n\",\n",
    "            file=file,\n",
    "            flush=True,\n",
    "        )\n",
    "\n",
    "\n",
    "def find_numeral_symbol_tokens(tokenizer):\n",
    "    numeral_symbol_tokens = [\n",
    "        -1,\n",
    "    ]\n",
    "    for token, token_id in tokenizer.get_vocab().items():\n",
    "        has_numeral_symbol = any(c in \"0123456789%$£\" for c in token)\n",
    "        if has_numeral_symbol:\n",
    "            numeral_symbol_tokens.append(token_id)\n",
    "    return numeral_symbol_tokens\n",
    "\n",
    "\n",
    "def _get_next_start_timestamp(word_timestamps, current_word_index, final_timestamp):\n",
    "    # if current word is the last word\n",
    "    if current_word_index == len(word_timestamps) - 1:\n",
    "        return word_timestamps[current_word_index][\"start\"]\n",
    "\n",
    "    next_word_index = current_word_index + 1\n",
    "    while current_word_index < len(word_timestamps) - 1:\n",
    "        if word_timestamps[next_word_index].get(\"start\") is None:\n",
    "            # if next word doesn't have a start timestamp\n",
    "            # merge it with the current word and delete it\n",
    "            word_timestamps[current_word_index][\"word\"] += (\n",
    "                \" \" + word_timestamps[next_word_index][\"word\"]\n",
    "            )\n",
    "\n",
    "            word_timestamps[next_word_index][\"word\"] = None\n",
    "            next_word_index += 1\n",
    "            if next_word_index == len(word_timestamps):\n",
    "                return final_timestamp\n",
    "\n",
    "        else:\n",
    "            return word_timestamps[next_word_index][\"start\"]\n",
    "\n",
    "\n",
    "def filter_missing_timestamps(\n",
    "    word_timestamps, initial_timestamp=0, final_timestamp=None\n",
    "):\n",
    "    # handle the first and last word\n",
    "    if word_timestamps[0].get(\"start\") is None:\n",
    "        word_timestamps[0][\"start\"] = (\n",
    "            initial_timestamp if initial_timestamp is not None else 0\n",
    "        )\n",
    "        word_timestamps[0][\"end\"] = _get_next_start_timestamp(\n",
    "            word_timestamps, 0, final_timestamp\n",
    "        )\n",
    "\n",
    "    result = [\n",
    "        word_timestamps[0],\n",
    "    ]\n",
    "\n",
    "    for i, ws in enumerate(word_timestamps[1:], start=1):\n",
    "        # if ws doesn't have a start and end\n",
    "        # use the previous end as start and next start as end\n",
    "        if ws.get(\"start\") is None and ws.get(\"word\") is not None:\n",
    "            ws[\"start\"] = word_timestamps[i - 1][\"end\"]\n",
    "            ws[\"end\"] = _get_next_start_timestamp(word_timestamps, i, final_timestamp)\n",
    "\n",
    "        if ws[\"word\"] is not None:\n",
    "            result.append(ws)\n",
    "    return result\n",
    "\n",
    "\n",
    "def cleanup(path: str):\n",
    "    \"\"\"path could either be relative or absolute.\"\"\"\n",
    "    # check if file or directory exists\n",
    "    if os.path.isfile(path) or os.path.islink(path):\n",
    "        # remove file\n",
    "        os.remove(path)\n",
    "    elif os.path.isdir(path):\n",
    "        # remove directory and all its content\n",
    "        shutil.rmtree(path)\n",
    "    else:\n",
    "        raise ValueError(\"Path {} is not a file or dir.\".format(path))\n",
    "\n",
    "\n",
    "def process_language_arg(language: str, model_name: str):\n",
    "    \"\"\"\n",
    "    Process the language argument to make sure it's valid and convert language names to language codes.\n",
    "    \"\"\"\n",
    "    # language=None means autodetect, so only validate when a language is given\n",
    "    if language is not None:\n",
    "        language = language.lower()\n",
    "        if language not in LANGUAGES:\n",
    "            if language in TO_LANGUAGE_CODE:\n",
    "                language = TO_LANGUAGE_CODE[language]\n",
    "            else:\n",
    "                raise ValueError(f\"Unsupported language: {language}\")\n",
    "\n",
    "    if model_name.endswith(\".en\") and language != \"en\":\n",
    "        if language is not None:\n",
    "            logging.warning(\n",
    "                f\"{model_name} is an English-only model but received '{language}'; using English instead.\"\n",
    "            )\n",
    "        language = \"en\"\n",
    "    return language"
   ]
  },
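  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check on the conventions these helpers share: all timestamps are integer milliseconds, and `format_timestamp` (defined in the cell above) renders them as `[HH:]MM:SS` strings with a configurable decimal marker (a comma for SRT output). For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Timestamps flow through the helpers as integer milliseconds\n",
    "print(format_timestamp(3_723_004, always_include_hours=True))  # 01:02:03.004\n",
    "print(format_timestamp(61_500, decimal_marker=\",\"))  # 01:01,500 (SRT-style marker)"
   ]
  },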
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uDli49hI__o8"
   },
   "source": [
    "##### Transcription using Whisper"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "KtllC12LAPUT",
    "outputId": "44c23e9c-2621-4d31-af49-dea82161d081"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "We are using Audio Directly\n"
     ]
    }
   ],
   "source": [
    "if enable_stemming:\n",
    "    # Isolate vocals from the rest of the audio\n",
    "\n",
    "    return_code = os.system(\n",
    "        f'python -m demucs.separate -n htdemucs --two-stems=vocals \"{audio_path}\" -o \"temp_outputs\" --device \"{device}\"'\n",
    "    )\n",
    "\n",
    "    if return_code != 0:\n",
    "        logging.warning(\"Source splitting failed, using original audio file.\")\n",
    "        vocal_target = audio_path\n",
    "    else:\n",
    "        vocal_target = os.path.join(\n",
    "            \"temp_outputs\",\n",
    "            \"htdemucs\",\n",
    "            os.path.splitext(os.path.basename(audio_path))[0],\n",
    "            \"vocals.wav\",\n",
    "        )\n",
    "else:\n",
    "    print(\"We are using Audio Directly\")\n",
    "    vocal_target = audio_path"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 145,
     "referenced_widgets": [
      "4533c103854d49048ff698e95df05993",
      "59823ab330654f859dd4476053c31524",
      "b6dbc6355a63450b823e33cfae9fd5fa",
      "075b8ab91bd64ef09ce08d5eb1bf1454",
      "2f2901eb16b844a49544c8c029d0ce6d",
      "a1ff9cdbbde340049e3e26a81f4940c5",
      "e95d6a982c65417a88bc03df837fa4f3",
      "bc428b87637e4f79806f08f899980d9b",
      "d51ba27d014247b2b0d8e323f29e9f0a",
      "8b9deb806d734c8483697ed4069d10d5",
      "377c2bb637f44ea29ea8390c0903ef12",
      "bf332b8ef2c442a48684eba10c0dc0c4",
      "7e5f7c92c66c46a4a0cb53d528b1656c",
      "a1cc9094d33d4651998e15be49972270",
      "aea294f6b580415f810894c5ccca78ea",
      "89585d4074224e32b7d29d38cb8d6523",
      "bafa86ea363f4605ae94ee6d68fa1e1d",
      "11d051f751d941c5ba21604cfad28732",
      "f518cc5de2f643b18fceaa57bd36030d",
      "322c13e0875f456eb30f0b4f5b68d02f",
      "3fddbf669b93464a85b9e47f344f5bf8",
      "88f97ac12578468abe01583d76a0cf22",
      "494c985b22ea45f69e9efec6d27c0abe",
      "c407091f0fd24b23a3f860341ef1a93d",
      "b622fef7fa8e496785e3b1196244d261",
      "8287d29f16f3430bb68f6a124801b64f",
      "cb18e818f3474c78944ffff7c1ecea6c",
      "5a9dd93935c04aa2a77bda7637998a7b",
      "ca48bc8c9bdf4d7780b433b81023c93d",
      "561b80ee154b4cc4af6319d544a9fe50",
      "760932d7b8c5446395ff7a2a14b73348",
      "b51e940c3571424b8012d2dd8d7b6338",
      "19a32124e3e94478bb8a1df472ce5246",
      "f8680e94a511497a982a2ae3776547fd",
      "f581ad2e130d4213bdc24aebd9b8b78c",
      "37f8fc58dfd64d6cbb116d0bded50f55",
      "4a67b5445a7043bbaae5f09a44bf6a85",
      "3c3181e1708648c09c8bf73c2fd57c22",
      "c1f996a87b97486080e044552b63e9ef",
      "bdfecfcd53554afcbd952e07b84a18d7",
      "910567496bdc44548300536eee8a4b35",
      "7d3f36e0164c4030bebf358903656f4e",
      "f33dee97fdda40338501ec534921cdc0",
      "92d3445fa3534c628790cddc4d2fce52"
     ]
    },
    "id": "jXdmBPYV-kO8",
    "outputId": "d5ca7b29-2c54-4ac3-d37e-9dbfdc096854"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "4533c103854d49048ff698e95df05993",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/2.20M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "bf332b8ef2c442a48684eba10c0dc0c4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/2.80k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "494c985b22ea45f69e9efec6d27c0abe",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocabulary.txt:   0%|          | 0.00/460k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f8680e94a511497a982a2ae3776547fd",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.bin:   0%|          | 0.00/3.09G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "compute_type = \"int8\"  # INT8 works on both CPU and GPU\n",
    "# or run on GPU with INT8/FP16 mixed precision\n",
    "# compute_type = \"int8_float16\"\n",
    "# or run on GPU with full FP16\n",
    "# compute_type = \"float16\"\n",
    "\n",
    "whisper_model = faster_whisper.WhisperModel(\n",
    "    whisper_model_name, device=device, compute_type=compute_type\n",
    ")\n",
    "whisper_pipeline = faster_whisper.BatchedInferencePipeline(whisper_model)\n",
    "\n",
    "audio_waveform = faster_whisper.decode_audio(vocal_target)\n",
    "\n",
    "suppress_tokens = (\n",
    "    find_numeral_symbol_tokens(whisper_model.hf_tokenizer)\n",
    "    if suppress_numerals\n",
    "    else [-1]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "-8SR9IoS9Kgq",
    "outputId": "622d95f7-2c6a-44cd-cf3c-3f90618b5bb0"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " name and address of importer z lifestyle private private limited USB Wire\n"
     ]
    }
   ],
   "source": [
    "# The Colab session often crashes at this step due to heavy processing. Make sure your other notebooks are not using the Colab runtime, or switch to a TPU/local setup :)\n",
    "# If you only want to see how to create a speaker-mapped RTTM file, you don't need to run this cell. If you want to create transcriptions (text/SRT files), run this and then perform forced alignment for a word-timestamped transcript.\n",
    "\n",
    "transcript_segments, info = whisper_pipeline.transcribe(\n",
    "    audio_waveform,\n",
    "    language,\n",
    "    suppress_tokens=suppress_tokens,\n",
    "    batch_size=batch_size,\n",
    "    without_timestamps=True,\n",
    ")\n",
    "\n",
    "\n",
    "full_transcript = \"\".join(segment.text for segment in transcript_segments)\n",
    "print(full_transcript)\n",
    "\n",
    "# clear gpu vram\n",
    "del whisper_model, whisper_pipeline\n",
    "torch.cuda.empty_cache()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "id": "C20YZoTa9Kc2"
   },
   "outputs": [],
   "source": [
    "# You'll need this code to create a transcription with forced alignment. The audio_waveform tensor created here is also used in the next cells during diarization.\n",
    "# Note that without_timestamps in the code above is set to True, because forced alignment gives us the correct word-timestamp mapping.\n",
    "# Refer to the notebook mentioned in the starting cells for the full forced-alignment code and for creating a transcript with correct timestamps.\n",
    "# I have limited this notebook to creating an RTTM file with updated speaker names.\n",
    "\n",
    "alignment_model, alignment_tokenizer = load_alignment_model(\n",
    "    device,\n",
    "    dtype=torch.float16 if device == \"cuda\" else torch.float32,\n",
    ")\n",
    "\n",
    "audio_waveform = (\n",
    "    torch.from_numpy(audio_waveform)\n",
    "    .to(alignment_model.dtype)\n",
    "    .to(alignment_model.device)\n",
    ")\n",
    "\n",
    "# you can use this audio_waveform during diarization step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "id": "sJswhhltLXDN"
   },
   "outputs": [],
   "source": [
    "del alignment_model\n",
    "torch.cuda.empty_cache()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KjcOi5pUC_3m"
   },
   "source": [
    "##### Performing Diarization using Nemo-MSDD"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": true,
    "id": "errKa6eTDGdt"
   },
   "outputs": [],
   "source": [
    "# Convert the audio to mono for NeMo compatibility.\n",
    "ROOT = os.getcwd()\n",
    "temp_path = os.path.join(ROOT, \"temp_outputs\")\n",
    "os.makedirs(temp_path, exist_ok=True)\n",
    "torchaudio.save(\n",
    "    os.path.join(temp_path, \"mono_file.wav\"),\n",
    "    audio_waveform.cpu().unsqueeze(0).float(),\n",
    "    16000,\n",
    "    channels_first=True,\n",
    ")"
   ]
  },
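  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The diarization run below writes its predictions as an RTTM file (NeMo typically places it under `pred_rttms/` inside the output directory). Since the end goal here is an RTTM file with real speaker names, it helps to know the format. Here is a small, self-contained sketch, assuming the standard 10-field RTTM layout, that parses `SPEAKER` lines into the `[start_ms, end_ms, label]` triples used by the mapping helpers above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def parse_rttm_lines(lines):\n",
    "    \"\"\"Parse standard RTTM 'SPEAKER' lines into [start_ms, end_ms, label] triples.\n",
    "    Fields: type file channel start duration <NA> <NA> speaker <NA> <NA>.\"\"\"\n",
    "    speaker_ts = []\n",
    "    for line in lines:\n",
    "        parts = line.strip().split()\n",
    "        if not parts or parts[0] != \"SPEAKER\":\n",
    "            continue\n",
    "        start_ms = int(round(float(parts[3]) * 1000))\n",
    "        duration_ms = int(round(float(parts[4]) * 1000))\n",
    "        speaker_ts.append([start_ms, start_ms + duration_ms, parts[7]])\n",
    "    return speaker_ts\n",
    "\n",
    "\n",
    "# Hypothetical RTTM lines of the kind the diarizer produces\n",
    "sample_rttm = [\n",
    "    \"SPEAKER mono_file 1 0.50 2.25 <NA> <NA> speaker_0 <NA> <NA>\",\n",
    "    \"SPEAKER mono_file 1 2.90 1.10 <NA> <NA> speaker_1 <NA> <NA>\",\n",
    "]\n",
    "print(parse_rttm_lines(sample_rttm))  # [[500, 2750, 'speaker_0'], [2900, 4000, 'speaker_1']]"
   ]
  },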
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "lw22eZmhFhtb",
    "outputId": "7f56b0ce-2e1b-4a03-b779-aebff32bff43"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:30:34 nemo_logging:393] Loading pretrained diar_msdd_telephonic model from NGC\n",
      "[NeMo I 2025-02-23 11:30:34 nemo_logging:393] Downloading from: https://api.ngc.nvidia.com/v2/models/nvidia/nemo/diar_msdd_telephonic/versions/1.0.1/files/diar_msdd_telephonic.nemo to /root/.cache/torch/NeMo/NeMo_2.2.0rc2/diar_msdd_telephonic/3c3697a0a46f945574fa407149975a13/diar_msdd_telephonic.nemo\n",
      "[NeMo I 2025-02-23 11:30:35 nemo_logging:393] Instantiating model from pre-trained checkpoint\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[NeMo W 2025-02-23 11:30:37 nemo_logging:405] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.\n",
      "    Train config : \n",
      "    manifest_filepath: null\n",
      "    emb_dir: null\n",
      "    sample_rate: 16000\n",
      "    num_spks: 2\n",
      "    soft_label_thres: 0.5\n",
      "    labels: null\n",
      "    batch_size: 15\n",
      "    emb_batch_size: 0\n",
      "    shuffle: true\n",
      "    \n",
      "[NeMo W 2025-02-23 11:30:37 nemo_logging:405] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s). \n",
      "    Validation config : \n",
      "    manifest_filepath: null\n",
      "    emb_dir: null\n",
      "    sample_rate: 16000\n",
      "    num_spks: 2\n",
      "    soft_label_thres: 0.5\n",
      "    labels: null\n",
      "    batch_size: 15\n",
      "    emb_batch_size: 0\n",
      "    shuffle: false\n",
      "    \n",
      "[NeMo W 2025-02-23 11:30:37 nemo_logging:405] Please call the ModelPT.setup_test_data() or ModelPT.setup_multiple_test_data() method and provide a valid configuration file to setup the test data loader(s).\n",
      "    Test config : \n",
      "    manifest_filepath: null\n",
      "    emb_dir: null\n",
      "    sample_rate: 16000\n",
      "    num_spks: 2\n",
      "    soft_label_thres: 0.5\n",
      "    labels: null\n",
      "    batch_size: 15\n",
      "    emb_batch_size: 0\n",
      "    shuffle: false\n",
      "    seq_eval_mode: false\n",
      "    \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:30:37 nemo_logging:393] PADDING: 16\n",
      "[NeMo I 2025-02-23 11:30:37 nemo_logging:393] PADDING: 16\n",
      "[NeMo I 2025-02-23 11:30:38 nemo_logging:393] Model EncDecDiarLabelModel was successfully restored from /root/.cache/torch/NeMo/NeMo_2.2.0rc2/diar_msdd_telephonic/3c3697a0a46f945574fa407149975a13/diar_msdd_telephonic.nemo.\n",
      "[NeMo I 2025-02-23 11:30:38 nemo_logging:393] PADDING: 16\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Loading pretrained vad_multilingual_marblenet model from NGC\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Downloading from: https://api.ngc.nvidia.com/v2/models/nvidia/nemo/vad_multilingual_marblenet/versions/1.10.0/files/vad_multilingual_marblenet.nemo to /root/.cache/torch/NeMo/NeMo_2.2.0rc2/vad_multilingual_marblenet/670f425c7f186060b7a7268ba6dfacb2/vad_multilingual_marblenet.nemo\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Instantiating model from pre-trained checkpoint\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[NeMo W 2025-02-23 11:30:39 nemo_logging:405] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.\n",
      "    Train config : \n",
      "    manifest_filepath: /manifests/ami_train_0.63.json,/manifests/freesound_background_train.json,/manifests/freesound_laughter_train.json,/manifests/fisher_2004_background.json,/manifests/fisher_2004_speech_sampled.json,/manifests/google_train_manifest.json,/manifests/icsi_all_0.63.json,/manifests/musan_freesound_train.json,/manifests/musan_music_train.json,/manifests/musan_soundbible_train.json,/manifests/mandarin_train_sample.json,/manifests/german_train_sample.json,/manifests/spanish_train_sample.json,/manifests/french_train_sample.json,/manifests/russian_train_sample.json\n",
      "    sample_rate: 16000\n",
      "    labels:\n",
      "    - background\n",
      "    - speech\n",
      "    batch_size: 256\n",
      "    shuffle: true\n",
      "    is_tarred: false\n",
      "    tarred_audio_filepaths: null\n",
      "    tarred_shard_strategy: scatter\n",
      "    augmentor:\n",
      "      shift:\n",
      "        prob: 0.5\n",
      "        min_shift_ms: -10.0\n",
      "        max_shift_ms: 10.0\n",
      "      white_noise:\n",
      "        prob: 0.5\n",
      "        min_level: -90\n",
      "        max_level: -46\n",
      "        norm: true\n",
      "      noise:\n",
      "        prob: 0.5\n",
      "        manifest_path: /manifests/noise_0_1_musan_fs.json\n",
      "        min_snr_db: 0\n",
      "        max_snr_db: 30\n",
      "        max_gain_db: 300.0\n",
      "        norm: true\n",
      "      gain:\n",
      "        prob: 0.5\n",
      "        min_gain_dbfs: -10.0\n",
      "        max_gain_dbfs: 10.0\n",
      "        norm: true\n",
      "    num_workers: 16\n",
      "    pin_memory: true\n",
      "    \n",
      "[NeMo W 2025-02-23 11:30:39 nemo_logging:405] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s). \n",
      "    Validation config : \n",
      "    manifest_filepath: /manifests/ami_dev_0.63.json,/manifests/freesound_background_dev.json,/manifests/freesound_laughter_dev.json,/manifests/ch120_moved_0.63.json,/manifests/fisher_2005_500_speech_sampled.json,/manifests/google_dev_manifest.json,/manifests/musan_music_dev.json,/manifests/mandarin_dev.json,/manifests/german_dev.json,/manifests/spanish_dev.json,/manifests/french_dev.json,/manifests/russian_dev.json\n",
      "    sample_rate: 16000\n",
      "    labels:\n",
      "    - background\n",
      "    - speech\n",
      "    batch_size: 256\n",
      "    shuffle: false\n",
      "    val_loss_idx: 0\n",
      "    num_workers: 16\n",
      "    pin_memory: true\n",
      "    \n",
      "[NeMo W 2025-02-23 11:30:39 nemo_logging:405] Please call the ModelPT.setup_test_data() or ModelPT.setup_multiple_test_data() method and provide a valid configuration file to setup the test data loader(s).\n",
      "    Test config : \n",
      "    manifest_filepath: null\n",
      "    sample_rate: 16000\n",
      "    labels:\n",
      "    - background\n",
      "    - speech\n",
      "    batch_size: 128\n",
      "    shuffle: false\n",
      "    test_loss_idx: 0\n",
      "    \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] PADDING: 16\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Model EncDecClassificationModel was successfully restored from /root/.cache/torch/NeMo/NeMo_2.2.0rc2/vad_multilingual_marblenet/670f425c7f186060b7a7268ba6dfacb2/vad_multilingual_marblenet.nemo.\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Multiscale Weights: [1, 1, 1, 1, 1]\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Clustering Parameters: {\n",
      "        \"oracle_num_speakers\": false,\n",
      "        \"max_num_speakers\": 8,\n",
      "        \"enhanced_count_thres\": 80,\n",
      "        \"max_rp_threshold\": 0.25,\n",
      "        \"sparse_search_volume\": 30,\n",
      "        \"maj_vote_spk_count\": false,\n",
      "        \"chunk_cluster_count\": 50,\n",
      "        \"embeddings_per_chunk\": 10000\n",
      "    }\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Number of files to diarize: 1\n",
      "[NeMo I 2025-02-23 11:30:39 nemo_logging:393] Split long audio file to avoid CUDA memory issue\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "splitting manifest: 100%|██████████| 1/1 [00:21<00:00, 21.26s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:00 nemo_logging:393] Perform streaming frame-level VAD\n",
      "[NeMo I 2025-02-23 11:31:00 nemo_logging:393] Filtered duration for loading collection is  0.00 hours.\n",
      "[NeMo I 2025-02-23 11:31:00 nemo_logging:393] Dataset successfully loaded with 1 items and total duration provided from manifest is  0.01 hours.\n",
      "[NeMo I 2025-02-23 11:31:00 nemo_logging:393] # 1 files loaded accounting to # 1 labels\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "vad: 100%|██████████| 1/1 [00:02<00:00,  2.22s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:03 nemo_logging:393] Generating predictions with overlapping input segments\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "                                                               "
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:03 nemo_logging:393] Converting frame level prediction to speech/no-speech segment in start and end times format.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "creating speech segments: 100%|██████████| 1/1 [00:00<00:00,  7.23it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Subsegmentation for embedding extraction: scale0, temp_outputs/speaker_outputs/subsegments_scale0.json\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Extracting embeddings for Diarization\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Filtered duration for loading collection is  0.00 hours.\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Dataset successfully loaded with 26 items and total duration provided from manifest is  0.01 hours.\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] # 26 files loaded accounting to # 1 labels\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[1/5] extract embeddings: 100%|██████████| 1/1 [00:00<00:00,  2.24it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Saved embedding files to temp_outputs/speaker_outputs/embeddings\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Subsegmentation for embedding extraction: scale1, temp_outputs/speaker_outputs/subsegments_scale1.json\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Extracting embeddings for Diarization\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Filtered duration for loading collection is  0.00 hours.\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Dataset successfully loaded with 33 items and total duration provided from manifest is  0.01 hours.\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] # 33 files loaded accounting to # 1 labels\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[2/5] extract embeddings: 100%|██████████| 1/1 [00:00<00:00,  4.65it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Saved embedding files to temp_outputs/speaker_outputs/embeddings\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Subsegmentation for embedding extraction: scale2, temp_outputs/speaker_outputs/subsegments_scale2.json\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Extracting embeddings for Diarization\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Filtered duration for loading collection is  0.00 hours.\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] Dataset successfully loaded with 38 items and total duration provided from manifest is  0.01 hours.\n",
      "[NeMo I 2025-02-23 11:31:04 nemo_logging:393] # 38 files loaded accounting to # 1 labels\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[3/5] extract embeddings: 100%|██████████| 1/1 [00:00<00:00,  5.05it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Saved embedding files to temp_outputs/speaker_outputs/embeddings\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Subsegmentation for embedding extraction: scale3, temp_outputs/speaker_outputs/subsegments_scale3.json\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Extracting embeddings for Diarization\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Filtered duration for loading collection is  0.00 hours.\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Dataset successfully loaded with 55 items and total duration provided from manifest is  0.01 hours.\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] # 55 files loaded accounting to # 1 labels\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[4/5] extract embeddings: 100%|██████████| 1/1 [00:00<00:00,  6.62it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Saved embedding files to temp_outputs/speaker_outputs/embeddings\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Subsegmentation for embedding extraction: scale4, temp_outputs/speaker_outputs/subsegments_scale4.json\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Extracting embeddings for Diarization\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Filtered duration for loading collection is  0.00 hours.\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Dataset successfully loaded with 83 items and total duration provided from manifest is  0.01 hours.\n",
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] # 83 files loaded accounting to # 1 labels\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[5/5] extract embeddings: 100%|██████████| 2/2 [00:00<00:00,  7.86it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:05 nemo_logging:393] Saved embedding files to temp_outputs/speaker_outputs/embeddings\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "clustering: 100%|██████████| 1/1 [00:00<00:00,  1.32it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Outputs are saved in /content/temp_outputs directory\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[NeMo W 2025-02-23 11:31:06 nemo_logging:405] Check if each ground truth RTTMs were present in the provided manifest file. Skipping calculation of Diariazation Error Rate\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Loading embedding pickle file of scale:0 at temp_outputs/speaker_outputs/embeddings/subsegments_scale0_embeddings.pkl\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Loading embedding pickle file of scale:1 at temp_outputs/speaker_outputs/embeddings/subsegments_scale1_embeddings.pkl\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Loading embedding pickle file of scale:2 at temp_outputs/speaker_outputs/embeddings/subsegments_scale2_embeddings.pkl\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Loading embedding pickle file of scale:3 at temp_outputs/speaker_outputs/embeddings/subsegments_scale3_embeddings.pkl\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Loading embedding pickle file of scale:4 at temp_outputs/speaker_outputs/embeddings/subsegments_scale4_embeddings.pkl\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Loading cluster label file from temp_outputs/speaker_outputs/subsegments_scale4_cluster.label\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Filtered duration for loading collection is 0.000000.\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Total 1 session files loaded accounting to # 1 audio clips\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 1/1 [00:00<00:00,  6.43it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393]      [Threshold: 0.7000] [use_clus_as_main=False] [diar_window=50]\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Number of files to diarize: 1\n",
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Number of files to diarize: 1\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[NeMo W 2025-02-23 11:31:06 nemo_logging:405] Check if each ground truth RTTMs were present in the provided manifest file. Skipping calculation of Diariazation Error Rate\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Number of files to diarize: 1\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[NeMo W 2025-02-23 11:31:06 nemo_logging:405] Check if each ground truth RTTMs were present in the provided manifest file. Skipping calculation of Diariazation Error Rate\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393] Number of files to diarize: 1\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[NeMo W 2025-02-23 11:31:06 nemo_logging:405] Check if each ground truth RTTMs were present in the provided manifest file. Skipping calculation of Diariazation Error Rate\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2025-02-23 11:31:06 nemo_logging:393]   \n",
      "    \n"
     ]
    }
   ],
   "source": [
    "# Initialize NeMo MSDD diarization model. #Once you run this code, you'll get your RTTM file in the temp_outputs folder.\n",
    "# It is a time taking step and can be speed up with better infra if you are taking this for production. Or you can chose to use APIs for this.\n",
    "\n",
    "temp_path = \"temp_outputs\"\n",
    "msdd_model = NeuralDiarizer(cfg=create_config(temp_path))  # .to(\"cuda\")\n",
    "msdd_model.diarize()\n",
    "\n",
    "del msdd_model\n",
    "torch.cuda.empty_cache()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "c8f-oe_mG3UB"
   },
   "source": [
    "Extracting first 10 or more seconds for each speaker using RTTM file created above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Op67P0tcFyk6",
    "outputId": "47ea4777-a8da-4ef0-a600-1fec1a76bf75"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Saved extracted_speakers_audio\\speaker_0_first_10s.wav\n"
     ]
    }
   ],
   "source": [
    "# You can pass this information in the following code to extract 10(or more) seconds audio for each speaker to identify and query from your database.\n",
    "# You can decide whether you want to save this audio file and perform this step of speaker mapping separately after creating the complete timestamped transcription and replace speakers in text file or you want to replace it in the RTTM file itself and then create the complete transcription. Both works.\n",
    "\n",
    "from pydub import AudioSegment\n",
    "\n",
    "# Path to input audio\n",
    "audio_path = audio_path\n",
    "\n",
    "# Load audio file\n",
    "audio = AudioSegment.from_file(audio_path)\n",
    "\n",
    "# Read RTTM file and extract timestamps\n",
    "rttm_file = \"/content/temp_outputs/pred_rttms/mono_file.rttm\"\n",
    "\n",
    "speaker_segments = {}\n",
    "\n",
    "with open(rttm_file, \"r\") as f:\n",
    "    for line in f:\n",
    "        parts = line.strip().split()\n",
    "        if len(parts) >= 8:\n",
    "            speaker = parts[7]  # Speaker ID (e.g., spk_0)\n",
    "            start_time = float(parts[3]) * 1000  # Convert sec → milliseconds\n",
    "            duration = float(parts[4]) * 1000  # Convert sec → milliseconds\n",
    "            end_time = start_time + duration\n",
    "\n",
    "            # Store segments for each speaker\n",
    "            if speaker not in speaker_segments:\n",
    "                speaker_segments[speaker] = []\n",
    "\n",
    "            speaker_segments[speaker].append((start_time, end_time))\n",
    "\n",
    "# Process first 10 seconds for each speaker\n",
    "for speaker, segments in speaker_segments.items():\n",
    "    speaker_audio = AudioSegment.silent(duration=0)  # Empty audio segment\n",
    "    total_duration = 0\n",
    "\n",
    "    for start_time, end_time in segments:\n",
    "        segment_duration = min(\n",
    "            end_time - start_time, 10_000 - total_duration\n",
    "        )  # Limit to 10 sec. Modify it as per your need.\n",
    "        speaker_audio += audio[start_time : start_time + segment_duration]\n",
    "        total_duration += segment_duration\n",
    "        if total_duration >= 10_000:  # Stop at 10 seconds\n",
    "            break\n",
    "\n",
    "    # Save speaker's first 10 seconds\n",
    "    if total_duration > 0:\n",
    "        output_filename = f\"extracted_speakers_audio\\{speaker}_first_10s.wav\"\n",
    "        speaker_audio.export(output_filename, format=\"wav\")\n",
    "        print(f\"Saved {output_filename}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LrVOj0VlHP7i"
   },
   "source": [
    "You can decide how to store these audio files for querying from the vector database. A good practice would be to create a folder and save the audio files with speaker IDs in the filenames. This way, you can easily use this information while mapping. You'll need to create a mapping of each speaker with their correct names (say dictionary for now) to use the next part of the code."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XEUoQL039LQX"
   },
   "source": [
    "##### Querying audio from the Lancedb vector database"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "8yL_KYHWBr6O",
    "outputId": "6c6ecff7-3afe-4ade-c524-16f23357d30c"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Identified Speaker: Arjun, Similarity Score: 1.0\n"
     ]
    }
   ],
   "source": [
    "# Assuming you have your audio file created in the extracted_speakers_audio folder, now you can use the following code to get correct speaker names.\n",
    "\n",
    "# Given a new speaker audio sample. You need to do this activity in loop to map each speaker with their names.\n",
    "query_embedding = get_embedding(\n",
    "    \"extracted_speakers_audio\\speaker_0_first_10s.wav\"\n",
    ")  # assuming it is extracted audio from the full audio input/recording. pass the correct path to the folder.\n",
    "\n",
    "# Search in LanceDB and retrieve similarity scores\n",
    "results = table.search(query_embedding).metric(\"cosine\").limit(1).to_pandas()\n",
    "\n",
    "# Get the closest match and its similarity score\n",
    "if not results.empty:\n",
    "    identified_speaker = results.iloc[0][\"name\"]\n",
    "    similarity_score = 1 - results.iloc[0][\"_distance\"]  # Lower distance = better match\n",
    "    if similarity_score < 0.5:\n",
    "        identified_speaker = \"Unknown\"\n",
    "        print(\n",
    "            identified_speaker,\n",
    "            \"Speaker not found. Similarity score in current dataset - \",\n",
    "            similarity_score,\n",
    "        )\n",
    "\n",
    "    else:\n",
    "        print(\n",
    "            f\"Identified Speaker: {identified_speaker}, Similarity Score: {similarity_score}\"\n",
    "        )\n",
    "\n",
    "# the above code works for querying audio from the known speakers database.\n",
    "# Once checked, you need to create a dictionary to map speakers with their correct names in the file."
   ]
  },
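  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The query above handles a single extracted clip. Below is a minimal sketch of the loop version that builds the `speaker_mapping` dictionary, assuming the clips follow the `{speaker}_first_10s.wav` naming used earlier and that `get_embedding` and `table` are already defined. Adjust the folder path and the 0.5 threshold to your setup."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: build speaker_mapping by querying each extracted clip against LanceDB.\n",
    "import glob\n",
    "import os\n",
    "\n",
    "speaker_mapping = {}\n",
    "for wav_path in sorted(glob.glob(\"extracted_speakers_audio/*_first_10s.wav\")):\n",
    "    # Recover the diarizer label (e.g., speaker_0) from the filename.\n",
    "    speaker_id = os.path.basename(wav_path).replace(\"_first_10s.wav\", \"\")\n",
    "    emb = get_embedding(wav_path)\n",
    "    res = table.search(emb).metric(\"cosine\").limit(1).to_pandas()\n",
    "    if not res.empty and (1 - res.iloc[0][\"_distance\"]) >= 0.5:\n",
    "        speaker_mapping[speaker_id] = res.iloc[0][\"name\"]\n",
    "    else:\n",
    "        speaker_mapping[speaker_id] = \"Unknown\"\n",
    "\n",
    "print(speaker_mapping)"
   ]
  },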
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "IdEpf56KlF0t"
   },
   "source": [
    "##### Replace Speakers with their Correct Names"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "IodOffikk00T",
    "outputId": "a59d38c0-9a12-4c1a-e6bc-cdebca41fd65"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Modified RTTM file saved successfully!\n"
     ]
    }
   ],
   "source": [
    "# assuming you have used above code in loop to create mapping like this after querying from the database.\n",
    "# You can also hose to create transcription as it is and then replace speakers in final transcript created.\n",
    "\n",
    "# Define the speaker mapping\n",
    "speaker_mapping = {\n",
    "    \"speaker_0\": \"Shresth\",\n",
    "    \"speaker_1\": \"Arjun\",\n",
    "    \"speaker_2\": \"Hamdeep\",\n",
    "    # Add more mappings as needed\n",
    "}\n",
    "\n",
    "# Load the RTTM file\n",
    "rttm_file_path = \"/content/temp_outputs/pred_rttms/mono_file.rttm\"\n",
    "output_file_path = \"/content/temp_outputs/pred_rttms/updated_mono_file.rttm\"\n",
    "\n",
    "# Read and modify the file line by line\n",
    "with open(rttm_file_path, \"r\") as file:\n",
    "    lines = file.readlines()\n",
    "\n",
    "# Replace speaker labels while preserving spacing\n",
    "with open(output_file_path, \"w\") as file:\n",
    "    for line in lines:\n",
    "        parts = line.strip().split()  # Split by whitespace\n",
    "        if (\n",
    "            len(parts) > 7 and parts[7] in speaker_mapping\n",
    "        ):  # Check if column 8 (index 7) is in mapping\n",
    "            parts[7] = speaker_mapping[parts[7]]  # Replace speaker label\n",
    "        file.write(\" \".join(parts) + \"\\n\")  # Preserve original spacing\n",
    "\n",
    "print(\"Modified RTTM file saved successfully!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "9EPar5vimKr1",
    "outputId": "7b7289cf-95e4-472d-b5b5-d3c01e5dccdb"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Original RTTM File\n",
      "SPEAKER mono_file 1   0.060   1.900 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   2.380   0.300 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   2.940   1.580 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   5.020   1.020 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   6.300   0.780 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   7.500   2.940 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   10.940   2.780 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   14.060   0.940 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   16.140   1.900 <NA> <NA> speaker_0 <NA> <NA>\n",
      "SPEAKER mono_file 1   18.300   0.620 <NA> <NA> speaker_0 <NA> <NA>\n",
      "\n",
      "\n",
      "\n",
      "Updated RTTM File\n",
      "SPEAKER mono_file 1 0.060 1.900 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 2.380 0.300 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 2.940 1.580 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 5.020 1.020 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 6.300 0.780 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 7.500 2.940 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 10.940 2.780 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 14.060 0.940 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 16.140 1.900 <NA> <NA> Shresth <NA> <NA>\n",
      "SPEAKER mono_file 1 18.300 0.620 <NA> <NA> Shresth <NA> <NA>\n"
     ]
    }
   ],
   "source": [
    "# let's check if the file has been updated properly.\n",
    "\n",
    "# Define the path to your RTTM file\n",
    "# Read and print the file content\n",
    "print(\"Original RTTM File\")\n",
    "with open(rttm_file_path, \"r\") as file:\n",
    "    content = file.readlines()\n",
    "\n",
    "# Display the first few lines\n",
    "for line in content[:10]:  # Display only first 10 lines\n",
    "    print(line.strip())\n",
    "\n",
    "print(\"\\n\\n\")\n",
    "print(\"Updated RTTM File\")\n",
    "with open(output_file_path, \"r\") as file:\n",
    "    content = file.readlines()\n",
    "\n",
    "# Display the first few lines\n",
    "for line in content[:10]:  # Display only first 10 lines\n",
    "    print(line.strip())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Gdus278cncAq"
   },
   "source": [
    "#### Next Steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "L6ABnsZPnfJf"
   },
   "source": [
    "I think you now have a clear idea of how to proceed with this. Once you obtain the updated RTTM file, you need to map speakers to sentences based on their timestamps. Additionally, you need to create a word-level speaker mapping. For this step, you can refer to the reference notebook shared, where you'll find the code for forced alignment to map timestamps with words and finally write the results into an SRT/TXT file. All the best!"
   ]
  }
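,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a starting point for the sentence-level mapping described above, here is a minimal sketch that assigns each transcript segment to the speaker whose RTTM turns overlap it the most. It assumes Whisper-style segments with `start`/`end` times in seconds (`whisper_segments` is a hypothetical variable name); word-level alignment is covered in the reference notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def load_rttm_turns(rttm_path):\n",
    "    # Parse (start, end, speaker) turns from an RTTM file.\n",
    "    turns = []\n",
    "    with open(rttm_path) as f:\n",
    "        for line in f:\n",
    "            parts = line.strip().split()\n",
    "            if len(parts) >= 8:\n",
    "                start = float(parts[3])\n",
    "                turns.append((start, start + float(parts[4]), parts[7]))\n",
    "    return turns\n",
    "\n",
    "\n",
    "def speaker_for_segment(turns, seg_start, seg_end):\n",
    "    # Pick the speaker whose turns overlap the segment the most.\n",
    "    overlap = {}\n",
    "    for t_start, t_end, spk in turns:\n",
    "        dur = min(seg_end, t_end) - max(seg_start, t_start)\n",
    "        if dur > 0:\n",
    "            overlap[spk] = overlap.get(spk, 0.0) + dur\n",
    "    return max(overlap, key=overlap.get) if overlap else \"Unknown\"\n",
    "\n",
    "\n",
    "# Usage sketch (whisper_segments is a hypothetical list of dicts):\n",
    "# turns = load_rttm_turns(output_file_path)\n",
    "# for seg in whisper_segments:\n",
    "#     print(speaker_for_segment(turns, seg[\"start\"], seg[\"end\"]), seg[\"text\"])"
   ]
  }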
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
