{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n",
    "\n",
    "Instructions for setting up Colab are as follows:\n",
    "1. Open a new Python 3 notebook.\n",
    "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n",
    "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n",
    "4. Run this cell to set up dependencies.\n",
    "\"\"\"\n",
    "# If you're using Google Colab and not running locally, run this cell.\n",
    "\n",
    "## Install dependencies\n",
    "!pip install wget\n",
    "!apt-get install sox libsndfile1 ffmpeg\n",
    "!pip install text-unidecode\n",
    "!pip install ipython\n",
    "\n",
    "## Install NeMo\n",
    "BRANCH = 'main'\n",
    "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[asr]\n",
    "\n",
    "## Install TorchAudio\n",
    "!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Streaming End-to-End Speaker Diarization \n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Streaming Diarization Inference with Sortformer\n",
    "\n",
    "As explained in the [Sortformer Diarization Training](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Diarization_Training.ipynb) tutorial, Sortformer is trained with Sort-Loss to generate speaker segments in arrival-time order. If a diarization model generates speaker segments in a pre-defined order, permutation matching becomes unnecessary in two settings: when training the diarization model jointly with multi-speaker automatic speech recognition (ASR) models, and when running the diarization model in streaming mode, where audio chunk sequences are processed and the outputs of consecutive inference windows would otherwise need to be permutation-matched.\n",
    "\n",
    "### Arrival-Order Speaker Cache\n",
    "\n",
    "We propose the [Arrival-Order Speaker Cache (AOSC)](https://arxiv.org/pdf/2507.18446), which stores frame-level embeddings from the pre-encode NEST module. Unlike [speaker-tracing buffer](https://arxiv.org/pdf/2006.02616) in the previous [EEND-based online systems](https://arxiv.org/pdf/2101.08473), AOSC organizes embeddings by speaker index in arrival-time order. Combined with Sortformer's built-in arrival-ordering mechanism, this automatically resolves between-chunk permutations."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"images/cache_fifo_chunk.png\" alt=\"Cache, FIFO and Chunk\" style=\"width: 800px;\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Speaker cache updates in streaming Sortformer \n",
    "\n",
    "Short-chunk processing often reduces accuracy due to limited context. To address this, we combine AOSC with a FIFO queue, enhancing context utilization and enabling less frequent AOSC updates (rather than per-chunk), improving robustness and efficiency. As shown in the above figure, the system includes a speaker cache, FIFO queue, and input buffer (holding the current chunk and future context). Frames pushed out of the queue are processed by the speaker cache update mechanism."
   ]
  },
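  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following toy sketch (plain Python, not the actual NeMo implementation) illustrates the FIFO queue behavior described above: frames from each incoming chunk are appended to the queue, and once the queue exceeds its maximum length, the oldest frames are popped and handed to the speaker cache update mechanism. The chunk contents and queue length below are arbitrary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import deque\n",
    "\n",
    "def stream_frames(chunks, fifo_len):\n",
    "    # Toy sketch: every frame enters the FIFO queue; frames pushed out of\n",
    "    # the full queue are collected for the speaker cache update mechanism.\n",
    "    fifo, evicted = deque(), []\n",
    "    for chunk in chunks:\n",
    "        for frame in chunk:\n",
    "            fifo.append(frame)\n",
    "            if len(fifo) > fifo_len:\n",
    "                evicted.append(fifo.popleft())\n",
    "    return list(fifo), evicted\n",
    "\n",
    "# Three chunks of three frames each, with a queue of at most 4 frames\n",
    "fifo, evicted = stream_frames([[0, 1, 2], [3, 4, 5], [6, 7, 8]], fifo_len=4)\n",
    "print(fifo, evicted)"
   ]
  },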
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"images/streaming_steps.png\" alt=\"Streaming steps\" style=\"width: 1200px;\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The AOSC update acts as a no-op function if the input sequence is shorter than the maximum cache length `spkcache_len`. For longer sequences, it compresses the input to `spkcache_len` frames by keeping only the highest-scoring embeddings based on the model's frame-level predictions. You can find more details about the speaker cache update mechanism in the [Streaming Sortformer](https://arxiv.org/pdf/2507.18446) paper."
   ]
  },
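  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The compression step can be sketched in plain Python (a simplified illustration, not the NeMo implementation): if the sequence already fits in the cache, the update is a no-op; otherwise, only the `spkcache_len` highest-scoring frames are kept, in arrival-time order. The scores below are arbitrary stand-ins for the model's frame-level predictions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compress_speaker_cache(frames, scores, spkcache_len):\n",
    "    # No-op if the sequence already fits in the speaker cache\n",
    "    if len(frames) <= spkcache_len:\n",
    "        return frames\n",
    "    # Indices of the spkcache_len highest-scoring frames...\n",
    "    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:spkcache_len]\n",
    "    # ...kept in their original arrival-time order\n",
    "    return [frames[i] for i in sorted(top)]\n",
    "\n",
    "frames = list(range(10))  # 10 frame-level embeddings, simplified to ints\n",
    "scores = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5, 0.0]\n",
    "compressed = compress_speaker_cache(frames, scores, spkcache_len=4)\n",
    "print(compressed)"
   ]
  },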
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### A toy example for diarization with a streaming Sortformer model \n",
    "\n",
    "Download a toy example audio file (`an4_diarize_test.wav`) and its ground-truth label file (`an4_diarize_test.rttm`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import wget\n",
    "ROOT = os.getcwd()\n",
    "data_dir = os.path.join(ROOT,'data')\n",
    "os.makedirs(data_dir, exist_ok=True)\n",
    "an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')\n",
    "an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')\n",
    "if not os.path.exists(an4_audio):\n",
    "    an4_audio_url = \"https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav\"\n",
    "    an4_audio = wget.download(an4_audio_url, data_dir)\n",
    "if not os.path.exists(an4_rttm):\n",
    "    an4_rttm_url = \"https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm\"\n",
    "    an4_rttm = wget.download(an4_rttm_url, data_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's plot the waveform and listen to the audio. You'll notice that there are two speakers in the audio clip."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import IPython\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import librosa\n",
    "\n",
    "sr = 16000\n",
    "signal, sr = librosa.load(an4_audio,sr=sr) \n",
    "\n",
    "fig,ax = plt.subplots(1,1)\n",
    "fig.set_figwidth(20)\n",
    "fig.set_figheight(2)\n",
    "plt.plot(np.arange(len(signal)),signal,'gray')\n",
    "fig.suptitle('Reference merged an4 audio', fontsize=16)\n",
    "plt.xlabel('time (secs)', fontsize=18)\n",
    "ax.margins(x=0)\n",
    "plt.ylabel('signal strength', fontsize=16)\n",
    "a,_ = plt.xticks();plt.xticks(a,a/sr)\n",
    "\n",
    "IPython.display.Audio(signal, rate=sr)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have downloaded an example audio file containing multiple speakers, we can feed the audio clip into the Sortformer diarizer and get speaker diarization results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Download Sortformer diarizer model\n",
    "\n",
    "To download the streaming Sortformer diarizer from the [HuggingFace model card](https://huggingface.co/nvidia), you need to obtain a [HuggingFace Access Token](https://huggingface.co/docs/hub/en/security-tokens) and provide it in your Python environment using the [HuggingFace CLI](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli).\n",
    "\n",
    "If you have trouble obtaining a HuggingFace token, you can manually download the model's \".nemo\" file from the [Streaming Sortformer HuggingFace model card](https://huggingface.co/nvidia) and specify the path to the downloaded file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nemo.collections.asr.models import SortformerEncLabelModel\n",
    "from huggingface_hub import get_token as get_hf_token\n",
    "import torch\n",
    "\n",
    "if get_hf_token() is not None and get_hf_token().startswith(\"hf_\"):\n",
    "    # If you have logged into HuggingFace hub and have access token \n",
    "    diar_model = SortformerEncLabelModel.from_pretrained(\"nvidia/diar_streaming_sortformer_4spk-v2\")\n",
    "else:\n",
    "    # You can download the \".nemo\" file from https://huggingface.co/nvidia/diar_streaming_sortformer_4spk-v2 and specify its path.\n",
    "    diar_model = SortformerEncLabelModel.restore_from(restore_path=\"/path/to/diar_streaming_sortformer_4spk-v2.nemo\", map_location=torch.device('cuda'), strict=False)\n",
    "diar_model.eval()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Diarization output display function\n",
    "\n",
    "To visualize the streaming diarization output, we use the same display function as in the offline Sortformer diarization tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "\n",
    "def plot_diarout(preds):\n",
    "    preds_mat = preds.cpu().numpy().transpose()\n",
    "    cmap_str, grid_color_p= 'viridis', 'gray'\n",
    "    LW, FS = 0.4, 36\n",
    "\n",
    "    yticklabels = [\"spk0\", \"spk1\", \"spk2\", \"spk3\"]\n",
    "    yticks = np.arange(len(yticklabels))\n",
    "    fig, axs = plt.subplots(1, 1, figsize=(30, 3)) \n",
    "\n",
    "    axs.imshow(preds_mat, cmap=cmap_str, interpolation='nearest') \n",
    "    axs.set_title('Predictions', fontsize=FS)\n",
    "    axs.set_xticks(np.arange(-.5, preds_mat.shape[1], 1), minor=True)\n",
    "    axs.set_yticks(yticks)\n",
    "    axs.set_yticklabels(yticklabels)\n",
    "    axs.set_xlabel(\"80 ms Frames\", fontsize=FS)\n",
    "    axs.grid(which='minor', color=grid_color_p, linestyle='-', linewidth=LW)\n",
    "\n",
    "    plt.savefig('plot.png', dpi=300) \n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Running Streaming Sortformer diarizer\n",
    "\n",
    "### Parameter configuration for Streaming Sortformer \n",
    "\n",
    "Now it's time to set up the model with streaming parameters (all measured in 80 ms frames).\n",
    "\n",
    "- `chunk_len`: the number of frames in a processing chunk.\n",
    "- `chunk_right_context`: the number of future-context frames attached after the chunk.\n",
    "- `fifo_len`: the number of previous frames attached before the chunk, taken from the FIFO queue.\n",
    "- `spkcache_update_period`: the number of frames extracted from the FIFO queue to update the speaker cache.\n",
    "- `spkcache_len`: the total number of frames in the speaker cache.\n",
    "\n",
    "Note that the input buffer latency is determined by `chunk_len` + `chunk_right_context`.\n",
    "\n",
    "The following restrictions apply to the streaming Sortformer parameters:\n",
    "- All streaming parameters must be non-negative integers.\n",
    "- `chunk_len` and `spkcache_update_period` must be greater than 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "import math\n",
    "import torch\n",
    "import torch.amp\n",
    "from tqdm import tqdm \n",
    "\n",
    "# If cuda is available, assign the model to cuda\n",
    "if torch.cuda.is_available():\n",
    "    diar_model.to(torch.device(\"cuda\"))\n",
    "\n",
    "global autocast\n",
    "autocast = torch.amp.autocast(diar_model.device.type, enabled=True)\n",
    "\n",
    "# Set the streaming parameters corresponding to 1.04s latency setup. This will affect the streaming feat loader.\n",
    "diar_model.sortformer_modules.chunk_len = 6\n",
    "diar_model.sortformer_modules.spkcache_len = 188\n",
    "diar_model.sortformer_modules.chunk_right_context = 7\n",
    "diar_model.sortformer_modules.fifo_len = 188\n",
    "diar_model.sortformer_modules.spkcache_update_period = 144\n",
    "diar_model.sortformer_modules.log = False\n",
    "\n",
    "# Validate that the streaming parameters are set correctly\n",
    "diar_model.sortformer_modules._check_streaming_parameters()"
   ]
  },
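  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the input buffer latency implied by this configuration can be computed directly from `chunk_len` + `chunk_right_context`, assuming 80 ms output frames:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "frame_seconds = 0.08  # 80 ms per output frame\n",
    "chunk_len = 6\n",
    "chunk_right_context = 7\n",
    "# Input buffer latency = (chunk_len + chunk_right_context) frames\n",
    "latency_seconds = (chunk_len + chunk_right_context) * frame_seconds\n",
    "print(f\"Input buffer latency: {latency_seconds:.2f} s\")"
   ]
  },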
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Feature extraction from an audio file  \n",
    "\n",
    "\n",
    "Now, set up the input audio signal and convert it to log-mel features. Note that we are simulating the streaming scenario. In real life, we won't be able to access the whole utterance at once."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "audio_signal = torch.tensor(signal).unsqueeze(0).to(diar_model.device)\n",
    "audio_signal_length = torch.tensor([audio_signal.shape[1]]).to(diar_model.device)\n",
    "processed_signal, processed_signal_length = diar_model.preprocessor(input_signal=audio_signal, length=audio_signal_length)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Run a streaming loop for streaming diarization\n",
    "\n",
    "The following variables are needed to run the simulated streaming speaker diarization session. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_size = 1\n",
    "processed_signal_offset = torch.zeros((batch_size,), dtype=torch.long, device=diar_model.device)\n",
    "\n",
    "streaming_state = diar_model.sortformer_modules.init_streaming_state(\n",
    "        batch_size = batch_size,\n",
    "        async_streaming = True,\n",
    "        device = diar_model.device\n",
    "    )\n",
    "total_preds = torch.zeros((batch_size, 0, diar_model.sortformer_modules.n_spk), device=diar_model.device)\n",
    "\n",
    "streaming_loader = diar_model.sortformer_modules.streaming_feat_loader(\n",
    "    feat_seq=processed_signal,\n",
    "    feat_seq_length=processed_signal_length,\n",
    "    feat_seq_offset=processed_signal_offset,\n",
    ")\n",
    "\n",
    "num_chunks = math.ceil(\n",
    "    processed_signal.shape[2] / (diar_model.sortformer_modules.chunk_len * diar_model.sortformer_modules.subsampling_factor)\n",
    ")\n",
    "\n",
    "plot_preds = torch.zeros(\n",
    "    (batch_size, num_chunks * diar_model.sortformer_modules.chunk_len, diar_model.sortformer_modules.n_spk), device=diar_model.device\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we are ready to run the streaming diarization steps. Check the output at the end of the streaming for-loop. Note that this simulates streaming input; in a real-life setting, `chunk_feat_seq_t` would be replaced with a real-time microphone audio stream."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# To simulate the real-time streaming, we will sleep for a chunk duration after each step\n",
    "chunk_duration_seconds = diar_model.sortformer_modules.chunk_len * diar_model.sortformer_modules.subsampling_factor * diar_model.preprocessor._cfg.window_stride\n",
    "print(f\"Chunk duration: {chunk_duration_seconds} seconds\")\n",
    "\n",
    "for i, chunk_feat_seq_t, feat_lengths, left_offset, right_offset in tqdm(\n",
    "    streaming_loader,\n",
    "    total=num_chunks,\n",
    "    desc=\"Streaming Steps\",\n",
    "    disable=False,\n",
    "):\n",
    "    loop_start_time = time.time()\n",
    "    with torch.inference_mode():\n",
    "        with autocast:\n",
    "            streaming_state, total_preds = diar_model.forward_streaming_step(\n",
    "                processed_signal=chunk_feat_seq_t,\n",
    "                processed_signal_length=feat_lengths,\n",
    "                streaming_state=streaming_state,\n",
    "                total_preds=total_preds,\n",
    "                left_offset=left_offset,\n",
    "                right_offset=right_offset,\n",
    "            )\n",
    "            # plot the predictions\n",
    "            plot_preds[:,:total_preds.shape[1]] = total_preds\n",
    "            plot_diarout(plot_preds[0,:]) \n",
    "            time.sleep(chunk_duration_seconds)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  },
  "pycharm": {
   "stem_cell": {
    "cell_type": "raw",
    "metadata": {
     "collapsed": false
    },
    "source": []
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
