{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to combine multiple datasets\n",
    "\n",
    "This notebook shows how to leverage Lhotse for combining multiple datasets.\n",
    "\n",
    "We do not perform any data I/O or transforms here for simplicity, but the samplers defined in this tutorial can be used with everything we demonstrate in other tutorials (e.g., see the training dataset in `examples/00-basic-workflow.ipynb`).\n",
    "\n",
    "⚠️ Throughout this notebook, we mostly use `SimpleCutSampler` and `BucketingSampler` with \"eager\" (fully in-memory) `CutSet`s, because we work with very small datasets here. \n",
    "When working with larger datasets, you will usually want to read cuts lazily (e.g., with `CutSet.from_jsonl_lazy()`) and use dynamic samplers (e.g., `DynamicCutSampler` and `DynamicBucketingSampler`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    },
    "pycharm": {
     "is_executing": true,
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# Optional auto-formatting\n",
    "\n",
    "#!pip install nb_black\n",
    "%load_ext lab_black"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get the latest version of Lhotse, if not installed:\n",
    "\n",
    "#!pip install git+https://github.com/lhotse-speech/lhotse"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from pathlib import Path\n",
    "\n",
    "import torch\n",
    "\n",
    "from lhotse import CutSet\n",
    "from lhotse.dataset import (\n",
    "    BucketingSampler,\n",
    "    DynamicBucketingSampler,\n",
    "    DynamicCutSampler,\n",
    "    SimpleCutSampler,\n",
    "    RoundRobinSampler,\n",
    "    ZipSampler,\n",
    ")\n",
    "from lhotse.recipes import (\n",
    "    download_librispeech,\n",
    "    download_yesno,\n",
    "    prepare_librispeech,\n",
    "    prepare_yesno,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "root_dir = Path(\"data\")\n",
    "tmp_dir = Path(\"tmp\")\n",
    "tmp_dir.mkdir(exist_ok=True)\n",
    "num_jobs = os.cpu_count() - 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# (mini) LibriSpeech\n",
    "\n",
    "This dataset contains 5h of training data and 2h of dev data.\n",
    "\n",
     "We're downloading the data, preparing recording/supervision manifests, and compiling them into CutSets. \n",
    "\n",
    "Approx. download size 450MB."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "0e0d6039b0e24536b9fb5936c0a3bf10",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading LibriSpeech parts:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "986ada0b2b814a6da386c79c72f4180e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Dataset parts:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Distributing tasks: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Processing:   0%|          | 0/1089 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Distributing tasks: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Processing:   0%|          | 0/1519 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# libri_variant = \"librispeech\"\n",
    "libri_variant = \"mini_librispeech\"\n",
    "libri_root = download_librispeech(root_dir, dataset_parts=libri_variant)\n",
    "libri = prepare_librispeech(\n",
    "    libri_root, dataset_parts=libri_variant, output_dir=root_dir, num_jobs=num_jobs\n",
    ")\n",
    "cuts_libri_train = CutSet.from_manifests(\n",
    "    **libri[\"train-clean-5\"]\n",
    ").trim_to_supervisions()\n",
    "cuts_libri_dev = CutSet.from_manifests(**libri[\"dev-clean-2\"]).trim_to_supervisions()"
   ]
  },
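  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For corpora much larger than these, you would typically avoid holding the whole manifest in memory. A sketch of the lazy workflow mentioned at the top of this notebook (the file name below is just an illustration):\n",
    "\n",
    "```python\n",
    "# Save the cuts once, then re-open them lazily in the training script;\n",
    "# a lazy CutSet streams cuts from disk instead of keeping them all in RAM.\n",
    "cuts_libri_train.to_file(tmp_dir / \"libri_train.jsonl.gz\")\n",
    "cuts_lazy = CutSet.from_jsonl_lazy(tmp_dir / \"libri_train.jsonl.gz\")\n",
    "sampler = DynamicCutSampler(cuts_lazy, max_duration=100, shuffle=True)\n",
    "```"
   ]
  },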
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Cuts count: 1519\n",
      "Total duration (hours): 5.3\n",
      "Speech duration (hours): 5.3 (100.0%)\n",
      "***\n",
      "Duration statistics (seconds):\n",
      "mean\t12.6\n",
      "std\t3.6\n",
      "min\t1.9\n",
      "25%\t11.3\n",
      "50%\t13.9\n",
      "75%\t15.2\n",
      "99%\t16.6\n",
      "99.5%\t16.7\n",
      "99.9%\t17.1\n",
      "max\t17.3\n"
     ]
    }
   ],
   "source": [
    "cuts_libri_train.describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# YesNo\n",
    "\n",
    "This dataset contains 30 training utterances and 30 dev utterances. \n",
    "It has only two words: yes and no.\n",
    "It is approx. 50x smaller than mini LibriSpeech, resulting in a heavy data imbalance.\n",
    "\n",
     "We're downloading the data, preparing recording/supervision manifests, and compiling them into CutSets. \n",
    "\n",
    "Approx. download size 4.5MB.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "yesno_root = download_yesno(root_dir)\n",
    "yesno = prepare_yesno(yesno_root, output_dir=root_dir)\n",
    "cuts_yesno_train = CutSet.from_manifests(**yesno[\"train\"])\n",
    "cuts_yesno_dev = CutSet.from_manifests(**yesno[\"test\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Cuts count: 30\n",
      "Total duration (hours): 0.1\n",
      "Speech duration (hours): 0.1 (100.0%)\n",
      "***\n",
      "Duration statistics (seconds):\n",
      "mean\t6.0\n",
      "std\t0.4\n",
      "min\t4.9\n",
      "25%\t5.9\n",
      "50%\t6.1\n",
      "75%\t6.2\n",
      "99%\t6.7\n",
      "99.5%\t6.7\n",
      "99.9%\t6.7\n",
      "max\t6.7\n"
     ]
    }
   ],
   "source": [
    "cuts_yesno_train.describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: we can see that YesNo has much shorter utterances than mini LibriSpeech."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Helper code\n",
    "\n",
    "## Mark each cut to see which dataset it comes from"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "for c in cuts_libri_train:\n",
    "    c.origin = \"libri\"\n",
    "for c in cuts_yesno_train:\n",
    "    c.origin = \"yesno\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Identity dataset, just to enable iterating a DataLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "class DummyDataset(torch.utils.data.Dataset):\n",
    "    \"\"\"\n",
    "    Dataset that actually does nothing and just returns the CutSet.\n",
    "    It will help us illustrate iteration over the data using a DataLoader.\n",
    "    \"\"\"\n",
    "\n",
    "    def __getitem__(self, cuts: CutSet) -> CutSet:\n",
    "        return cuts\n",
    "\n",
    "\n",
    "dataset = DummyDataset()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Method 1: simply concatenate datasets\n",
    "\n",
    "This method is the simplest. \n",
    "\n",
    "**When is it good to use it?**\n",
    "\n",
    "✅ You believe that the quantity ratio between the datasets is adequate (in other words, you don't care about data imbalance for any reason).\n",
    "\n",
    "✅ You work with small to medium sized data and do not use lazy manifests: you can shuffle everything in memory which ensures that the examples from the smaller dataset are seen uniformly throughout a training epoch.\n",
    "\n",
    "**When to expect poor performance?**\n",
    "\n",
    "⚠️ You expect the dataset imbalance to create issues for your model's training.\n",
    "\n",
    "⚠️ You work with large datasets and Dynamic* samplers -- you will be shuffling data lazily with a buffer window, and examples from smaller datasets will likely only be seen closer to the start or the end of a training epoch."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "## 1.1 Using SimpleCutSampler\n",
    "\n",
     "SimpleCutSampler shuffles everything and iterates over the cuts without regard for their durations. The shuffling is \"exact\", so you can expect good randomness. We observe that there aren't too many YesNo cuts in the first 20 batches, which reflects the two datasets' relative sizes.\n",
    "\n",
    "Note: unless you specifically prepared the cuts to have similar durations (e.g. for VAD, diarization, speaker ID training), then each mini-batch may contain cuts of various duration, resulting in excessive padding with this type of sampler."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "cuts_train = cuts_yesno_train + cuts_libri_train\n",
    "\n",
    "sampler = SimpleCutSampler(cuts_train, max_duration=100, shuffle=True)\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  1 | 88.3s speech | 50.5s padding |\n",
      "| batch  1 | #libri-cuts  6 | #yesno-cuts  2 | 92.6s speech | 67.4s padding |\n",
      "| batch  2 | #libri-cuts  6 | #yesno-cuts  0 | 86.5s speech | 11.2s padding |\n",
      "| batch  3 | #libri-cuts  7 | #yesno-cuts  1 | 98.4s speech | 42.6s padding |\n",
      "| batch  4 | #libri-cuts  7 | #yesno-cuts  0 | 93.7s speech | 15.9s padding |\n",
      "| batch  5 | #libri-cuts  8 | #yesno-cuts  1 | 98.7s speech | 59.9s padding |\n",
      "| batch  6 | #libri-cuts  7 | #yesno-cuts  0 | 96.8s speech |  7.8s padding |\n",
      "| batch  7 | #libri-cuts  7 | #yesno-cuts  0 | 96.5s speech | 17.3s padding |\n",
      "| batch  8 | #libri-cuts  8 | #yesno-cuts  0 | 87.3s speech | 41.9s padding |\n",
      "| batch  9 | #libri-cuts  7 | #yesno-cuts  0 | 89.5s speech | 17.1s padding |\n",
      "| batch 10 | #libri-cuts  6 | #yesno-cuts  1 | 92.3s speech | 35.3s padding |\n",
      "| batch 11 | #libri-cuts  8 | #yesno-cuts  0 | 91.6s speech | 31.9s padding |\n",
      "| batch 12 | #libri-cuts  6 | #yesno-cuts  0 | 87.7s speech |  6.3s padding |\n",
      "| batch 13 | #libri-cuts  7 | #yesno-cuts  0 | 89.1s speech | 25.0s padding |\n",
      "| batch 14 | #libri-cuts  6 | #yesno-cuts  1 | 95.3s speech | 32.8s padding |\n",
      "| batch 15 | #libri-cuts  7 | #yesno-cuts  0 | 97.9s speech | 11.3s padding |\n",
      "| batch 16 | #libri-cuts  7 | #yesno-cuts  1 | 92.7s speech | 55.9s padding |\n",
      "| batch 17 | #libri-cuts  7 | #yesno-cuts  0 | 94.6s speech | 15.7s padding |\n",
      "| batch 18 | #libri-cuts  6 | #yesno-cuts  1 | 89.6s speech | 37.5s padding |\n",
      "| batch 19 | #libri-cuts  7 | #yesno-cuts  0 | 92.9s speech | 21.0s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.2 Using BucketingSampler\n",
    "\n",
    "BucketingSampler also shuffles the full cutset in memory, so you can expect good randomness and less padding overall. The batch size is more dynamic with this type of sampler."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "cuts_train = cuts_yesno_train + cuts_libri_train\n",
    "\n",
    "sampler = BucketingSampler(cuts_train.to_eager(), max_duration=100, shuffle=True)\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  0 | 88.1s speech |  2.8s padding |\n",
      "| batch  1 | #libri-cuts  8 | #yesno-cuts  2 | 81.3s speech | 30.7s padding |\n",
      "| batch  2 | #libri-cuts  6 | #yesno-cuts  0 | 84.7s speech |  1.3s padding |\n",
      "| batch  3 | #libri-cuts  6 | #yesno-cuts  0 | 87.8s speech |  1.1s padding |\n",
      "| batch  4 | #libri-cuts  6 | #yesno-cuts  0 | 93.1s speech |  0.9s padding |\n",
      "| batch  5 | #libri-cuts 11 | #yesno-cuts  0 | 83.7s speech | 14.7s padding |\n",
      "| batch  6 | #libri-cuts  6 | #yesno-cuts  0 | 90.7s speech |  1.0s padding |\n",
      "| batch  7 | #libri-cuts  6 | #yesno-cuts  0 | 93.6s speech |  0.7s padding |\n",
      "| batch  8 | #libri-cuts  6 | #yesno-cuts  0 | 93.3s speech |  0.7s padding |\n",
      "| batch  9 | #libri-cuts  6 | #yesno-cuts  0 | 88.5s speech |  0.9s padding |\n",
      "| batch 10 | #libri-cuts  8 | #yesno-cuts  0 | 88.1s speech |  7.7s padding |\n",
      "| batch 11 | #libri-cuts  6 | #yesno-cuts  0 | 96.4s speech |  2.2s padding |\n",
      "| batch 12 | #libri-cuts  9 | #yesno-cuts  1 | 79.8s speech | 25.5s padding |\n",
      "| batch 13 | #libri-cuts  6 | #yesno-cuts  0 | 97.6s speech |  2.1s padding |\n",
      "| batch 14 | #libri-cuts  6 | #yesno-cuts  0 | 88.1s speech |  0.8s padding |\n",
      "| batch 15 | #libri-cuts  6 | #yesno-cuts  0 | 87.9s speech |  1.0s padding |\n",
      "| batch 16 | #libri-cuts  6 | #yesno-cuts  0 | 88.4s speech |  1.0s padding |\n",
      "| batch 17 | #libri-cuts  6 | #yesno-cuts  0 | 91.2s speech |  0.8s padding |\n",
      "| batch 18 | #libri-cuts  7 | #yesno-cuts  0 | 94.5s speech |  1.9s padding |\n",
      "| batch 19 | #libri-cuts  8 | #yesno-cuts  0 | 87.2s speech |  8.2s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 1.3 Using DynamicCutSampler\n",
    "\n",
    "This time, since the dynamic sampler performs shuffling with a fixed-window buffer, you'll notice that there is a higher concentration of YesNo cuts in the first few batches (even though shuffling is enabled!). This might cause some convergence issues during training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "cuts_train = cuts_yesno_train + cuts_libri_train\n",
    "\n",
    "sampler = DynamicCutSampler(cuts_train, max_duration=100, shuffle=True)\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  1 | 88.3s speech | 50.5s padding |\n",
      "| batch  1 | #libri-cuts  6 | #yesno-cuts  2 | 92.6s speech | 67.4s padding |\n",
      "| batch  2 | #libri-cuts  6 | #yesno-cuts  0 | 86.5s speech | 11.2s padding |\n",
      "| batch  3 | #libri-cuts  7 | #yesno-cuts  1 | 98.4s speech | 42.6s padding |\n",
      "| batch  4 | #libri-cuts  7 | #yesno-cuts  0 | 93.7s speech | 15.9s padding |\n",
      "| batch  5 | #libri-cuts  8 | #yesno-cuts  1 | 98.7s speech | 59.9s padding |\n",
      "| batch  6 | #libri-cuts  7 | #yesno-cuts  0 | 96.8s speech |  7.8s padding |\n",
      "| batch  7 | #libri-cuts  7 | #yesno-cuts  0 | 96.5s speech | 17.3s padding |\n",
      "| batch  8 | #libri-cuts  8 | #yesno-cuts  0 | 87.3s speech | 41.9s padding |\n",
      "| batch  9 | #libri-cuts  7 | #yesno-cuts  0 | 89.5s speech | 17.1s padding |\n",
      "| batch 10 | #libri-cuts  6 | #yesno-cuts  1 | 92.3s speech | 35.3s padding |\n",
      "| batch 11 | #libri-cuts  8 | #yesno-cuts  0 | 91.6s speech | 31.9s padding |\n",
      "| batch 12 | #libri-cuts  6 | #yesno-cuts  0 | 87.7s speech |  6.3s padding |\n",
      "| batch 13 | #libri-cuts  7 | #yesno-cuts  0 | 89.1s speech | 25.0s padding |\n",
      "| batch 14 | #libri-cuts  6 | #yesno-cuts  1 | 95.3s speech | 32.8s padding |\n",
      "| batch 15 | #libri-cuts  7 | #yesno-cuts  0 | 97.9s speech | 11.3s padding |\n",
      "| batch 16 | #libri-cuts  7 | #yesno-cuts  1 | 92.7s speech | 55.9s padding |\n",
      "| batch 17 | #libri-cuts  7 | #yesno-cuts  0 | 94.6s speech | 15.7s padding |\n",
      "| batch 18 | #libri-cuts  6 | #yesno-cuts  1 | 89.6s speech | 37.5s padding |\n",
      "| batch 19 | #libri-cuts  7 | #yesno-cuts  0 | 92.9s speech | 21.0s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Method 2: have two DataLoaders\n",
    "\n",
     "This method is also conceptually simple, although it requires writing the training loop a bit differently.\n",
    "\n",
    "**When is it good to use it?**\n",
    "\n",
    "✅ You want to stop the epoch as soon as the smallest dataset has been fully iterated. This effectively under-samples the larger datasets and compensates for dataset imbalance.\n",
    "\n",
    "**When to expect poor performance?**\n",
    "\n",
    "⚠️ You want to leverage 100% of data at your disposal -- this might not happen here.\n",
    "\n",
    "⚠️ You work with large datasets and Dynamic* samplers -- since you will be shuffling data lazily with a buffer window, during each epoch you'll probably see mostly the same examples from the larger dataset, just in a different order. Expect about `len(larger_dataset) - len(smaller_dataset)` examples from the larger dataset to be unused during training, unless you specifically design your code to alleviate that (e.g., by sharding the larger dataset and reading different shards each epoch)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/pzelasko/meaning/lhotse/lhotse/dataset/sampling/dynamic.py:115: UserWarning: You are using DynamicCutSampler with an eagerly read CutSet. You won't see any memory/speed benefits with that setup. Use e.g. 'CutSet.from_jsonl_lazy' to read the CutSet lazily.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "sampler_libri = DynamicCutSampler(cuts_libri_train, max_duration=100, shuffle=True)\n",
    "dloader_libri = torch.utils.data.DataLoader(\n",
    "    dataset, sampler=sampler_libri, batch_size=None\n",
    ")\n",
    "\n",
    "sampler_yesno = DynamicCutSampler(cuts_yesno_train, max_duration=100, shuffle=True)\n",
    "dloader_yesno = torch.utils.data.DataLoader(\n",
    "    dataset, sampler=sampler_yesno, batch_size=None\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  0 | #yesno-cuts 16 | 96.6s speech | 11.2s padding |\n",
      "| batch  1 | #libri-cuts  7 | #yesno-cuts  0 | 90.4s speech | 23.3s padding |\n",
      "| batch  2 | #libri-cuts  0 | #yesno-cuts 14 | 84.8s speech |  7.3s padding |\n",
      "| batch  3 | #libri-cuts  7 | #yesno-cuts  0 | 93.8s speech |  9.7s padding |\n"
     ]
    }
   ],
   "source": [
    "dloaders = [iter(dloader_yesno), iter(dloader_libri)]\n",
    "idx = 0\n",
    "while True:\n",
    "    choice = idx % 2\n",
    "    chosen_dloader = dloaders[choice]\n",
    "\n",
    "    try:\n",
    "        batch = next(chosen_dloader)\n",
    "    except StopIteration:\n",
    "        break\n",
    "\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )\n",
    "\n",
    "    idx += 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "## Method 2b: RoundRobinSampler\n",
    "\n",
     "This method is more straightforward to use than two DataLoaders and may be more memory-friendly and easier to manage, as it spawns sub-process workers from only a single DataLoader.\n",
    "\n",
     "The `stop_early` argument lets us choose between a balanced (`True`) and an imbalanced (`False`) mix of datasets.\n",
    "\n",
    "### `stop_early=False`\n",
    "\n",
     "Notice that yesno becomes exhausted after batch 3, and the rest of the epoch is purely mini LibriSpeech."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "sampler_libri = DynamicCutSampler(cuts_libri_train, max_duration=100, shuffle=True)\n",
    "sampler_yesno = DynamicCutSampler(cuts_yesno_train, max_duration=100, shuffle=True)\n",
    "\n",
    "sampler_both = RoundRobinSampler(sampler_libri, sampler_yesno, stop_early=False)\n",
    "\n",
    "dloader_both = torch.utils.data.DataLoader(\n",
    "    dataset, sampler=sampler_both, batch_size=None\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  0 | 90.4s speech | 23.3s padding |\n",
      "| batch  1 | #libri-cuts  0 | #yesno-cuts 16 | 96.6s speech | 11.2s padding |\n",
      "| batch  2 | #libri-cuts  7 | #yesno-cuts  0 | 93.8s speech |  9.7s padding |\n",
      "| batch  3 | #libri-cuts  0 | #yesno-cuts 14 | 84.8s speech |  7.3s padding |\n",
      "| batch  4 | #libri-cuts  8 | #yesno-cuts  0 | 89.0s speech | 40.6s padding |\n",
      "| batch  5 | #libri-cuts  7 | #yesno-cuts  0 | 97.2s speech |  9.2s padding |\n",
      "| batch  6 | #libri-cuts  7 | #yesno-cuts  0 | 87.7s speech | 22.8s padding |\n",
      "| batch  7 | #libri-cuts  6 | #yesno-cuts  0 | 86.0s speech |  5.7s padding |\n",
      "| batch  8 | #libri-cuts  6 | #yesno-cuts  0 | 91.8s speech |  8.8s padding |\n",
      "| batch  9 | #libri-cuts  7 | #yesno-cuts  0 | 90.5s speech | 19.4s padding |\n",
      "| batch 10 | #libri-cuts  9 | #yesno-cuts  0 | 98.5s speech | 35.9s padding |\n",
      "| batch 11 | #libri-cuts  7 | #yesno-cuts  0 | 95.0s speech | 15.6s padding |\n",
      "| batch 12 | #libri-cuts  8 | #yesno-cuts  0 | 96.0s speech | 24.9s padding |\n",
      "| batch 13 | #libri-cuts  8 | #yesno-cuts  0 | 93.4s speech | 23.0s padding |\n",
      "| batch 14 | #libri-cuts  7 | #yesno-cuts  0 | 95.9s speech | 15.5s padding |\n",
      "| batch 15 | #libri-cuts  7 | #yesno-cuts  0 | 91.7s speech | 19.0s padding |\n",
      "| batch 16 | #libri-cuts  7 | #yesno-cuts  0 | 98.4s speech | 13.1s padding |\n",
      "| batch 17 | #libri-cuts  8 | #yesno-cuts  0 | 97.3s speech | 29.0s padding |\n",
      "| batch 18 | #libri-cuts  7 | #yesno-cuts  0 | 88.8s speech | 19.1s padding |\n",
      "| batch 19 | #libri-cuts  8 | #yesno-cuts  0 | 91.5s speech | 35.4s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader_both):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "### Balanced mix of datasets `stop_early=True`\n",
    "\n",
     "Notice that the epoch consists of only 5 mini-batches, because one of the datasets (yesno) is extremely small."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "sampler_libri = DynamicCutSampler(cuts_libri_train, max_duration=100, shuffle=True)\n",
    "sampler_yesno = DynamicCutSampler(cuts_yesno_train, max_duration=100, shuffle=True)\n",
    "\n",
    "sampler_both = RoundRobinSampler(sampler_libri, sampler_yesno, stop_early=True)\n",
    "\n",
    "dloader_both = torch.utils.data.DataLoader(\n",
    "    dataset, sampler=sampler_both, batch_size=None\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  0 | 90.4s speech | 23.3s padding |\n",
      "| batch  1 | #libri-cuts  0 | #yesno-cuts 16 | 96.6s speech | 11.2s padding |\n",
      "| batch  2 | #libri-cuts  7 | #yesno-cuts  0 | 93.8s speech |  9.7s padding |\n",
      "| batch  3 | #libri-cuts  0 | #yesno-cuts 14 | 84.8s speech |  7.3s padding |\n",
      "| batch  4 | #libri-cuts  8 | #yesno-cuts  0 | 89.0s speech | 40.6s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader_both):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "# Method 3: CutSet multiplexing\n",
    "\n",
    "This method creates a lazily-evaluated CutSet out of two or more other CutSets. The result acts as a stochastic multiplexer.\n",
    "\n",
    "**When is it good to use it?**\n",
    "\n",
     "✅ You work with large datasets and Dynamic* samplers -- multiplexing doesn't require reading everything into memory and can improve the randomness of buffered shuffling.\n",
    "\n",
    "✅ (when `stop_early=True`) You want to stop the epoch as soon as the smallest dataset has been fully iterated. This effectively under-samples the larger datasets and compensates for dataset imbalance. \n",
    "\n",
    "**When to expect poor performance?**\n",
    "\n",
     "⚠️ You skipped setting the multiplexing weights even though the datasets are significantly imbalanced relative to each other.\n",
    "\n",
    "⚠️ (when `stop_early=True`) You work with large datasets and Dynamic* samplers -- since you will be shuffling data lazily with a buffer window, during each epoch you'll probably see mostly the same examples from the larger dataset, just in a different order. Expect about `len(larger_dataset) - len(smaller_dataset)` examples from the larger dataset to be unused during training, unless you specifically design your code to alleviate that (e.g., by sharding the larger dataset and reading different shards each epoch).\n",
    "\n",
    "⚠️ (when `stop_early=True`) You want to leverage 100% of data at your disposal -- this might not happen here."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Unweighted multiplexing\n",
    "\n",
    "This will tend to exhaust the smaller datasets much sooner than the larger ones, but will keep iterating until all data has been seen."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "cuts_train = CutSet.mux(cuts_libri_train, cuts_yesno_train)\n",
    "\n",
    "sampler = DynamicCutSampler(cuts_train, max_duration=100, shuffle=True)\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  2 | 95.5s speech | 74.2s padding |\n",
      "| batch  1 | #libri-cuts  5 | #yesno-cuts  3 | 84.8s speech | 91.2s padding |\n",
      "| batch  2 | #libri-cuts  7 | #yesno-cuts  0 | 91.3s speech | 22.8s padding |\n",
      "| batch  3 | #libri-cuts  7 | #yesno-cuts  0 | 99.6s speech |  7.7s padding |\n",
      "| batch  4 | #libri-cuts  6 | #yesno-cuts  1 | 91.0s speech | 35.9s padding |\n",
      "| batch  5 | #libri-cuts  8 | #yesno-cuts  1 | 99.0s speech | 60.5s padding |\n",
      "| batch  6 | #libri-cuts  7 | #yesno-cuts  0 | 96.0s speech |  8.7s padding |\n",
      "| batch  7 | #libri-cuts  8 | #yesno-cuts  0 | 98.9s speech | 30.2s padding |\n",
      "| batch  8 | #libri-cuts  8 | #yesno-cuts  0 | 98.9s speech | 22.9s padding |\n",
      "| batch  9 | #libri-cuts  7 | #yesno-cuts  0 | 86.5s speech | 22.0s padding |\n",
      "| batch 10 | #libri-cuts  8 | #yesno-cuts  0 | 99.7s speech | 27.9s padding |\n",
      "| batch 11 | #libri-cuts  7 | #yesno-cuts  0 | 92.0s speech | 16.3s padding |\n",
      "| batch 12 | #libri-cuts  7 | #yesno-cuts  0 | 94.0s speech | 15.6s padding |\n",
      "| batch 13 | #libri-cuts  6 | #yesno-cuts  1 | 89.0s speech | 41.4s padding |\n",
      "| batch 14 | #libri-cuts  6 | #yesno-cuts  1 | 90.8s speech | 37.3s padding |\n",
      "| batch 15 | #libri-cuts  6 | #yesno-cuts  0 | 89.7s speech |  9.4s padding |\n",
      "| batch 16 | #libri-cuts  8 | #yesno-cuts  0 | 96.2s speech | 24.0s padding |\n",
      "| batch 17 | #libri-cuts  7 | #yesno-cuts  0 | 98.0s speech | 12.3s padding |\n",
      "| batch 18 | #libri-cuts  6 | #yesno-cuts  1 | 90.5s speech | 39.7s padding |\n",
      "| batch 19 | #libri-cuts  7 | #yesno-cuts  0 | 90.7s speech | 18.0s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Early stopping (ends iteration when any of the cutsets gets depleted)\n",
    "\n",
    "This acts similarly to Method #2 with multiple DataLoaders and will balance your datasets by under-sampling the larger dataset. At the moment of writing, it is unclear to me which of these methods is better."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "cuts_train = CutSet.mux(\n",
    "    cuts_libri_train,\n",
    "    cuts_yesno_train,\n",
    "    stop_early=True,\n",
    ")\n",
    "\n",
    "sampler = DynamicCutSampler(cuts_train, max_duration=100, shuffle=True)\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  3 | #yesno-cuts  8 | 87.4s speech | 221.4s padding |\n",
      "| batch  1 | #libri-cuts  3 | #yesno-cuts  7 | 85.8s speech | 169.0s padding |\n",
      "| batch  2 | #libri-cuts  4 | #yesno-cuts  7 | 100.0s speech | 175.9s padding |\n",
      "| batch  3 | #libri-cuts  6 | #yesno-cuts  2 | 93.8s speech | 62.9s padding |\n",
      "| batch  4 | #libri-cuts  5 | #yesno-cuts  5 | 86.0s speech | 153.3s padding |\n",
      "| batch  5 | #libri-cuts  3 | #yesno-cuts  1 | 49.6s speech | 27.8s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Proportionally weighted multiplexing\n",
    "\n",
    "Weighting each source by its size distributes the examples from all datasets roughly uniformly throughout the epoch, even when the datasets are fairly large and the CutSets cannot be fully read into memory (e.g., opened with `CutSet.from_jsonl_lazy`)."
   ]
  },
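  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition, here is a toy, pure-Python sketch of what weighted multiplexing does conceptually (this is not Lhotse's actual implementation -- `mux_sketch` is a hypothetical helper): each step draws the next item from stream `i` with probability proportional to `weights[i]`, skipping depleted streams, until everything has been seen.\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "\n",
    "def mux_sketch(streams, weights, seed=0):\n",
    "    # Draw from each still-active stream with probability proportional\n",
    "    # to its weight; stop only when every stream is depleted.\n",
    "    rng = random.Random(seed)\n",
    "    iters = [iter(s) for s in streams]\n",
    "    active = list(range(len(iters)))\n",
    "    while active:\n",
    "        i = rng.choices(active, weights=[weights[j] for j in active])[0]\n",
    "        try:\n",
    "            yield next(iters[i])\n",
    "        except StopIteration:\n",
    "            active.remove(i)  # this stream is depleted; stop sampling it\n",
    "\n",
    "\n",
    "# A 100-item stream mixed with a 5-item stream, weighted by their sizes:\n",
    "mixed = list(mux_sketch([range(100), range(5)], weights=[100, 5]))\n",
    "```\n",
    "\n",
    "With size-proportional weights, the short stream's items end up scattered thinly across the whole epoch rather than clustered at its start."
   ]
  },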
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "cuts_train = CutSet.mux(\n",
    "    cuts_libri_train,\n",
    "    cuts_yesno_train,\n",
    "    weights=[len(cuts_libri_train), len(cuts_yesno_train)],\n",
    ")\n",
    "\n",
    "sampler = DynamicCutSampler(cuts_train, max_duration=100, shuffle=True)\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  7 | #yesno-cuts  0 | 95.1s speech | 18.6s padding |\n",
      "| batch  1 | #libri-cuts  7 | #yesno-cuts  0 | 86.7s speech | 25.4s padding |\n",
      "| batch  2 | #libri-cuts  8 | #yesno-cuts  0 | 98.1s speech | 26.1s padding |\n",
      "| batch  3 | #libri-cuts  7 | #yesno-cuts  0 | 88.1s speech | 19.9s padding |\n",
      "| batch  4 | #libri-cuts  8 | #yesno-cuts  1 | 94.9s speech | 68.2s padding |\n",
      "| batch  5 | #libri-cuts  7 | #yesno-cuts  0 | 86.8s speech | 28.3s padding |\n",
      "| batch  6 | #libri-cuts  7 | #yesno-cuts  0 | 97.6s speech | 14.2s padding |\n",
      "| batch  7 | #libri-cuts  7 | #yesno-cuts  0 | 92.9s speech | 22.0s padding |\n",
      "| batch  8 | #libri-cuts  6 | #yesno-cuts  3 | 93.4s speech | 83.8s padding |\n",
      "| batch  9 | #libri-cuts  8 | #yesno-cuts  0 | 94.5s speech | 32.1s padding |\n",
      "| batch 10 | #libri-cuts  7 | #yesno-cuts  0 | 95.9s speech | 11.6s padding |\n",
      "| batch 11 | #libri-cuts  7 | #yesno-cuts  0 | 98.9s speech | 13.1s padding |\n",
      "| batch 12 | #libri-cuts  7 | #yesno-cuts  0 | 91.3s speech | 12.7s padding |\n",
      "| batch 13 | #libri-cuts  7 | #yesno-cuts  0 | 97.6s speech | 19.5s padding |\n",
      "| batch 14 | #libri-cuts  7 | #yesno-cuts  0 | 96.3s speech | 14.3s padding |\n",
      "| batch 15 | #libri-cuts  8 | #yesno-cuts  0 | 96.4s speech | 26.0s padding |\n",
      "| batch 16 | #libri-cuts  7 | #yesno-cuts  0 | 89.1s speech | 20.1s padding |\n",
      "| batch 17 | #libri-cuts  8 | #yesno-cuts  0 | 94.8s speech | 26.8s padding |\n",
      "| batch 18 | #libri-cuts  6 | #yesno-cuts  0 | 86.4s speech |  7.7s padding |\n",
      "| batch 19 | #libri-cuts  9 | #yesno-cuts  1 | 96.0s speech | 76.0s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Method 4: ZipSampler\n",
    "\n",
    "This method creates a sampler out of other samplers. The resulting sampler yields mini-batches with a constant ratio of data from each source. E.g., if the first sampler has `max_duration=80` and the second has `max_duration=20`, you will usually get around 100s of speech per mini-batch in an 80:20 proportion. Iteration stops as soon as the smaller sampler is depleted.\n",
    "\n",
    "**When is it good to use it?**\n",
    "\n",
    "✅ You want to have a constant ratio of duration between each dataset in every mini-batch.\n",
    "\n",
    "✅ Your training examples have little or no variation in duration (e.g., fixed-size windows of utterances/recordings).\n",
    "\n",
    "✅ You work with small to medium sized datasets and can shuffle them completely in memory. During each epoch, you'll observe mostly different examples from the larger dataset.\n",
    "\n",
    "✅ You want to stop the epoch as soon as the smallest dataset has been fully iterated. This effectively under-samples the larger datasets and compensates for dataset imbalance. \n",
    "\n",
    "**When to expect poor performance?**\n",
    "\n",
    "⚠️ You are using bucketing samplers (dynamic or regular). In these cases, you will usually sample buckets of different cut durations from both sources, which adds excessive padding to your mini-batches.\n",
    "\n",
    "⚠️ You work with large datasets and Dynamic* samplers -- since you will be shuffling data lazily with a buffer window, during each epoch you'll probably see mostly the same examples from the larger dataset, just in a different order. Expect about `len(larger_dataset) - len(smaller_dataset)` examples from the larger dataset to be unused during training, unless you specifically design your code to alleviate that (e.g., by sharding the larger dataset and reading different shards each epoch).\n"
   ]
  },
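  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, zipping can be pictured as drawing one mini-batch from each child sampler per step and concatenating the pair. The toy sketch below (a plain-Python illustration, not Lhotse's `ZipSampler` internals) shows why iteration ends with the shorter stream and why the per-batch duration ratio stays roughly constant.\n",
    "\n",
    "```python\n",
    "def zip_batches(sampler_a, sampler_b):\n",
    "    # Take one mini-batch from each source per step and merge them;\n",
    "    # zip() stops as soon as either source is exhausted.\n",
    "    for batch_a, batch_b in zip(sampler_a, sampler_b):\n",
    "        yield batch_a + batch_b\n",
    "\n",
    "\n",
    "# Toy batch streams of cut durations: ~80s of long cuts vs ~20s of short ones.\n",
    "libri_like = [[8.0] * 10 for _ in range(5)]  # 5 batches, 80s each\n",
    "yesno_like = [[2.0] * 10 for _ in range(3)]  # 3 batches, 20s each\n",
    "merged = list(zip_batches(libri_like, yesno_like))\n",
    "```\n",
    "\n",
    "Every merged batch carries about 100s of speech in an 80:20 split, and only three batches are produced because the shorter stream runs out first."
   ]
  },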
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "sampler_libri = SimpleCutSampler(cuts_libri_train, max_duration=80, shuffle=True)\n",
    "sampler_yesno = SimpleCutSampler(cuts_yesno_train, max_duration=20, shuffle=True)\n",
    "\n",
    "sampler = ZipSampler(sampler_libri, sampler_yesno)\n",
    "\n",
    "dloader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "| batch  0 | #libri-cuts  6 | #yesno-cuts  3 | 90.0s speech | 94.5s padding |\n",
      "| batch  1 | #libri-cuts  6 | #yesno-cuts  3 | 94.7s speech | 92.8s padding |\n",
      "| batch  2 | #libri-cuts  5 | #yesno-cuts  3 | 91.1s speech | 82.7s padding |\n",
      "| batch  3 | #libri-cuts  7 | #yesno-cuts  3 | 92.9s speech | 104.0s padding |\n",
      "| batch  4 | #libri-cuts  5 | #yesno-cuts  3 | 86.2s speech | 84.3s padding |\n",
      "| batch  5 | #libri-cuts  5 | #yesno-cuts  3 | 89.0s speech | 88.4s padding |\n",
      "| batch  6 | #libri-cuts  7 | #yesno-cuts  3 | 92.9s speech | 107.2s padding |\n",
      "| batch  7 | #libri-cuts  5 | #yesno-cuts  3 | 90.5s speech | 81.8s padding |\n",
      "| batch  8 | #libri-cuts  5 | #yesno-cuts  3 | 88.7s speech | 89.9s padding |\n",
      "| batch  9 | #libri-cuts  5 | #yesno-cuts  3 | 88.8s speech | 82.8s padding |\n"
     ]
    }
   ],
   "source": [
    "for idx, batch in enumerate(dloader):\n",
    "    if idx == 20:\n",
    "        break\n",
    "    n_libri = len([cut for cut in batch if cut.origin == \"libri\"])\n",
    "    n_yesno = len(batch) - n_libri\n",
    "    tot_dur = sum(cut.duration for cut in batch)\n",
    "    pad_dur = sum(cut.duration for cut in batch.pad()) - tot_dur\n",
    "    print(\n",
    "        f\"| batch {idx:>2d} | #libri-cuts {n_libri:>2d} | #yesno-cuts {n_yesno:>2d} | {tot_dur:>4.1f}s speech | {pad_dur:>4.1f}s padding |\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: in the example above, there is excessive padding because YesNo has much shorter cuts than mini LibriSpeech, and every batch contains YesNo data. For most speech datasets, you shouldn't see such drastic differences in durations."
   ]
  },
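  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The padding overhead is easy to quantify with a back-of-the-envelope calculation (plain Python, not a Lhotse API): every cut in a batch is padded up to the longest duration present, so a single short cut among long ones balloons the total padding.\n",
    "\n",
    "```python\n",
    "def padding_seconds(durations):\n",
    "    # Each cut is padded up to the longest cut in the batch.\n",
    "    longest = max(durations)\n",
    "    return sum(longest - d for d in durations)\n",
    "\n",
    "\n",
    "homogeneous = [12.0, 13.5, 14.0]          # similar lengths: 2.5s of padding\n",
    "with_short_cut = [12.0, 13.5, 14.0, 6.0]  # one short cut: 10.5s of padding\n",
    "```\n",
    "\n",
    "This is why combining sources with similar duration distributions (or bucketing within each source) keeps batches efficient."
   ]
  },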
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
