{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Defining a custom task\n",
    "\n",
    "In `pyannote.audio`, a *task* is a combination of a **_problem_** that needs to be addressed and an **experimental protocol**.\n",
    "\n",
    "For example, one can address **_voice activity detection_** following the **AMI only_words** experimental protocol, by instantiating the following *task*:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# this assumes that the AMI corpus has been setup for diarization\n",
    "# according to https://github.com/pyannote/AMI-diarization-setup\n",
    "import os\n",
    "os.environ['PYANNOTE_DATABASE_CONFIG'] = '/Users/bredin/Development/pyannote/pyannote-db/AMI-diarization-setup/pyannote/database.yml'\n",
    "\n",
    "from pyannote.database import get_protocol, FileFinder\n",
    "ami = get_protocol('AMI.SpeakerDiarization.only_words', \n",
    "                   preprocessors={'audio': FileFinder()})\n",
    "\n",
    "# address voice activity detection\n",
    "from pyannote.audio.tasks import VoiceActivityDetection\n",
    "task = VoiceActivityDetection(ami)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A growing collection of tasks is readily available in `pyannote.audio.tasks`..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyannote.audio.tasks import __all__ as TASKS; print('\\n'.join(TASKS))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "... but you will eventually want to use `pyannote.audio` to address a different task.  \n",
    "In this example, we will add a new task addressing the **sound event detection** problem.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problem specification\n",
    "\n",
    "A problem is expected to be solved by a model $f$ that takes an audio chunk  $X$ as input and returns its predicted solution $\\hat{y} = f(X)$. \n",
    "\n",
    "### Resolution\n",
    "\n",
    "Depending on the addressed problem, you might expect the model to output just one prediction for the whole audio chunk (`Resolution.CHUNK`) or a temporal sequence of predictions (`Resolution.FRAME`).\n",
    "\n",
    "In our particular case, we would like the model to provide one decision for the whole chunk:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyannote.audio.core.task import Resolution\n",
    "resolution = Resolution.CHUNK"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Type of problem\n",
    "\n",
    "Similarly, the type of your problem may fall into one of these generic machine learning categories:\n",
    "* `Problem.BINARY_CLASSIFICATION` for binary classification\n",
    "* `Problem.MONO_LABEL_CLASSIFICATION` for multi-class classification \n",
    "* `Problem.MULTI_LABEL_CLASSIFICATION` for multi-label classification\n",
    "* `Problem.REGRESSION` for regression\n",
    "* `Problem.REPRESENTATION` for representation learning\n",
    "\n",
    "In our particular case, we would like the model to do multi-label classification because one audio chunk may contain multiple sound events:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyannote.audio.core.task import Problem\n",
    "problem = Problem.MULTI_LABEL_CLASSIFICATION"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both choices, along with the chunk duration and the list of classes, are gathered into a `Specifications` object:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyannote.audio.core.task import Specifications\n",
    "specifications = Specifications(\n",
    "    problem=problem,\n",
    "    resolution=resolution,\n",
    "    duration=5.0,\n",
    "    classes=[\"Speech\", \"Dog\", \"Cat\", \"Alarm_bell_ringing\", \"Dishes\", \n",
    "             \"Frying\", \"Blender\", \"Running_water\", \"Vacuum_cleaner\", \n",
    "             \"Electric_shaver_toothbrush\"],\n",
    ")"
   ]
  },
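  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can read back the fields of the resulting `specifications` (the attributes below are simply the ones set in the previous cell):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# read back the fields defined above\n",
    "print(specifications.problem)\n",
    "print(specifications.resolution)\n",
    "print(f\"{len(specifications.classes)} classes, {specifications.duration:g}s chunks\")"
   ]
  },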
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Task implementation\n",
     "\n",
     "Recall that a task is expected to be solved by a model $f$ that (usually) takes an audio chunk $X$ as input and returns its predicted solution $\\hat{y} = f(X)$.\n",
     "\n",
     "To help train the model $f$, the task $\\mathcal{T}$ is in charge of:\n",
     "- generating $(X, y)$ training samples from the **dataset**\n",
     "- defining the loss function $\\mathcal{L}(y, \\hat{y})$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import math\n",
     "from typing import Optional, Tuple, Union\n",
     "\n",
     "import numpy as np\n",
     "from torch_audiomentations.core.transforms_interface import BaseWaveformTransform\n",
     "\n",
     "from pyannote.core import Segment, SlidingWindow\n",
     "from pyannote.database import Protocol\n",
     "from pyannote.audio.core.task import Problem, Resolution, Specifications, Task\n",
     "from pyannote.audio.utils.random import create_rng_for_worker\n",
    "\n",
    "# Your custom task must be a subclass of `pyannote.audio.core.task.Task`\n",
    "class SoundEventDetection(Task):\n",
    "    \"\"\"Sound event detection\"\"\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        protocol: Protocol,\n",
     "        duration: float = 5.0,\n",
     "        min_duration: float = None,\n",
    "        warm_up: Union[float, Tuple[float, float]] = 0.0,\n",
    "        batch_size: int = 32,\n",
     "        num_workers: Optional[int] = None,\n",
     "        pin_memory: bool = False,\n",
     "        augmentation: Optional[BaseWaveformTransform] = None,\n",
    "        **other_params,\n",
    "    ):\n",
    "\n",
    "        super().__init__(\n",
    "            protocol,\n",
    "            duration=duration,\n",
    "            min_duration=min_duration,\n",
    "            warm_up=warm_up,\n",
    "            batch_size=batch_size,\n",
    "            num_workers=num_workers,\n",
    "            pin_memory=pin_memory,\n",
    "            augmentation=augmentation,\n",
    "        )\n",
    "\n",
    "    def setup(self, stage=None):\n",
    "\n",
    "        if stage == \"fit\":\n",
    "\n",
    "            # load metadata for training subset\n",
    "            self.train_metadata_ = list()\n",
    "            for training_file in self.protocol.train():\n",
     "                self.train_metadata_.append({\n",
    "                    # path to audio file (str)\n",
    "                    \"audio\": training_file[\"audio\"],\n",
    "                    # duration of audio file (float)\n",
    "                    \"duration\": training_file[\"duration\"],\n",
    "                    # reference annotation (pyannote.core.Annotation)\n",
    "                    \"annotation\": training_file[\"annotation\"],\n",
    "                })\n",
    "\n",
    "            # gather the list of classes\n",
    "            classes = set()\n",
    "            for training_file in self.train_metadata_:\n",
     "                classes.update(training_file[\"annotation\"].labels())\n",
    "            classes = sorted(classes)\n",
    "\n",
    "            # specify the addressed problem\n",
    "            self.specifications = Specifications(\n",
    "                # it is a multi-label classification problem\n",
    "                problem=Problem.MULTI_LABEL_CLASSIFICATION,\n",
    "                # we expect the model to output one prediction \n",
    "                # for the whole chunk\n",
    "                resolution=Resolution.CHUNK,\n",
    "                # the model will ingest chunks with that duration (in seconds)\n",
    "                duration=self.duration,\n",
    "                # human-readable names of classes\n",
    "                classes=classes)\n",
    "\n",
    "            # `has_validation` is True iff protocol defines a development set\n",
    "            if not self.has_validation:\n",
    "                return\n",
    "\n",
    "            # load metadata for validation subset\n",
    "            self.validation_metadata_ = list()\n",
    "            for validation_file in self.protocol.development():\n",
    "                self.validation_metadata_.append({\n",
    "                    \"audio\": validation_file[\"audio\"],\n",
    "                    \"num_samples\": math.floor(validation_file[\"duration\"] / self.duration),\n",
    "                    \"annotation\": validation_file[\"annotation\"],\n",
    "                })\n",
     "\n",
    "    def train__iter__(self):\n",
    "        # this method generates training samples, one at a time, \"ad infinitum\". each worker \n",
    "        # of the dataloader will run it, independently from other workers. pyannote.audio and\n",
    "        # pytorch-lightning will take care of making batches out of it.\n",
    "\n",
    "        # create worker-specific random number generator (RNG) to avoid this common bug:\n",
    "        # tanelp.github.io/posts/a-bug-that-plagues-thousands-of-open-source-ml-projects/\n",
    "        rng = create_rng_for_worker(self.model.current_epoch)\n",
    "\n",
    "        # load list and number of classes\n",
    "        classes = self.specifications.classes\n",
    "        num_classes = len(classes)\n",
    "\n",
    "        # yield training samples \"ad infinitum\"\n",
    "        while True:\n",
    "\n",
    "            # select training file at random\n",
    "            random_training_file, *_ = rng.choices(self.train_metadata_, k=1)\n",
    "\n",
    "            # select one chunk at random \n",
    "            random_start_time = rng.uniform(0, random_training_file[\"duration\"] - self.duration)\n",
    "            random_chunk = Segment(random_start_time, random_start_time + self.duration)\n",
    "\n",
    "            # load audio excerpt corresponding to random chunk\n",
    "            X = self.model.audio.crop(random_training_file[\"audio\"], \n",
    "                                      random_chunk, \n",
    "                                      fixed=self.duration)\n",
    "            \n",
    "            # load labels corresponding to random chunk as {0|1} numpy array\n",
    "            # y[k] = 1 means that kth class is active\n",
    "            y = np.zeros((num_classes,))\n",
    "            active_classes = random_training_file[\"annotation\"].crop(random_chunk).labels()\n",
    "            for active_class in active_classes:\n",
    "                y[classes.index(active_class)] = 1\n",
    "        \n",
    "            # yield training samples as a dict (use 'X' for input and 'y' for target)\n",
    "            yield {'X': X, 'y': y}\n",
    "\n",
    "    def train__len__(self):\n",
    "        # since train__iter__ runs \"ad infinitum\", we need a way to define what an epoch is.\n",
    "        # this is the purpose of this method. it outputs the number of training samples that\n",
    "        # make an epoch.\n",
    "\n",
    "        # we compute this number as the total duration of the training set divided by \n",
    "        # duration of training chunks. we make sure that an epoch is at least one batch long,\n",
    "        # or pytorch-lightning will complain\n",
    "        train_duration = sum(training_file[\"duration\"] for training_file in self.train_metadata_)\n",
    "        return max(self.batch_size, math.ceil(train_duration / self.duration))\n",
    "\n",
    "    def val__getitem__(self, sample_idx):\n",
    "\n",
    "        # load list and number of classes\n",
    "        classes = self.specifications.classes\n",
    "        num_classes = len(classes)\n",
    "\n",
    "\n",
    "        # find which part of the validation set corresponds to sample_idx\n",
    "        num_samples = np.cumsum([\n",
    "            validation_file[\"num_samples\"] for validation_file in self.validation_metadata_])\n",
     "        file_idx = np.where(sample_idx < num_samples)[0][0]\n",
    "        validation_file = self.validation_metadata_[file_idx]\n",
    "        idx = sample_idx - (num_samples[file_idx] - validation_file[\"num_samples\"]) \n",
    "        chunk = SlidingWindow(start=0., duration=self.duration, step=self.duration)[idx]\n",
    "\n",
    "        # load audio excerpt corresponding to current chunk\n",
    "        X = self.model.audio.crop(validation_file[\"audio\"], chunk, fixed=self.duration)\n",
    "\n",
     "        # load labels corresponding to current chunk as {0|1} numpy array\n",
     "        # y[k] = 1 means that kth class is active\n",
    "        y = np.zeros((num_classes,))\n",
     "        active_classes = validation_file[\"annotation\"].crop(chunk).labels()\n",
    "        for active_class in active_classes:\n",
    "            y[classes.index(active_class)] = 1\n",
    "\n",
    "        return {'X': X, 'y': y}\n",
    "\n",
    "    def val__len__(self):\n",
    "        return sum(validation_file[\"num_samples\"] \n",
    "                   for validation_file in self.validation_metadata_)\n",
    "\n",
     "    # the `pyannote.audio.core.task.Task` base class provides default `training_step` and\n",
     "    # `validation_step` methods that rely on self.specifications to guess which loss\n",
     "    # and metrics should be used. you can obviously choose to override them.\n",
     "    # more details can be found in the pytorch-lightning documentation and in the\n",
     "    # pyannote.audio.core.task.Task source code.\n",
    "\n",
    "    # def training_step(self, batch, batch_idx: int):\n",
    "    #    return loss\n",
    "\n",
    "    # def validation_step(self, batch, batch_idx: int):\n",
    "    #    return metric\n",
    "\n",
     "    # pyannote.audio.tasks.segmentation.mixin also provides a convenient mixin\n",
     "    # for \"segmentation\" tasks (i.e. with Resolution.FRAME) that already defines\n",
     "    # a bunch of useful methods.\n"
   ]
  }
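  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using the custom task\n",
    "\n",
    "The new task can now be instantiated like any built-in one. The snippet below is only a sketch: `MyDatabase.SoundEventDetection.MyProtocol` is a hypothetical protocol name, to be replaced by one actually defined in your own `PYANNOTE_DATABASE_CONFIG`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# hypothetical protocol name: replace with one defined in your own database.yml\n",
    "from pyannote.database import get_protocol, FileFinder\n",
    "sed_protocol = get_protocol('MyDatabase.SoundEventDetection.MyProtocol',\n",
    "                            preprocessors={'audio': FileFinder()})\n",
    "\n",
    "task = SoundEventDetection(sed_protocol, duration=5.0, batch_size=32)"
   ]
  }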
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.8.5 64-bit ('pyannote-audio-v2': conda)",
   "name": "python385jvsc74a57bd0af55542e943232842f746a64555e4e006c72c98a3a863e85e6cbaf12772fa219"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
