{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright 2020 NVIDIA Corporation. All Rights Reserved.\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     http://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "# =============================================================================="
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# FastPitch: Voice Modification with Custom Transformations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model overview"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The [FastPitch](https://arxiv.org/abs/2006.06873) model is based on the [FastSpeech](https://arxiv.org/abs/1905.09263) model. Similarly to [FastSpeech2](https://arxiv.org/abs/2006.04558), which was developed concurrently, it learns to predict the pitch contour and conditions generation on that contour.\n",
    "\n",
    "Predicting pitch at the grapheme level (rather than at the frame level, as FastSpeech2 does) makes it easy to alter the pitch during synthesis. FastPitch can thus change the perceived emotional state of the speaker, or slightly emphasise certain lexical units."
   ]
  },
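  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the idea (with hypothetical values, not the model's actual predictions), a grapheme-level pitch contour is simply one value per input symbol, so modifying the voice amounts to elementwise arithmetic on a small tensor:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# one predicted pitch value (Hz) per grapheme of the input text;\n",
    "# 0.0 marks an unvoiced symbol\n",
    "pitch = torch.tensor([[112.0, 118.0, 0.0, 125.0, 131.0]])\n",
    "\n",
    "raised = pitch * 1.5  # shift the whole contour up by 50%\n",
    "```\n",
    "\n",
    "The custom pitch transforms demonstrated later in this notebook operate on tensors of exactly this shape."
   ]
  },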
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Requirements"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run the notebook inside the container. By default the container forwards port `8888`.\n",
    "```\n",
    "bash scripts/docker/interactive.sh\n",
    "\n",
    "# inside the container\n",
    "cd notebooks\n",
    "jupyter notebook --ip='*' --port=8888\n",
    "```\n",
    "Please refer to the Requirements section in `README.md` for more details, including how to run outside the container."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "assert os.getcwd().split('/')[-1] == 'notebooks'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generate audio samples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Training a FastPitch model from scratch takes 3 to 27 hours, depending on the type and number of GPUs; performance numbers can be found in the \"Training performance results\" section of `README.md`. To save time when running this notebook, we recommend downloading the pretrained FastPitch checkpoints from NGC for inference.\n",
    "\n",
    "You can find the FP32 checkpoint at [NGC](https://ngc.nvidia.com/catalog/models/nvidia:fastpitch_pyt_fp32_ckpt_v1/files), and the AMP (Automatic Mixed Precision) checkpoint at [NGC](https://ngc.nvidia.com/catalog/models/nvidia:fastpitch_pyt_amp_ckpt_v1/files).\n",
    "\n",
    "To synthesize audio, you will also need a WaveGlow model, which generates waveforms from the mel-spectrograms produced by FastPitch. You can download a pretrained WaveGlow AMP model at [NGC](https://ngc.nvidia.com/catalog/models/nvidia:waveglow256pyt_fp16)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! mkdir -p output\n",
    "! MODEL_DIR='../pretrained_models' ../scripts/download_fastpitch.sh\n",
    "! MODEL_DIR='../pretrained_models' ../scripts/download_waveglow.sh"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can perform inference using the respective checkpoints, passed via the `--fastpitch` and `--waveglow` arguments. Next, you will use the FastPitch model to generate audio samples for input text: first a basic version, then variations with a different pace and custom pitch transforms."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import IPython\n",
    "\n",
    "# store paths in aux variables\n",
    "fastp = '../pretrained_models/fastpitch/nvidia_fastpitch_200518.pt'\n",
    "waveg = '../pretrained_models/waveglow/waveglow_1076430_14000_amp.pt'\n",
    "flags = f'--cuda --fastpitch {fastp} --waveglow {waveg} --wn-channels 256'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Basic speech synthesis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You need to create an input file with some text, or simply enter the text in the cell below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile text.txt\n",
    "This is a sample sentence you can synthesize using this wonderful model!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Basic synthesis\n",
    "!python ../inference.py {flags} -i text.txt -o output/original --pace 0.75 > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/original/audio_0.wav\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. 'Low - high, odd - even' speech transformation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ../pitch_transform.py\n",
    "import torch\n",
    "import numpy as np\n",
    "\n",
    "def pitch_transform_custom(pitch, pitch_lens):\n",
    "    \"\"\"Apply a custom pitch transformation to predicted pitch values.\n",
    "\n",
    "    Odd - even sentence transformation.\n",
    "    This sample modification decreases the pitch for even words\n",
    "    and increases the pitch for odd words in the sentence.\n",
    "\n",
    "    PARAMS\n",
    "    ------\n",
    "    pitch: torch.Tensor (bs, max_len)\n",
    "        Predicted pitch values for each lexical unit, padded to max_len (in Hz).\n",
    "    pitch_lens: torch.Tensor (bs, )\n",
    "        Number of lexical units in each utterance.\n",
    "\n",
    "    RETURNS\n",
    "    -------\n",
    "    pitch: torch.Tensor\n",
    "        Modified pitch (in Hz).\n",
    "    \"\"\"\n",
    "    \n",
    "    sentence = 'This is a sample sentence you can synthesize using this wonderful model!'\n",
    "    sep_sums = np.cumsum(np.asarray([c == ' ' for c in sentence]))\n",
    "    transform = np.where(sep_sums % 2 == 0, 0.6, 1.2)\n",
    "    transform = torch.tensor(transform, dtype=torch.float32, device=pitch.device)\n",
    "\n",
    "    return pitch * transform"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Synthesis with pace 0.75 and odd - even sentence transformation\n",
    "!python ../inference.py {flags} -i text.txt -o output/custom --pitch-transform-custom --pace 0.75 > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/custom/audio_0.wav\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. 'Really' speech transformation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile text.txt\n",
    "Really? It sounds nothing like that."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Basic synthesis\n",
    "!python ../inference.py {flags} -i text.txt -o output/original_really > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/original_really/audio_0.wav\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ../pitch_transform.py\n",
    "import torch\n",
    "\n",
    "def pitch_transform_custom(pitch, pitch_lens):\n",
    "    \n",
    "    sentence = 'Really? It sounds nothing like that.'  # input text, for reference\n",
    "    \n",
    "    # Put emphasis on `lly?` in 'Really?'\n",
    "    for i in range(len('Rea'), len('Really?')):\n",
    "        pitch[0][i] = 280 + (i - 3) * 20\n",
    "\n",
    "    return pitch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Synthesis with 'really' question transformation and pace 0.9\n",
    "!python ../inference.py {flags} -i text.txt -o output/custom_really_question \\\n",
    "    --pitch-transform-custom --pace 0.9 > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/custom_really_question/audio_0.wav\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ../pitch_transform.py\n",
    "import torch\n",
    "\n",
    "def pitch_transform_custom(pitch, pitch_lens):\n",
    "    \n",
    "    sentence = 'Really? It sounds nothing like that.'  # input text, for reference\n",
    "    \n",
    "    # Lower the pitch progressively over 'Really?' for a sceptical tone\n",
    "    for i in range(len('Really?')):\n",
    "        pitch[0][i] = 215 - i * 10\n",
    "\n",
    "    return pitch * torch.tensor(0.8)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Synthesis with 'really' sceptical transformation and pace 0.9\n",
    "!python ../inference.py {flags} -i text.txt -o output/custom_really_sceptical \\\n",
    "    --pitch-transform-custom --pace 0.9 > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/custom_really_sceptical/audio_0.wav\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. 'Right' speech transformation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile text.txt\n",
    "It's obvious... right?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Basic synthesis\n",
    "!python ../inference.py {flags} -i text.txt -o output/original_right > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/original_right/audio_0.wav\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ../pitch_transform.py\n",
    "import torch\n",
    "\n",
    "def pitch_transform_custom(pitch, pitch_lens):\n",
    "            \n",
    "    pitch[0][-6] = 180  # R\n",
    "    pitch[0][-5] = 260  # i\n",
    "    pitch[0][-4] = 360  # g\n",
    "    pitch[0][-3] = 360  # h\n",
    "    pitch[0][-2] = 380  # t\n",
    "    pitch[0][-1] = 400  # ?\n",
    "\n",
    "    return pitch * torch.tensor(0.9)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Synthesis with 'right' question transformation\n",
    "!python ../inference.py {flags} -i text.txt -o output/custom_right_question \\\n",
    "    --pitch-transform-custom > /dev/null\n",
    "\n",
    "IPython.display.Audio(\"output/custom_right_question/audio_0.wav\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
