{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "RYGnI-EZp_nK"
   },
   "source": [
    "# Getting Started: Sample Conversational AI application\n",
    "This notebook shows how to use NVIDIA NeMo (https://github.com/NVIDIA/NeMo) to build a toy demo that translates a Russian audio file into an English one.\n",
    "\n",
    "The demo demonstrates how to: \n",
    "\n",
    "* Instantiate pre-trained NeMo models from NVIDIA NGC.\n",
    "* Transcribe audio with a (Russian) speech recognition model.\n",
    "* Translate the text with a machine translation model.\n",
    "* Generate audio with text-to-speech models."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "V72HXYuQ_p9a"
   },
   "source": [
    "## Installation\n",
    "NeMo can be installed with a single pip command.\n",
    "This will take about 4 minutes.\n",
    "\n",
    "(The installation method below should work inside a fresh Conda environment or an NVIDIA Docker container.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "efDmTWf1_iYK"
   },
   "outputs": [],
   "source": [
    "BRANCH = 'r1.0.0rc1'\n",
    "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "EyJ5HiiPrPKA"
   },
   "source": [
    "## Import all necessary packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "tdUqxeUEA8nw"
   },
   "outputs": [],
   "source": [
    "# Import NeMo and its ASR, NLP and TTS collections\n",
    "import nemo\n",
    "# Import Speech Recognition collection\n",
    "import nemo.collections.asr as nemo_asr\n",
    "# Import Natural Language Processing collection\n",
    "import nemo.collections.nlp as nemo_nlp\n",
    "# Import Speech Synthesis collection\n",
    "import nemo.collections.tts as nemo_tts\n",
    "# We'll use this to listen to audio\n",
    "import IPython"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "bt2EZyU3A1aq"
   },
   "source": [
    "## Instantiate pre-trained NeMo models\n",
    "\n",
    "Every NeMo model has these methods:\n",
    "\n",
    "* ``list_available_models()`` - lists all models currently available on NGC, along with their names.\n",
    "\n",
    "* ``from_pretrained(...)`` - downloads and initializes a model directly from NGC using its name.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YNNHs5Xjr8ox"
   },
   "outputs": [],
   "source": [
    "# Here is an example of all CTC-based models:\n",
    "nemo_asr.models.EncDecCTCModel.list_available_models()\n",
    "# More ASR Models are available - see: nemo_asr.models.ASRModel.list_available_models()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "1h9nhICjA5Dk"
   },
   "outputs": [],
   "source": [
    "# Speech Recognition model - QuartzNet trained on the Russian part of MCV 6.0\n",
    "quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name=\"stt_ru_quartznet15x5\").cuda()\n",
    "# Neural Machine Translation model\n",
    "nmt_model = nemo_nlp.models.MTEncDecModel.from_pretrained(model_name='nmt_ru_en_transformer6x6').cuda()\n",
    "# Spectrogram generator which takes text as input and produces a spectrogram\n",
    "spectrogram_generator = nemo_tts.models.Tacotron2Model.from_pretrained(model_name=\"tts_en_tacotron2\").cuda()\n",
    "# Vocoder model which takes a spectrogram and produces actual audio\n",
    "vocoder = nemo_tts.models.WaveGlowModel.from_pretrained(model_name=\"tts_waveglow_88m\").cuda()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KPota-JtsqSY"
   },
   "source": [
    "## Get an audio sample in Russian"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "7cGCEKkcLr52"
   },
   "outputs": [],
   "source": [
    "# Download the audio sample which we'll transcribe\n",
    "# This is a sample from the MCV 6.0 Dev dataset - the model hasn't seen it before\n",
    "# IMPORTANT: The audio must be mono with a 16 kHz sampling rate\n",
    "Audio_sample = 'common_voice_ru_19034087.wav'\n",
    "!wget 'https://nemo-public.s3.us-east-2.amazonaws.com/mcv-samples-ru/common_voice_ru_19034087.wav'\n",
    "# To listen to it, click the play button below\n",
    "IPython.display.Audio(Audio_sample)"
   ]
  },
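  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to try your own recording instead, it first needs to be converted to mono 16 kHz. The sketch below (commented out) shows one way to do this, assuming the `librosa` and `soundfile` packages are installed; the file names are placeholders."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: convert your own recording to mono 16 kHz\n",
    "# (assumes librosa and soundfile are installed; file names are placeholders)\n",
    "# import librosa\n",
    "# import soundfile as sf\n",
    "# signal, _ = librosa.load('my_recording.wav', sr=16000, mono=True)\n",
    "# sf.write('my_recording_16k.wav', signal, 16000)"
   ]
  },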
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BaCdNJhhtBfM"
   },
   "source": [
    "## Transcribe audio file\n",
    "We will use the speech recognition model to convert the audio into text.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "KTA7jM6sL6yC"
   },
   "outputs": [],
   "source": [
    "russian_text = quartznet.transcribe([Audio_sample])\n",
    "print(russian_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BjYb2TMtttCc"
   },
   "source": [
    "## Translate Russian text into English\n",
    "NeMo's NMT models have a handy ``.translate()`` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "kQTdE4b9Nm9O"
   },
   "outputs": [],
   "source": [
    "english_text = nmt_model.translate(russian_text)\n",
    "print(english_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9Rppc59Ut7uy"
   },
   "source": [
    "## Generate English audio from text\n",
    "Speech generation from text typically has two steps:\n",
    "* Generate a spectrogram from the text. In this example we will use the Tacotron 2 model for this step.\n",
    "* Generate actual audio from the spectrogram. In this example we will use the WaveGlow model for this step.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wpMYfufgNt15"
   },
   "outputs": [],
   "source": [
    "# A helper function which combines Tacotron2 and WaveGlow to go directly from \n",
    "# text to audio\n",
    "def text_to_audio(text):\n",
    "  parsed = spectrogram_generator.parse(text)\n",
    "  spectrogram = spectrogram_generator.generate_spectrogram(tokens=parsed)\n",
    "  audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)\n",
    "  # Detach from the autograd graph before converting to NumPy\n",
    "  return audio.detach().to('cpu').numpy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "rsmx9mk0N_NL"
   },
   "outputs": [],
   "source": [
    "# Listen to generated audio in English\n",
    "IPython.display.Audio(text_to_audio(english_text[0]), rate=22050)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LiQ_GQpcBYUs"
   },
   "source": [
    "## Next steps\n",
    "A demo like this is great for prototyping and experimentation. However, for real production deployment, you would want to use a service like [NVIDIA Jarvis](https://developer.nvidia.com/nvidia-jarvis).\n",
    "\n",
    "**NeMo is built for training.** You can fine-tune all of the models used in this example, or train them from scratch on your own data. We recommend checking out the following, more in-depth tutorials next:\n",
    "\n",
    "* [NeMo fundamentals](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/00_NeMo_Primer.ipynb)\n",
    "* [NeMo models](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/01_NeMo_Models.ipynb)\n",
    "* [Speech Recognition](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/01_ASR_with_NeMo.ipynb)\n",
    "* [Punctuation and Capitalization](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/nlp/Punctuation_and_Capitalization.ipynb)\n",
    "* [Speech Synthesis](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/tts/1_TTS_inference.ipynb)\n",
    "\n",
    "\n",
    "You can find scripts for training and fine-tuning ASR, NLP and TTS models [here](https://github.com/NVIDIA/NeMo/tree/r1.0.0rc1/examples). "
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "name": "NeMo Getting Started",
   "provenance": [],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}