sudip1310 commited on
Commit
5c9d239
1 Parent(s): d8234ad

Upload Demo_DL_Based_Emotional_TTS_TF1.ipynb

Browse files
Demo_DL_Based_Emotional_TTS_TF1.ipynb ADDED
@@ -0,0 +1 @@
 
 
1
+ {"nbformat": 4, "nbformat_minor": 0, "metadata": {"colab": {"provenance": [], "gpuType": "T4"}, "kernelspec": {"display_name": "Python 3", "name": "python3"}, "accelerator": "GPU", "gpuClass": "standard"}, "cells": [{"cell_type": "markdown", "metadata": {"id": "f9ds3NJZ5T92"}, "source": ["# DL Based Emotional Text to Speech\n", "\n", "In this demo, we provide an interface to generate emotional speech from user inputs for both the emotional label and the text.\n", "\n", "The models that are trained are [Tacotron](https://github.com/Emotional-Text-to-Speech/tacotron_pytorch) and [DC-TTS](https://github.com/Emotional-Text-to-Speech/pytorch-dc-tts).\n", "\n", "Further information about our approaches and *exactly how* did we develop this demo can be seen [here](https://github.com/Emotional-Text-to-Speech/dl-for-emo-tts).\n", "\n", "---\n", "---\n"]}, {"cell_type": "markdown", "metadata": {"id": "_7gRpQLXQSID"}, "source": ["## Download the required code and install the dependences\n", "\n", "- Make sure you have clicked on ```Open in Playground``` to be able to run the cells. Set your runtime to ```GPU```. This can be done with the following steps:\n", " - Click on ```Runtime``` on the menubar above \n", " - Select ```Change runtime type```\n", " - Select ```GPU``` from the ```Hardware accelerator``` dropdown and save.\n", "- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```\n", "\n", "\n"]}, {"cell_type": "code", "metadata": {"id": "V4d2LXHbC-Es", "colab": {"base_uri": "https://localhost:8080/"}, "outputId": "f82955ce-fba2-40ae-d550-dc2b4b07a57c"}, "source": "! git clone https://github.com/Emotional-Text-to-Speech/pytorch-dc-tts\n! git clone --recursive https://github.com/Emotional-Text-to-Speech/tacotron_pytorch.git\n! cd \"tacotron_pytorch/\" && pip install -e .\n! pip install unidecode\n! pip install gdown\n! mkdir trained_models\n\nimport gdown\nurl = 'https://drive.google.com/uc?id=1rmhtEl3N3kAfnQM6J0vDGSCCHlHLK6kw'\noutput = 'trained_models/angry_dctts.pth'\ngdown.download(url, output, quiet=False)\nurl = 'https://drive.google.com/uc?id=1bP0eJ6z4onr2klolzU17Y8SaNspxQjF-'\noutput = 'trained_models/neutral_dctts.pth'\ngdown.download(url, output, quiet=False)\nurl = 'https://drive.google.com/uc?id=1WWE9zxS3FRgD0Y5yIdNmLY9-t5gnBsNt'\noutput = 'trained_models/ssrn.pth'\ngdown.download(url, output, quiet=False)\nurl = 'https://drive.google.com/uc?id=1N6Ykrd1IaPiNdos_iv0J6JbY2gBDghod'\noutput = 'trained_models/disgust_tacotron.pth'\ngdown.download(url, output, quiet=False)\nurl = 'https://drive.google.com/uc?id=15m0PZ8xaBocb_6wDjAU6S4Aunbr3TKkM'\noutput = 'trained_models/amused_tacotron.pth'\ngdown.download(url, output, quiet=False)\nurl = 'https://drive.google.com/uc?id=1D6HGWYWvhdvLWQt4uOYqdmuVO7ZVLWNa'\noutput = 'trained_models/sleepiness_tacotron.pth'\ngdown.download(url, output, quiet=False)", "execution_count": null, "outputs": [{"output_type": "stream", "name": "stdout", "text": ["Cloning into 'pytorch-dc-tts'...\n", "remote: Enumerating objects: 1904, done.\u001b[K\n", "remote: Counting objects: 100% (319/319), done.\u001b[K\n", "remote: Compressing objects: 100% (233/233), done.\u001b[K\n", "remote: Total 1904 (delta 91), reused 302 (delta 86), pack-reused 1585\u001b[K\n", "Receiving objects: 100% (1904/1904), 277.55 MiB | 22.02 MiB/s, done.\n", "Resolving deltas: 100% (226/226), done.\n", "Cloning into 'tacotron_pytorch'...\n", "remote: Enumerating objects: 150, done.\u001b[K\n", "remote: Total 150 (delta 0), reused 0 (delta 0), pack-reused 150\u001b[K\n", "Receiving objects: 100% (150/150), 21.19 MiB | 23.16 MiB/s, done.\n", "Resolving deltas: 100% (71/71), done.\n", "Submodule 'lib/tacotron' (https://github.com/r9y9/tacotron) registered for path 'lib/tacotron'\n", "Cloning into '/content/tacotron_pytorch/lib/tacotron'...\n", "remote: Enumerating objects: 212, done. \n", "remote: Total 212 (delta 0), reused 0 (delta 0), pack-reused 212 \n", "Receiving objects: 100% (212/212), 62.01 KiB | 4.77 MiB/s, done.\n", "Resolving deltas: 100% (111/111), done.\n", "Submodule path 'lib/tacotron': checked out '0987cedd0d6a6909749c594ca978ac4e11ae79ae'\n", "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n", "Obtaining file:///content/tacotron_pytorch\n", " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n", "Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from tacotron-pytorch==0.0.1+e6b1a39) (1.22.4)\n", "Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from tacotron-pytorch==0.0.1+e6b1a39) (1.10.1)\n", "Installing collected packages: tacotron-pytorch\n", " Running setup.py develop for tacotron-pytorch\n", "Successfully installed tacotron-pytorch-0.0.1+e6b1a39\n", "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n", "Collecting unidecode\n", " Downloading Unidecode-1.3.6-py3-none-any.whl (235 kB)\n", "\u001b[2K \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m235.9/235.9 kB\u001b[0m \u001b[31m11.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hInstalling collected packages: unidecode\n", "Successfully installed unidecode-1.3.6\n", "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n", "Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (4.6.6)\n", "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from gdown) (3.12.0)\n", "Requirement already satisfied: requests[socks] in /usr/local/lib/python3.10/dist-packages (from gdown) (2.27.1)\n", "Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from gdown) (1.16.0)\n", "Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from gdown) (4.65.0)\n", "Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.10/dist-packages (from gdown) (4.11.2)\n", "Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.10/dist-packages (from beautifulsoup4->gdown) (2.4.1)\n", "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (1.26.15)\n", "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2022.12.7)\n", "Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2.0.12)\n", "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.4)\n", "Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (1.7.1)\n", "Access denied with the following error:\n", "Access denied with the following error:\n"]}, {"output_type": "stream", "name": "stderr", "text": ["\n", " \tCannot retrieve the public link of the file. You may need to change\n", "\tthe permission to 'Anyone with the link', or have had many accesses. \n", "\n", "You may still be able to access the file from the browser:\n", "\n", "\t https://drive.google.com/uc?id=1rmhtEl3N3kAfnQM6J0vDGSCCHlHLK6kw \n", "\n", "\n", " \tCannot retrieve the public link of the file. You may need to change\n", "\tthe permission to 'Anyone with the link', or have had many accesses. \n", "\n", "You may still be able to access the file from the browser:\n", "\n", "\t https://drive.google.com/uc?id=1bP0eJ6z4onr2klolzU17Y8SaNspxQjF- \n", "\n"]}, {"output_type": "stream", "name": "stdout", "text": ["Access denied with the following error:\n"]}, {"output_type": "stream", "name": "stderr", "text": ["\n", " \tCannot retrieve the public link of the file. You may need to change\n", "\tthe permission to 'Anyone with the link', or have had many accesses. \n", "\n", "You may still be able to access the file from the browser:\n", "\n", "\t https://drive.google.com/uc?id=1WWE9zxS3FRgD0Y5yIdNmLY9-t5gnBsNt \n", "\n"]}, {"output_type": "stream", "name": "stdout", "text": ["Access denied with the following error:\n", "Access denied with the following error:\n"]}, {"output_type": "stream", "name": "stderr", "text": ["\n", " \tCannot retrieve the public link of the file. You may need to change\n", "\tthe permission to 'Anyone with the link', or have had many accesses. \n", "\n", "You may still be able to access the file from the browser:\n", "\n", "\t https://drive.google.com/uc?id=1N6Ykrd1IaPiNdos_iv0J6JbY2gBDghod \n", "\n", "\n", " \tCannot retrieve the public link of the file. You may need to change\n", "\tthe permission to 'Anyone with the link', or have had many accesses. \n", "\n", "You may still be able to access the file from the browser:\n", "\n", "\t https://drive.google.com/uc?id=15m0PZ8xaBocb_6wDjAU6S4Aunbr3TKkM \n", "\n"]}, {"output_type": "stream", "name": "stdout", "text": ["Access denied with the following error:\n"]}, {"output_type": "stream", "name": "stderr", "text": ["\n", " \tCannot retrieve the public link of the file. You may need to change\n", "\tthe permission to 'Anyone with the link', or have had many accesses. \n", "\n", "You may still be able to access the file from the browser:\n", "\n", "\t https://drive.google.com/uc?id=1D6HGWYWvhdvLWQt4uOYqdmuVO7ZVLWNa \n", "\n"]}]}, {"cell_type": "markdown", "metadata": {"id": "LaZ8INV0IOgH"}, "source": ["## Setup the required code\n", "\n", "- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```"]}, {"cell_type": "code", "source": "!pip install docopt", "metadata": {"colab": {"base_uri": "https://localhost:8080/"}, "id": "cPUGU04KJOkK", "outputId": "4b886b91-7f5b-405d-8182-f12edde07565"}, "execution_count": null, "outputs": [{"output_type": "stream", "name": "stdout", "text": ["Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n", "Collecting docopt\n", " Downloading docopt-0.6.2.tar.gz (25 kB)\n", " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n", "Building wheels for collected packages: docopt\n", " Building wheel for docopt (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for docopt: filename=docopt-0.6.2-py2.py3-none-any.whl size=13707 sha256=ff54f207c0eee20da8801ef418e9b13ae4d7f932cb5436c451536d0af4d9f7c1\n", " Stored in directory: /root/.cache/pip/wheels/fc/ab/d4/5da2067ac95b36618c629a5f93f809425700506f72c9732fac\n", "Successfully built docopt\n", "Installing collected packages: docopt\n", "Successfully installed docopt-0.6.2\n"]}]}, {"cell_type": "code", "metadata": {"id": "GnZPVzThIBkd", "colab": {"base_uri": "https://localhost:8080/", "height": 415}, "outputId": "ef47d585-4d92-4384-a5d1-1e1875e2c584"}, "source": "#%tensorflow_version 1.x\n#%pylab inline\n#rcParams[\"figure.figsize\"] = (10,5)\n\nimport os\nimport sys\nimport numpy as np\nsys.path.append('pytorch-dc-tts/')\nsys.path.append('pytorch-dc-tts/models')\nsys.path.append(\"tacotron_pytorch/\")\nsys.path.append(\"tacotron_pytorch/lib/tacotron\")\n\n# For the DC-TTS\nimport torch\nfrom text2mel import Text2Mel\nfrom ssrn import SSRN\nfrom audio import save_to_wav, spectrogram2wav\nfrom utils import get_last_checkpoint_file_name, load_checkpoint_test, save_to_png, load_checkpoint\nfrom datasets.emovdb import vocab, get_test_data\n\n# For the Tacotron\nfrom text import text_to_sequence, symbols\n# from util import audio\n\nfrom tacotron_pytorch import Tacotron\nfrom synthesis import tts as _tts\n\n# For Audio/Display purposes\nimport librosa.display\nimport IPython\nfrom IPython.display import Audio\nfrom IPython.display import display\nfrom google.colab import widgets\nfrom google.colab import output\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ntorch.set_grad_enabled(False)\ntext2mel = Text2Mel(vocab).eval()\n\nssrn = SSRN().eval()\nload_checkpoint('trained_models/ssrn.pth', ssrn, None)\n\nmodel = Tacotron(n_vocab=len(symbols),\n embedding_dim=256,\n mel_dim=80,\n linear_dim=1025,\n r=5,\n padding_idx=None,\n use_memory_mask=False,\n )\n\ndef visualize(alignment, spectrogram, Emotion):\n label_fontsize = 16\n tb = widgets.TabBar(['Alignment', 'Spectrogram'], location='top')\n with tb.output_to('Alignment'):\n imshow(alignment.T, aspect=\"auto\", origin=\"lower\", interpolation=None)\n xlabel(\"Decoder timestamp\", fontsize=label_fontsize)\n ylabel(\"Encoder timestamp\", fontsize=label_fontsize)\n with tb.output_to('Spectrogram'):\n if Emotion == 'Disgust' or Emotion == 'Amused' or Emotion == 'Sleepiness':\n librosa.display.specshow(spectrogram.T, sr=fs,hop_length=hop_length, x_axis=\"time\", y_axis=\"linear\")\n else:\n librosa.display.specshow(spectrogram, sr=fs,hop_length=hop_length, x_axis=\"time\", y_axis=\"linear\")\n\n xlabel(\"Time\", fontsize=label_fontsize)\n ylabel(\"Hz\", fontsize=label_fontsize)\n\ndef tts_dctts(text2mel, ssrn, text):\n sentences = [text]\n\n max_N = len(text)\n L = torch.from_numpy(get_test_data(sentences, max_N))\n zeros = torch.from_numpy(np.zeros((1, 80, 1), np.float32))\n Y = zeros\n A = None\n\n for t in range(210):\n _, Y_t, A = text2mel(L, Y, monotonic_attention=True)\n Y = torch.cat((zeros, Y_t), -1)\n _, attention = torch.max(A[0, :, -1], 0)\n attention = attention.item()\n if L[0, attention] == vocab.index('E'): # EOS\n break\n\n _, Z = ssrn(Y)\n Y = Y.cpu().detach().numpy()\n A = A.cpu().detach().numpy()\n Z = Z.cpu().detach().numpy()\n\n return spectrogram2wav(Z[0, :, :].T), A[0, :, :], Y[0, :, :]\n\n\ndef tts_tacotron(model, text):\n waveform, alignment, spectrogram = _tts(model, text)\n return waveform, alignment, spectrogram\n\ndef present(waveform, Emotion, figures=False):\n if figures!=False:\n visualize(figures[0], figures[1], Emotion)\n IPython.display.display(Audio(waveform, rate=fs))\n\n\nfs = 20000 #20000\nhop_length = 250\nmodel.decoder.max_decoder_steps = 200", "execution_count": null, "outputs": [{"output_type": "error", "ename": "AttributeError", "evalue": "ignored", "traceback": ["\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m<ipython-input-5-a39733aa05d0>\u001b[0m in \u001b[0;36m<cell line: 26>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 24\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 25\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mtacotron_pytorch\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mTacotron\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 26\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0msynthesis\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtts\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0m_tts\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 27\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 28\u001b[0m \u001b[0;31m# For Audio/Display purposes\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/content/tacotron_pytorch/synthesis.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 19\u001b[0m \u001b[0msys\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpath\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mappend\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtacotron_lib_dir\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mtext\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtext_to_sequence\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msymbols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 21\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0mutil\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0maudio\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 22\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mutil\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mplot\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mplot_alignment\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 23\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/content/tacotron_pytorch/lib/tacotron/util/audio.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtensorflow\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mtf\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mscipy\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0msignal\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 7\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0mhparams\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mhparams\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 8\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mscipy\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mio\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mwavfile\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 9\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/content/tacotron_pytorch/hparams.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;31m# Default hyperparameters:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 5\u001b[0;31m hparams = tf.contrib.training.HParams(\n\u001b[0m\u001b[1;32m 6\u001b[0m \u001b[0;31m# Comma-separated list of cleaners to run on text prior to training and eval. For non-English\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;31m# text, you may want to use \"basic_cleaners\" or \"transliteration_cleaners\" See TRAINING_DATA.md.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mAttributeError\u001b[0m: module 'tensorflow' has no attribute 'contrib'"]}]}, {"cell_type": "markdown", "metadata": {"id": "wiSAxs7XIUO7"}, "source": ["## Run the Demo\n", "\n", "- Select an ```Emotion``` from the dropdown and enter the ```Text``` that you want to be generated. \n", "- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```\n", "\n", "**Play the speech with the generated audio player and view the required plots by clicking on their respective tabs!**\n", "\n", "\n"]}, {"cell_type": "code", "metadata": {"cellView": "form", "id": "3jKM6GfzlgpS"}, "source": "#@title Select the emotion and type the text\n\n%pylab inline\n\nEmotion = \"Neutral\" #@param [\"Neutral\", \"Angry\", \"Disgust\", \"Sleepiness\", \"Amused\"]\nText = 'I am exhausted.' #@param {type:\"string\"}\n\nwav, align, mel = None, None, None\n\nif Emotion == \"Neutral\":\n load_checkpoint('trained_models/'+Emotion.lower()+'_dctts.pth', text2mel, None)\n wav, align, mel = tts_dctts(text2mel, ssrn, Text)\nelif Emotion == \"Angry\":\n load_checkpoint_test('trained_models/'+Emotion.lower()+'_dctts.pth', text2mel, None)\n wav, align, mel = tts_dctts(text2mel, ssrn, Text)\n # wav = wav.T\nelif Emotion == \"Disgust\" or Emotion == \"Amused\" or Emotion == \"Sleepiness\":\n checkpoint = torch.load('trained_models/'+Emotion.lower()+'_tacotron.pth', map_location=torch.device('cpu'))\n model.load_state_dict(checkpoint[\"state_dict\"])\n wav, align, mel = tts_tacotron(model, Text)\n\npresent(wav, Emotion, (align,mel))\n\n", "execution_count": null, "outputs": []}, {"cell_type": "code", "metadata": {"id": "AchHD0xLcaE1"}, "source": [], "execution_count": null, "outputs": []}]}