{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Get started with Gemini-TTS voices using Text-to-Speech\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Faudio%2Fspeech%2Fgetting-started%2Fget_started_with_gemini_tts_voices.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/audio/speech/getting-started/get_started_with_gemini_tts_voices.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| [Ahmet Kizilay](https://github.com/ahmetkizilay) |\n",
        "| [Gary Chien](https://github.com/goldenchest) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook introduces [Gemini-TTS](https://cloud.google.com/text-to-speech/docs/gemini-tts), the latest evolution of Google's Text-to-Speech technology, which moves beyond naturalness to offer granular control over generated audio through text-based prompts. With Gemini-TTS, you can synthesize speech from short snippets to long-form narratives, precisely dictating style, accent, pace, tone, and even emotional expression, all steerable through natural-language prompts. You can also create conversations between two speakers with the same expressiveness and steerability.\n",
        "\n",
        "There are currently 30 distinct voice options. See [all available voices](https://cloud.google.com/text-to-speech/docs/gemini-tts#voice_options).\n",
        "\n",
        "There are 80+ locale options to use for synthesis. See [all available locales](https://cloud.google.com/text-to-speech/docs/gemini-tts#language_availability).\n",
        "\n",
        "In this tutorial, you learn how to:\n",
        "\n",
        "- Synthesize speech using real-time (online) processing\n",
        "- Use formatting and expressive tags to modify the tone of the speech\n",
        "- Synthesize dialogues between two speakers\n",
        "- Synthesize speech using streaming processing"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install Text-to-Speech SDK and other required packages\n",
        "\n",
        "A minimum `google-cloud-texttospeech` version of 2.31.0 is required to use the Gemini-TTS related fields."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "e73_ZgKWYedz"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "# Detect the operating system\n",
        "os=$(uname -s)\n",
        "\n",
        "if [[ \"$os\" == \"Linux\" ]]; then\n",
        "  # Linux installation\n",
        "  sudo apt update -y -qq\n",
        "  sudo apt install ffmpeg -y -qq\n",
        "  echo \"ffmpeg installed successfully on Linux.\"\n",
        "elif [[ \"$os\" == \"Darwin\" ]]; then\n",
        "  # macOS installation\n",
        "  if command -v brew &> /dev/null; then\n",
        "    brew install ffmpeg\n",
        "    if [[ $? -eq 0 ]]; then\n",
        "        echo \"ffmpeg installed successfully on macOS using Homebrew.\"\n",
        "    else\n",
        "        echo \"Error installing ffmpeg on macOS using Homebrew.\"\n",
        "    fi\n",
        "  else\n",
        "    echo \"Homebrew is not installed. Please install Homebrew and try again.\"\n",
        "  fi\n",
        "else\n",
        "  echo \"Unsupported operating system: $os\"\n",
        "fi"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-cloud-texttospeech\n",
        "%pip show google-cloud-texttospeech"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information and initialize SDK\n",
        "\n",
        "To get started using the Text-to-Speech API, you must have an existing Google Cloud project and [enable the API](https://console.cloud.google.com/flows/enableapi?apiid=texttospeech.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).\n",
        "\n",
        "For regional availability, see [documentation](https://cloud.google.com/text-to-speech/docs/gemini-tts#regional_availability)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "WIQyBhAn_9tK"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "TTS_LOCATION = \"global\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "76915236b1c9"
      },
      "outputs": [],
      "source": [
        "! gcloud config set project {PROJECT_ID}\n",
        "! gcloud auth application-default set-quota-project {PROJECT_ID}\n",
        "! gcloud auth application-default login -q"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5303c05f7aa6"
      },
      "source": [
        "### Import libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "qqm0OQpAYCph"
      },
      "outputs": [],
      "source": [
        "from IPython.display import Audio, display\n",
        "from google.api_core.client_options import ClientOptions\n",
        "from google.cloud import texttospeech_v1beta1 as texttospeech"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sP8GBj3tBAC1"
      },
      "source": [
        "### Set constants\n",
        "\n",
        "Initialize the API endpoint and the Text-to-Speech client.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "rXTVeU1uBBqY"
      },
      "outputs": [],
      "source": [
        "API_ENDPOINT = (\n",
        "    f\"{TTS_LOCATION}-texttospeech.googleapis.com\"\n",
        "    if TTS_LOCATION != \"global\"\n",
        "    else \"texttospeech.googleapis.com\"\n",
        ")\n",
        "\n",
        "client = texttospeech.TextToSpeechClient(\n",
        "    client_options=ClientOptions(api_endpoint=API_ENDPOINT)\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VPVDNRyVxquo"
      },
      "source": [
        "## Synthesize using Gemini-TTS voices\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a9aa2ab365ac"
      },
      "source": [
        "### Synthesize speech using real-time (online) processing\n",
        "\n",
        "You define the text you want to convert, select a specific voice and language, and then instruct the API to generate audio of the spoken text.\n",
        "\n",
        "This example uses `Aoede`, a high-definition voice offering improved clarity. Feel free to choose another voice from the `VOICE` drop-down menu.\n",
        "\n",
        "The code calls the `synthesize_speech` method, which handles the core conversion and returns the MP3 audio as `bytes`.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "_7XwbSxUD4eW"
      },
      "outputs": [],
      "source": [
        "MODEL = \"gemini-2.5-flash-tts\"  # @param [\"gemini-2.5-flash-tts\", \"gemini-2.5-pro-tts\"]\n",
        "\n",
        "# fmt: off\n",
        "VOICE = \"Aoede\"  # @param [\"Achernar\", \"Achird\", \"Algenib\", \"Algieba\", \"Alnilam\", \"Aoede\", \"Autonoe\", \"Callirrhoe\", \"Charon\", \"Despina\", \"Enceladus\", \"Erinome\", \"Fenrir\", \"Gacrux\", \"Iapetus\", \"Kore\", \"Laomedeia\", \"Leda\", \"Orus\", \"Puck\", \"Pulcherrima\", \"Rasalgethi\", \"Sadachbia\", \"Sadaltager\", \"Schedar\", \"Sulafat\", \"Umbriel\", \"Vindemiatrix\", \"Zephyr\", \"Zubenelgenubi\"]\n",
        "\n",
        "LANGUAGE_CODE = \"en-us\"  # @param [\"am-et\", \"ar-001\", \"ar-eg\",  \"az-az\",  \"be-by\",  \"bg-bg\", \"bn-bd\", \"ca-es\", \"ceb-ph\", \"cs-cz\",  \"da-dk\",  \"de-de\",  \"el-gr\", \"en-au\", \"en-gb\", \"en-in\",  \"en-us\",  \"es-es\",  \"es-419\", \"es-mx\", \"es-us\", \"et-ee\", \"eu-es\",  \"fa-ir\",  \"fi-fi\",  \"fil-ph\", \"fr-fr\", \"fr-ca\", \"gl-es\", \"gu-in\",  \"hi-in\",  \"hr-hr\",  \"ht-ht\",  \"hu-hu\", \"af-za\", \"hy-am\", \"id-id\",  \"is-is\",  \"it-it\",  \"he-il\",  \"ja-jp\", \"jv-jv\", \"ka-ge\", \"kn-in\",  \"ko-kr\",  \"kok-in\", \"la-va\",  \"lb-lu\", \"lo-la\", \"lt-lt\", \"lv-lv\",  \"mai-in\", \"mg-mg\",  \"mk-mk\",  \"ml-in\", \"mn-mn\", \"mr-in\", \"ms-my\",  \"my-mm\",  \"nb-no\",  \"ne-np\",  \"nl-nl\", \"nn-no\", \"or-in\", \"pa-in\",  \"pl-pl\",  \"ps-af\",  \"pt-br\",  \"pt-pt\", \"ro-ro\", \"ru-ru\", \"sd-in\",  \"si-lk\",  \"sk-sk\",  \"sl-si\",  \"sq-al\", \"sr-rs\", \"sv-se\", \"sw-ke\",  \"ta-in\",  \"te-in\",  \"th-th\",  \"tr-tr\", \"uk-ua\", \"ur-pk\", \"vi-vn\",  \"cmn-cn\", \"cmn-tw\"]\n",
        "# fmt: on\n",
        "\n",
        "\n",
        "voice = texttospeech.VoiceSelectionParams(\n",
        "    name=VOICE, language_code=LANGUAGE_CODE, model_name=MODEL\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "98d104dc4d27"
      },
      "outputs": [],
      "source": [
        "# @title Capture emotion with prompts\n",
        "\n",
        "# fmt: off\n",
        "PROMPT = \"You are having a conversation with a friend. Say the following in a happy and casual way\"  # @param {type: \"string\"}\n",
        "# fmt: on\n",
        "TEXT = \"hahaha, i did NOT expect that. can you believe it!\"  # @param {type: \"string\"}\n",
        "\n",
        "# Perform the text-to-speech request on the text input with the selected\n",
        "# voice parameters and audio file type\n",
        "response = client.synthesize_speech(\n",
        "    input=texttospeech.SynthesisInput(text=TEXT, prompt=PROMPT),\n",
        "    voice=voice,\n",
        "    # Select the type of audio file you want returned\n",
        "    audio_config=texttospeech.AudioConfig(\n",
        "        audio_encoding=texttospeech.AudioEncoding.MP3\n",
        "    ),\n",
        ")\n",
        "\n",
        "# play the generated audio\n",
        "display(Audio(response.audio_content))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rHJaUJpRskdz"
      },
      "outputs": [],
      "source": [
        "# @title Modify pace of the speech\n",
        "\n",
        "# fmt: off\n",
        "PROMPT = \"Say the following very fast but still be intelligible\"  # @param {type: \"string\"}\n",
        "TEXT = \"Availability and terms may vary. Check our website or your local store for complete details and restrictions.\"  # @param {type: \"string\"}\n",
        "# fmt: on\n",
        "\n",
        "# Perform the text-to-speech request on the text input with the selected\n",
        "# voice parameters and audio file type\n",
        "response = client.synthesize_speech(\n",
        "    input=texttospeech.SynthesisInput(text=TEXT, prompt=PROMPT),\n",
        "    voice=voice,\n",
        "    # Select the type of audio file you want returned\n",
        "    audio_config=texttospeech.AudioConfig(\n",
        "        audio_encoding=texttospeech.AudioEncoding.MP3\n",
        "    ),\n",
        ")\n",
        "# play the generated audio\n",
        "display(Audio(response.audio_content))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "e6GpFv9RR05P"
      },
      "outputs": [],
      "source": [
        "# @title Modify text with expressive tags\n",
        "\n",
        "# NOTE: These tags are not strict syntax. Feel free to experiment with different\n",
        "# expressions and formats.\n",
        "\n",
        "PROMPT = \"Say the following with a sarcastic tone\"  # @param {type: \"string\"}\n",
        "# fmt: off\n",
        "TEXT = \"So.. [chuckling] tell me about this [coughs] AI thing.\"  # @param {type: \"string\"}\n",
        "# fmt: on\n",
        "\n",
        "# Perform the text-to-speech request on the text input with the selected\n",
        "# voice parameters and audio file type\n",
        "response = client.synthesize_speech(\n",
        "    input=texttospeech.SynthesisInput(text=TEXT, prompt=PROMPT),\n",
        "    voice=voice,\n",
        "    # Select the type of audio file you want returned\n",
        "    audio_config=texttospeech.AudioConfig(\n",
        "        audio_encoding=texttospeech.AudioEncoding.MP3\n",
        "    ),\n",
        ")\n",
        "# play the generated audio\n",
        "display(Audio(response.audio_content))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W90EPdax-H0D"
      },
      "source": [
        "## Multi-speaker (Dialog) Speech Synthesis\n",
        "\n",
        "You can create a dialog between two speakers. Using `multi_speaker_voice_config`, you can specify the speakers and assign each a custom speaker alias to reference in the input text.\n",
        "\n",
        "There are two ways to structure the multi-speaker input:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bKN3rZXV9_k5"
      },
      "outputs": [],
      "source": [
        "# @title Explicit turn-based syntax\n",
        "\n",
        "SPEAKER_ALIAS_1 = \"Zizu\"  # @param {type: \"string\"}\n",
        "# fmt: off\n",
        "SPEAKER_1 = \"Fenrir\"  # @param [\"Achernar\", \"Achird\", \"Algenib\", \"Algieba\", \"Alnilam\", \"Aoede\", \"Autonoe\", \"Callirrhoe\", \"Charon\", \"Despina\", \"Enceladus\", \"Erinome\", \"Fenrir\", \"Gacrux\", \"Iapetus\", \"Kore\", \"Laomedeia\", \"Leda\", \"Orus\", \"Puck\", \"Pulcherrima\", \"Rasalgethi\", \"Sadachbia\", \"Sadaltager\", \"Schedar\", \"Sulafat\", \"Umbriel\", \"Vindemiatrix\", \"Zephyr\", \"Zubenelgenubi\"]\n",
        "\n",
        "SPEAKER_ALIAS_2 = \"Gary\"  # @param {type: \"string\"}\n",
        "SPEAKER_2 = \"Orus\"  # @param [\"Achernar\", \"Achird\", \"Algenib\", \"Algieba\", \"Alnilam\", \"Aoede\", \"Autonoe\", \"Callirrhoe\", \"Charon\", \"Despina\", \"Enceladus\", \"Erinome\", \"Fenrir\", \"Gacrux\", \"Iapetus\", \"Kore\", \"Laomedeia\", \"Leda\", \"Orus\", \"Puck\", \"Pulcherrima\", \"Rasalgethi\", \"Sadachbia\", \"Sadaltager\", \"Schedar\", \"Sulafat\", \"Umbriel\", \"Vindemiatrix\", \"Zephyr\", \"Zubenelgenubi\"]\n",
        "\n",
        "LANGUAGE_CODE = \"en-gb\"  # @param [\"am-et\", \"ar-001\", \"ar-eg\",  \"az-az\",  \"be-by\",  \"bg-bg\", \"bn-bd\", \"ca-es\", \"ceb-ph\", \"cs-cz\",  \"da-dk\",  \"de-de\",  \"el-gr\", \"en-au\", \"en-gb\", \"en-in\",  \"en-us\",  \"es-es\",  \"es-419\", \"es-mx\", \"es-us\", \"et-ee\", \"eu-es\",  \"fa-ir\",  \"fi-fi\",  \"fil-ph\", \"fr-fr\", \"fr-ca\", \"gl-es\", \"gu-in\",  \"hi-in\",  \"hr-hr\",  \"ht-ht\",  \"hu-hu\", \"af-za\", \"hy-am\", \"id-id\",  \"is-is\",  \"it-it\",  \"he-il\",  \"ja-jp\", \"jv-jv\", \"ka-ge\", \"kn-in\",  \"ko-kr\",  \"kok-in\", \"la-va\",  \"lb-lu\", \"lo-la\", \"lt-lt\", \"lv-lv\",  \"mai-in\", \"mg-mg\",  \"mk-mk\",  \"ml-in\", \"mn-mn\", \"mr-in\", \"ms-my\",  \"my-mm\",  \"nb-no\",  \"ne-np\",  \"nl-nl\", \"nn-no\", \"or-in\", \"pa-in\",  \"pl-pl\",  \"ps-af\",  \"pt-br\",  \"pt-pt\", \"ro-ro\", \"ru-ru\", \"sd-in\",  \"si-lk\",  \"sk-sk\",  \"sl-si\",  \"sq-al\", \"sr-rs\", \"sv-se\", \"sw-ke\",  \"ta-in\",  \"te-in\",  \"th-th\",  \"tr-tr\", \"uk-ua\", \"ur-pk\", \"vi-vn\",  \"cmn-cn\", \"cmn-tw\"]\n",
        "# fmt: on\n",
        "\n",
        "PROMPT = \"Read the following dialogue between two friends\"  # @param {type: \"string\"}\n",
        "\n",
        "multi_speaker_voice_config = texttospeech.MultiSpeakerVoiceConfig(\n",
        "    speaker_voice_configs=[\n",
        "        texttospeech.MultispeakerPrebuiltVoice(\n",
        "            speaker_alias=SPEAKER_ALIAS_1, speaker_id=SPEAKER_1\n",
        "        ),\n",
        "        texttospeech.MultispeakerPrebuiltVoice(\n",
        "            speaker_alias=SPEAKER_ALIAS_2, speaker_id=SPEAKER_2\n",
        "        ),\n",
        "    ]\n",
        ")\n",
        "\n",
        "multi_speaker_markup = texttospeech.MultiSpeakerMarkup(\n",
        "    turns=[\n",
        "        texttospeech.MultiSpeakerMarkup.Turn(\n",
        "            speaker=SPEAKER_ALIAS_1,\n",
        "            text=\"Have you tried the new multi-speaker feature on Gemini?\",\n",
        "        ),\n",
        "        texttospeech.MultiSpeakerMarkup.Turn(\n",
        "            speaker=SPEAKER_ALIAS_2, text=\"Yes! I am super excited about it\"\n",
        "        ),\n",
        "    ]\n",
        ")\n",
        "response = client.synthesize_speech(\n",
        "    input=texttospeech.SynthesisInput(\n",
        "        multi_speaker_markup=multi_speaker_markup, prompt=PROMPT\n",
        "    ),\n",
        "    voice=texttospeech.VoiceSelectionParams(\n",
        "        language_code=LANGUAGE_CODE,\n",
        "        model_name=MODEL,\n",
        "        multi_speaker_voice_config=multi_speaker_voice_config,\n",
        "    ),\n",
        "    audio_config=texttospeech.AudioConfig(\n",
        "        audio_encoding=texttospeech.AudioEncoding.LINEAR16\n",
        "    ),\n",
        ")\n",
        "# play the generated audio\n",
        "display(Audio(response.audio_content))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VqyvqrxK-_rX"
      },
      "outputs": [],
      "source": [
        "# @title Inline dialog text input\n",
        "\n",
        "multi_speaker_voice_config = texttospeech.MultiSpeakerVoiceConfig(\n",
        "    speaker_voice_configs=[\n",
        "        texttospeech.MultispeakerPrebuiltVoice(\n",
        "            speaker_alias=SPEAKER_ALIAS_1,\n",
        "            speaker_id=SPEAKER_1,\n",
        "        ),\n",
        "        texttospeech.MultispeakerPrebuiltVoice(\n",
        "            speaker_alias=SPEAKER_ALIAS_2,\n",
        "            speaker_id=SPEAKER_2,\n",
        "        ),\n",
        "    ]\n",
        ")\n",
        "response = client.synthesize_speech(\n",
        "    input=texttospeech.SynthesisInput(\n",
        "        text=\"Zizu: Have you tried the new multi-speaker feature on Gemini?\\nGary: Yes! I am super excited about it\",\n",
        "        prompt=PROMPT,\n",
        "    ),\n",
        "    voice=texttospeech.VoiceSelectionParams(\n",
        "        language_code=LANGUAGE_CODE,\n",
        "        model_name=MODEL,\n",
        "        multi_speaker_voice_config=multi_speaker_voice_config,\n",
        "    ),\n",
        "    audio_config=texttospeech.AudioConfig(\n",
        "        audio_encoding=texttospeech.AudioEncoding.LINEAR16\n",
        "    ),\n",
        ")\n",
        "# play the generated audio\n",
        "display(Audio(response.audio_content))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SXuEKjCvljqm"
      },
      "source": [
        "### Synthesize speech using streaming processing\n",
        "\n",
        "You can use the `streaming_synthesize` method to get audio streamed back as soon as it is ready. This method is better suited to real-time scenarios, where a fast response time is important for a good user experience.\n",
        "\n",
        "A function like `request_generator` below can be used to stream text into the API, for example from an LLM that generates text in response to a user action.\n",
        "\n",
        "The audio stream starts after the client stops sending text input, as indicated by a half-close message. In the example below, the completion of the `request_generator` generator implies the half-close operation.\n",
        "\n",
        "In real-time applications, streaming responses are meant to be played as soon as they arrive from the TTS server. For example, in a web server scenario where the client is connected to your server via WebSockets, you could use `emit(\"audio\", response.audio_content)` to forward each audio chunk to the client immediately."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "m_tEMNPmlcYJ"
      },
      "outputs": [],
      "source": [
        "# @title Calling Streaming synthesize\n",
        "\n",
        "import datetime\n",
        "\n",
        "import numpy as np\n",
        "\n",
        "# fmt: off\n",
        "PROMPT = \"Say the following with a respectful tone\"  # @param {type: \"string\"} # fmt: skip\n",
        "TEXT = \"So.. tell me about this [coughs] AI thing. I would be super interested in learning the fundamentals and jump into the world of vibe coding\"  # @param {type: \"string\"} # fmt: skip\n",
        "# fmt: on\n",
        "\n",
        "\n",
        "config_request = texttospeech.StreamingSynthesizeRequest(\n",
        "    streaming_config=texttospeech.StreamingSynthesizeConfig(\n",
        "        voice=texttospeech.VoiceSelectionParams(\n",
        "            name=VOICE, language_code=LANGUAGE_CODE, model_name=MODEL\n",
        "        )\n",
        "    )\n",
        ")\n",
        "\n",
        "\n",
        "def request_generator():\n",
        "    yield config_request\n",
        "\n",
        "    yield texttospeech.StreamingSynthesizeRequest(\n",
        "        input=texttospeech.StreamingSynthesisInput(text=TEXT, prompt=PROMPT)\n",
        "    )\n",
        "\n",
        "\n",
        "request_start_time = datetime.datetime.now()\n",
        "streaming_responses = client.streaming_synthesize(request_generator())\n",
        "\n",
        "is_first_chunk_received = False\n",
        "final_audio_data = np.array([])\n",
        "num_chunks_received = 0\n",
        "for response in streaming_responses:\n",
        "    # just a simple progress indicator\n",
        "    num_chunks_received += 1\n",
        "    print(\".\", end=\"\")\n",
        "    if num_chunks_received % 40 == 0:\n",
        "        print(\"\")\n",
        "\n",
        "    # measuring time to first audio\n",
        "    if not is_first_chunk_received:\n",
        "        is_first_chunk_received = True\n",
        "        first_chunk_received_time = datetime.datetime.now()\n",
        "\n",
        "    # accumulating audio. In a web-server scenario, you would want to \"emit\" audio\n",
        "    # to the frontend as soon as it arrives.\n",
        "    #\n",
        "    # For example using flask socketio, you could do the following\n",
        "    # from flask_socketio import SocketIO, emit\n",
        "    # emit(\"audio\", response.audio_content)\n",
        "    # socketio.sleep(0)\n",
        "    audio_data = np.frombuffer(response.audio_content, dtype=np.int16)\n",
        "    final_audio_data = np.concatenate((final_audio_data, audio_data))\n",
        "\n",
        "time_to_first_audio = first_chunk_received_time - request_start_time\n",
        "time_to_completion = datetime.datetime.now() - request_start_time\n",
        "audio_duration = len(final_audio_data) / 24_000  # default sampling rate.\n",
        "\n",
        "print(\"\\n\")\n",
        "print(f\"Time to first audio: {time_to_first_audio.total_seconds()} seconds\")\n",
        "print(f\"Time to completion: {time_to_completion.total_seconds()} seconds\")\n",
        "print(f\"Audio duration: {audio_duration} seconds\")\n",
        "\n",
        "display(Audio(final_audio_data, rate=24_000, autoplay=False))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LU1a9_Y1VKe3"
      },
      "source": [
        "## Further details\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "N0PqTEalVc4B"
      },
      "source": [
        "Review the [Cloud Text-to-Speech Python SDK documentation](https://cloud.google.com/python/docs/reference/texttospeech/latest) to explore all available fields and options for customizing the API behavior.\n",
        "\n",
        "To learn more about the Gemini-TTS offering on Vertex AI, check out the [Gemini-TTS guide](https://cloud.google.com/text-to-speech/docs/gemini-tts)."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "get_started_with_gemini_tts_voices.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
