{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "oXnEutuDQa9c"
   },
   "outputs": [],
   "source": [
    "# Copyright 2025 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JAPoU8Sm5E6e"
   },
   "source": [
    "# Getting Started with the Live API Native Audio\n",
    "\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fmultimodal-live-api%2Fintro_live_api_native_audio.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\">\n",
    "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
    "    </a>\n",
    "  </td>\n",
    "</table>\n",
    "\n",
    "<div style=\"clear: both;\"></div>\n",
    "\n",
    "<p>\n",
    "<b>Share to:</b>\n",
    "\n",
    "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_live_api_native_audio.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
    "</a>\n",
    "</p>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "84f0f73a0f76"
   },
   "source": [
    "| Authors |\n",
    "| --- |\n",
    "| [Eric Dong](https://github.com/gericdong) |\n",
    "| [Holt Skinner](https://github.com/holtskinner) |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tvgnzT1CKxrO"
   },
   "source": [
    "## Overview\n",
    "\n",
    "This notebook demonstrates how to connect to the Gemini Live API using the Google Gen AI SDK for Python, focusing on **Native Audio** features like **Proactive Audio** and **Affective Dialog**.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gPiTOAHURvTM"
   },
   "source": [
    "## Getting Started"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "CHRZUpfWSEpp"
   },
   "source": [
    "### Install Google Gen AI SDK for Python\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "sG3_LKsWSD3A"
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet google-genai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HlMVjiAWSMNX"
   },
   "source": [
    "### Authenticate your notebook environment (Colab only)\n",
    "\n",
    "If you are running this notebook on Google Colab, run the cell below to authenticate your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "12fnq4V0SNV3"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "if \"google.colab\" in sys.modules:\n",
    "    from google.colab import auth\n",
    "\n",
    "    auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0Ef0zVX-X9Bg"
   },
   "source": [
    "### Import libraries\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "xBCH3hnAX9Bh"
   },
   "outputs": [],
   "source": [
    "from typing import Any, Dict, List, Optional\n",
    "\n",
    "from IPython.display import Audio, Markdown, display\n",
    "from google.genai.types import (\n",
    "    AudioTranscriptionConfig,\n",
    "    Content,\n",
    "    LiveConnectConfig,\n",
    "    Part,\n",
    "    ProactivityConfig,\n",
    ")\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LymmEN6GSTn-"
   },
   "source": [
    "### Set Google Cloud project information and create client\n",
    "\n",
    "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
    "\n",
    "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Nqwi-5ufWp_B"
   },
   "outputs": [],
   "source": [
    "# Use the environment variable if the user doesn't provide Project ID.\n",
    "import os\n",
    "\n",
    "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
    "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
     "    PROJECT_ID = os.environ.get(\"GOOGLE_CLOUD_PROJECT\", \"\")\n",
    "\n",
    "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
    "\n",
    "from google import genai\n",
    "\n",
    "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ff2e5eefc586"
   },
   "source": [
     "## Using Gemini 2.5 Flash Native Audio\n",
     "\n",
     "Gemini 2.5 Flash on the Live API supports native audio dialog capabilities.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "86b26e8aa6ad"
   },
   "outputs": [],
   "source": [
    "MODEL_ID = \"gemini-live-2.5-flash-preview-native-audio-09-2025\"  # @param {type: \"string\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0zJJHEJnNN4C"
   },
   "source": [
    "## Reusable Live API Modules\n",
    "\n",
     "The following functions manage the session configuration, handle a single conversational turn, and run a multi-turn session.\n",
     "\n",
     "### `configure_session`\n",
     "\n",
     "This function creates a flexible `LiveConnectConfig` object that sets an optional system instruction and enables or disables transcription, proactive audio, and affective dialog."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "TrwmaJnEVOaz"
   },
   "outputs": [],
   "source": [
    "def configure_session(\n",
    "    system_instruction: Optional[str] = None,\n",
    "    enable_transcription: bool = True,\n",
    "    enable_proactivity: bool = False,\n",
    "    enable_affective_dialog: bool = False,\n",
    ") -> LiveConnectConfig:\n",
    "    \"\"\"\n",
    "    Creates a configuration object for the Live Connect session.\n",
    "    \"\"\"\n",
    "    input_transcription = AudioTranscriptionConfig() if enable_transcription else None\n",
    "    output_transcription = AudioTranscriptionConfig() if enable_transcription else None\n",
    "    # NOTE: Proactive Audio requires proactive_audio=True in ProactivityConfig\n",
    "    proactivity = (\n",
    "        ProactivityConfig(proactive_audio=True) if enable_proactivity else None\n",
    "    )\n",
    "\n",
    "    config = LiveConnectConfig(\n",
    "        response_modalities=[\"AUDIO\"],\n",
    "        system_instruction=system_instruction,\n",
    "        input_audio_transcription=input_transcription,\n",
    "        output_audio_transcription=output_transcription,\n",
    "        proactivity=proactivity,\n",
    "        enable_affective_dialog=enable_affective_dialog,\n",
    "    )\n",
    "\n",
    "    return config"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "n2Y1k4RlAvsC"
   },
   "source": [
    "### `send_and_receive_turn`\n",
    "\n",
    "This asynchronous function manages a single user turn: it sends the text, streams the audio and transcription messages back from the model, and displays the results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "CWqqbDY97tEz"
   },
   "outputs": [],
   "source": [
    "async def send_and_receive_turn(\n",
    "    session: genai.live.AsyncSession, text_input: str\n",
    ") -> Dict[str, Any]:\n",
    "    \"\"\"\n",
    "    Sends a single text turn to the Live Connect session and processes the streaming response.\n",
    "    \"\"\"\n",
    "    display(Markdown(\"\\n---\"))\n",
    "    display(Markdown(f\"**Input:** {text_input}\"))\n",
    "\n",
    "    # 1. Send the user's content\n",
    "    await session.send_client_content(\n",
    "        turns=Content(role=\"user\", parts=[Part(text=text_input)])\n",
    "    )\n",
    "\n",
    "    audio_data = []\n",
    "    input_transcriptions = []\n",
    "    output_transcriptions = []\n",
    "\n",
    "    # 2. Process the streaming response messages\n",
     "    async for message in session.receive():\n",
     "        # Skip messages that carry no server content (e.g. setup acknowledgements)\n",
     "        if message.server_content is None:\n",
     "            continue\n",
     "\n",
     "        # Collect input transcription (what the model heard the user say)\n",
    "        if (\n",
    "            message.server_content.input_transcription\n",
    "            and message.server_content.input_transcription.text\n",
    "        ):\n",
    "            input_transcriptions.append(message.server_content.input_transcription.text)\n",
    "\n",
    "        # Collect output transcription (the model's spoken response text)\n",
    "        if (\n",
    "            message.server_content.output_transcription\n",
    "            and message.server_content.output_transcription.text\n",
    "        ):\n",
    "            output_transcriptions.append(\n",
    "                message.server_content.output_transcription.text\n",
    "            )\n",
    "\n",
    "        # Collect audio data (the model's spoken response audio chunks)\n",
    "        if (\n",
    "            message.server_content.model_turn\n",
    "            and message.server_content.model_turn.parts\n",
    "        ):\n",
    "            for part in message.server_content.model_turn.parts:\n",
    "                if part.inline_data:\n",
     "                    # Audio output is 16-bit PCM (np.int16) at a 24000 Hz sample rate\n",
    "                    audio_data.append(\n",
    "                        np.frombuffer(part.inline_data.data, dtype=np.int16)\n",
    "                    )\n",
    "\n",
    "    # 3. Display the results\n",
    "    results = {\n",
    "        \"audio_data\": audio_data,\n",
    "        \"input_transcription\": \"\".join(input_transcriptions),\n",
    "        \"output_transcription\": \"\".join(output_transcriptions),\n",
    "    }\n",
    "\n",
    "    if results[\"input_transcription\"]:\n",
    "        display(Markdown(f\"**Input transcription >** {results['input_transcription']}\"))\n",
    "\n",
    "    if results[\"audio_data\"]:\n",
    "        # Concatenate all audio chunks into one array\n",
    "        full_audio = np.concatenate(results[\"audio_data\"])\n",
    "        display(\n",
    "            Audio(full_audio, rate=24000, autoplay=True)\n",
    "        )  # NOTE: 24000 is the required rate\n",
     "    else:\n",
     "        # Triggered on turns where the model stays silent (e.g. Proactive Audio decides not to respond)\n",
     "        display(\n",
     "            Markdown(\n",
     "                \"**Model Response:** *No audio response received (the model chose to stay silent).*\"\n",
     "            )\n",
     "        )\n",
    "\n",
    "    if results[\"output_transcription\"]:\n",
    "        display(\n",
    "            Markdown(f\"**Output transcription >** {results['output_transcription']}\")\n",
    "        )\n",
    "\n",
    "    return results"
   ]
  },
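  {
   "cell_type": "markdown",
   "metadata": {
    "id": "a3f1c2d4e5b6"
   },
   "source": [
    "Optionally, you can persist a turn's audio for later playback. The helper below is a minimal sketch (it is not part of the Gen AI SDK): it assumes the chunks collected in `results[\"audio_data\"]` are 16-bit PCM at 24 kHz, as produced above, and writes them to a mono WAV file using the standard library `wave` module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "b4e2d3c5f6a7"
   },
   "outputs": [],
   "source": [
    "import wave\n",
    "\n",
    "\n",
    "def save_audio_chunks_to_wav(\n",
    "    chunks: List[np.ndarray], path: str, rate: int = 24000\n",
    ") -> str:\n",
    "    \"\"\"Concatenates int16 PCM chunks and writes them to a mono WAV file.\"\"\"\n",
    "    pcm = np.concatenate(chunks).astype(np.int16)\n",
    "    with wave.open(path, \"wb\") as wf:\n",
    "        wf.setnchannels(1)  # mono\n",
    "        wf.setsampwidth(2)  # 16-bit samples\n",
    "        wf.setframerate(rate)  # Live API native audio output rate\n",
    "        wf.writeframes(pcm.tobytes())\n",
    "    return path\n",
    "\n",
    "\n",
    "# Example (hypothetical file name):\n",
    "# save_audio_chunks_to_wav(results[0][\"audio_data\"], \"turn_1.wav\")"
   ]
  },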
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "jFA_gCdiBw3u"
   },
   "source": [
    "### `run_live_session`\n",
    "\n",
    "This function manages the full conversational context, establishing the connection and running a series of defined `turns`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "28Jh1_xTBHfm"
   },
   "outputs": [],
   "source": [
    "async def run_live_session(\n",
    "    model_id: str,\n",
    "    config: LiveConnectConfig,\n",
    "    turns: List[str],\n",
    "):\n",
    "    \"\"\"\n",
    "    Establishes the Live Connect session and runs a series of conversational turns.\n",
    "    \"\"\"\n",
    "    display(Markdown(\"## Starting Live Connect Session...\"))\n",
     "    if config.system_instruction:\n",
     "        display(\n",
     "            Markdown(f\"**System Instruction:** *{config.system_instruction}*\")\n",
     "        )\n",
    "\n",
    "    try:\n",
    "        # Use an asynchronous context manager to establish and manage the session lifecycle\n",
    "        async with client.aio.live.connect(\n",
    "            model=model_id,\n",
    "            config=config,\n",
    "        ) as session:\n",
    "            display(\n",
    "                Markdown(f\"**Status:** Session established with model: `{model_id}`\")\n",
    "            )\n",
    "\n",
    "            all_results = []\n",
    "            for turn in turns:\n",
    "                # Send each user input sequentially\n",
    "                result = await send_and_receive_turn(session, turn)\n",
    "                all_results.append(result)\n",
    "\n",
    "            display(Markdown(\"\\n---\"))\n",
    "            display(Markdown(\"**Status:** All turns complete. Session closed.\"))\n",
    "            return all_results\n",
    "    except Exception as e:\n",
    "        display(Markdown(f\"**Error:** Failed to connect or run session: {e}\"))\n",
    "        return []"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "48b169cdce9c"
   },
   "source": [
    "## Scenario 1: Proactive Audio (Chime-in Behavior)\n",
    "\n",
    "This example uses a **System Instruction** and **Proactive Audio** to test the model's ability to remain silent when the topic is off-subject (French cuisine) and chime in only when the conversation shifts to the instructed topic (Italian cooking).\n",
    "\n",
    "### Conversation Setup and Execution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "cdcuYAaJAu5H"
   },
   "outputs": [],
   "source": [
    "session_config = configure_session(\n",
     "    system_instruction=\"You are an AI assistant specializing in Italian cooking. Chime in only when the topic is Italian cooking.\",\n",
    "    enable_proactivity=True,\n",
    ")\n",
    "\n",
    "conversation_turns = [\n",
    "    # Speaker A speaks, general topic, the model should be silent.\n",
    "    \"Hey, I was just thinking about my dinner plans. I really love cooking.\",\n",
    "    # Speaker B speaks, off-topic (French cuisine). The model should be silent.\n",
    "    \"Oh yes, me too. I love French cuisine, especially making a good coq au vin. I think I'll make that tonight.\",\n",
    "    # Speaker A speaks, shifts to Italian topic. The model should chime in.\n",
    "    \"Hmm, that sounds complicated. I prefer Italian food. Say, do you know how to make a simple Margherita pizza recipe?\",\n",
    "]\n",
    "\n",
    "results = await run_live_session(MODEL_ID, session_config, conversation_turns)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "645d33adaabd"
   },
   "source": [
    "## Scenario 2: Affective Dialog (Empathy)\n",
    "\n",
    "This scenario enables **Affective Dialog** (`enable_affective_dialog=True`) and uses a system instruction to create a senior technical advisor persona. The user's input is phrased to convey **frustration**, prompting an empathetic and helpful response from the model.\n",
    "\n",
    "### Configuration and Execution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "LHT8O-oNy8aQ"
   },
   "outputs": [],
   "source": [
    "affective_config = configure_session(\n",
    "    enable_transcription=False,\n",
    "    enable_proactivity=False,\n",
    "    enable_affective_dialog=True,\n",
    "    system_instruction=\"You are a senior technical advisor for a complex AI project.\",\n",
    ")\n",
    "\n",
    "affective_dialog_turns = [\n",
     "    \"I have been staring at this API documentation for two hours now! It's so confusing and I can't even find where to start the streaming request. I'm completely stuck!\",\n",
    "    # A follow-up turn to see if the model maintains the helpful persona\n",
    "    \"Okay, thanks. I'm using Python. What is the single most important parameter I need to set up for a successful streaming connection?\",\n",
    "]\n",
    "\n",
    "results = await run_live_session(MODEL_ID, affective_config, affective_dialog_turns)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "usjiqTDXfk_6"
   },
   "source": [
    "## What's next\n",
    "\n",
    "- See the [Live API reference docs](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/multimodal-live).\n",
    "- Explore other notebooks in the [Google Cloud Generative AI GitHub repository](https://github.com/GoogleCloudPlatform/generative-ai)."
   ]
  }
 ],
 "metadata": {
  "colab": {
   "name": "intro_live_api_native_audio.ipynb",
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
