{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oXnEutuDQa9c"
      },
      "outputs": [],
      "source": [
        "# Copyright 2024 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Getting Started with Gemini Live API using WebSocket\n",
        "\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fmultimodal-live-api%2Fintro_multimodal_live_api.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| | |\n",
        "|-|-|\n",
        "| Author(s) |  [Eric Dong](https://github.com/gericdong), [Holt Skinner](https://github.com/holtskinner) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "The [Gemini Live API](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/live-api) enables low-latency bidirectional voice and video interactions with Gemini. The Live API can process text, audio, and video input, and it can provide text and audio output.\n",
        "\n",
        "This tutorial demonstrates how to get started with the Live API in Vertex AI using [WebSocket](https://en.wikipedia.org/wiki/WebSocket): a low-level approach in which you establish a standard WebSocket session and manage the raw JSON payloads yourself."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gPiTOAHURvTM"
      },
      "source": [
        "# Getting Started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CHRZUpfWSEpp"
      },
      "source": [
        "### Install libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sG3_LKsWSD3A"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet websockets"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0Ef0zVX-X9Bg"
      },
      "source": [
        "### Import libraries\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QNxC25Pg4Hfr"
      },
      "outputs": [],
      "source": [
        "import asyncio\n",
        "import base64\n",
        "import json\n",
        "import os\n",
        "import sys\n",
        "import wave\n",
        "from typing import Any\n",
        "\n",
        "import cv2\n",
        "import numpy as np\n",
        "import websockets\n",
        "from IPython.display import Audio, Markdown, Video, display"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HlMVjiAWSMNX"
      },
      "source": [
        "### Authenticate your notebook environment\n",
        "\n",
        "If you are running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "12fnq4V0SNV3"
      },
      "outputs": [],
      "source": [
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "41oBMp0YraPr"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "AR6vYdRTsSfv"
      },
      "outputs": [],
      "source": [
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = \"us-central1\"  # @param {type: \"string\", placeholder: \"global\"}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5M7EKckIYVFy"
      },
      "source": [
        "### Choose a Gemini model\n",
        "\n",
        "Select the appropriate model based on your interaction requirements. See [Live API Supported Models](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/live-api#supported_models)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-coEslfWPrxo"
      },
      "outputs": [],
      "source": [
        "MODEL_ID = (\n",
        "    \"gemini-live-2.5-flash-preview-native-audio-09-2025\"  # @param {type: \"string\"}\n",
        ")\n",
        "\n",
        "model = (\n",
        "    f\"projects/{PROJECT_ID}/locations/{LOCATION}/publishers/google/models/{MODEL_ID}\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q2vhyViANawJ"
      },
      "source": [
        "# Session Establishment\n",
        "\n",
        "Implementation of the Live API requires strict adherence to its WebSocket sub-protocol. The interaction is defined by a sequence of message exchanges: Handshake, Setup, Session Loop, and Termination."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2kD6d11gRNTN"
      },
      "source": [
        "### **Step 1**: Handshake\n",
        "\n",
        "The connection is established via a standard WebSocket handshake. The service URL uses a regional endpoint and OAuth 2.0 bearer tokens for authentication."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hoOzREkLL02U"
      },
      "outputs": [],
      "source": [
        "api_host = \"aiplatform.googleapis.com\"\n",
        "if LOCATION != \"global\":\n",
        "    api_host = f\"{LOCATION}-aiplatform.googleapis.com\"\n",
        "\n",
        "service_url = (\n",
        "    f\"wss://{api_host}/ws/google.cloud.aiplatform.v1.LlmBidiService/BidiGenerateContent\"\n",
        ")\n",
        "\n",
        "print(f\"Service URL: {service_url}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "N1mXI5aDB8vl"
      },
      "source": [
        "You can use the `gcloud` command to generate an access token for the current Application Default Credentials. The access token is passed in the WebSocket headers (e.g., Authorization: Bearer `<TOKEN>`). Note that the default access token lifetime is `3600` seconds (one hour)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nb_bHsEhe-37"
      },
      "outputs": [],
      "source": [
        "token_list = !gcloud auth application-default print-access-token\n",
        "\n",
        "headers = {\"Authorization\": f\"Bearer {token_list[0]}\"}"
      ]
    },
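    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If shelling out to `gcloud` is not an option (for example, in a non-interactive service environment), the token can also be fetched programmatically. This is a minimal sketch assuming the `google-auth` package is installed and Application Default Credentials are configured:\n",
        "\n",
        "```python\n",
        "import google.auth\n",
        "import google.auth.transport.requests\n",
        "\n",
        "\n",
        "def get_bearer_headers() -> dict:\n",
        "    \"\"\"Build WebSocket auth headers from Application Default Credentials.\"\"\"\n",
        "    creds, _ = google.auth.default(\n",
        "        scopes=[\"https://www.googleapis.com/auth/cloud-platform\"]\n",
        "    )\n",
        "    # Refresh to populate creds.token; tokens expire after ~3600 seconds\n",
        "    creds.refresh(google.auth.transport.requests.Request())\n",
        "    return {\"Authorization\": f\"Bearer {creds.token}\"}\n",
        "```\n",
        "\n",
        "Long-running clients can call this again to mint a fresh token when the current one nears its one-hour expiry."
      ]
    },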
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bIZkVoyeST9g"
      },
      "source": [
        "### **Step 2**: Setup\n",
        "\n",
        "Once the WebSocket connection is established, the client must send a configuration message `BidiGenerateContentSetup` immediately to initialize the session. The setup payload is a JSON object containing the `model`, `generation_config`, `system_instruction`, and `tools` definitions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "iEOhBMeRY8YF"
      },
      "outputs": [],
      "source": [
        "# Configuration example\n",
        "system_instruction = {\n",
        "    \"parts\": [{\"text\": \"You are a helpful assistant and answer in a friendly tone.\"}]\n",
        "}\n",
        "\n",
        "config = {\n",
        "    \"response_modalities\": [\"audio\"],\n",
        "    \"speech_config\": {\n",
        "        \"language_code\": \"es-US\",\n",
        "        \"voice_config\": {\"prebuilt_voice_config\": {\"voice_name\": \"Kore\"}},\n",
        "    },\n",
        "}\n",
        "\n",
        "setup = {\n",
        "    \"setup\": {\n",
        "        \"model\": model,\n",
        "        \"system_instruction\": system_instruction,\n",
        "        \"generation_config\": config,\n",
        "    }\n",
        "}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OHYe1XbYaJdg"
      },
      "source": [
        "### **Step 3**: Session Loop\n",
        "\n",
        "After the setup phase, the session enters a bidirectional loop.\n",
        "\n",
        "This is one **example design pattern** for the implementation:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3wjV9SQ7tfXs"
      },
      "outputs": [],
      "source": [
        "async def main() -> None:\n",
        "    # Connect to the server\n",
        "    async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "        print(\"Connected\")\n",
        "\n",
        "        # 1. Perform Setup (Handshake)\n",
        "        # This must be a single, awaited call before streaming starts.\n",
        "        await ws.send(json.dumps(setup))\n",
        "        await ws.recv()  # Wait for setup completion/response\n",
        "\n",
        "        # 2. Define the Send Loop\n",
        "        async def send_loop():\n",
        "            try:\n",
        "                while True:\n",
        "                    # Logic to read microphone/video and send to WS\n",
        "                    # await ws.send(audio_chunk)\n",
        "                    await asyncio.sleep(0.02)  # Simulate 20ms audio chunks\n",
        "            except asyncio.CancelledError:\n",
        "                pass  # Handle clean exit\n",
        "\n",
        "        # 3. Define the Receive Loop\n",
        "        async def receive_loop():\n",
        "            try:\n",
        "                async for message in ws:\n",
        "                    # Logic to play audio or handle \"interrupted\"\n",
        "                    print(\"Received message\")\n",
        "                    # If message.interrupted: stop_playback()\n",
        "            except websockets.exceptions.ConnectionClosed:\n",
        "                print(\"Connection closed\")\n",
        "\n",
        "        # 4. Run both concurrently\n",
        "        # This allows sending and receiving to happen at the exact same time.\n",
        "        await asyncio.gather(send_loop(), receive_loop())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2RGI9YgpJf8r"
      },
      "source": [
        "# Multimodal Streaming\n",
        "\n",
        "The API can process text, audio, and video input, and provide text and audio output. The client sends `client_message` payloads, and the server responds with `server_message` payloads.\n",
        "\n",
        "**Client Messages**\n",
        "\n",
        "- `realtime_input`: Used for high-frequency streaming of audio and video chunks. This message type is designed for efficiency and low overhead. It contains `media_chunks` with Base64-encoded data.\n",
        "- `client_content`: Used for discrete \"turns\" or text input. This allows the client to inject text into the conversation (e.g., \"The user clicked a button\"). It is also used to provide context or conversation history. Sending `client_content` with `\"turn_complete\": true` signals the model to generate a response immediately.\n",
        "- `tool_response`: Sent by the client after executing a function call requested by the model. It contains the output of the function (e.g., the result of a database query or API call).\n",
        "\n",
        "**Server Messages**\n",
        "- `server_content`: The primary vehicle for the model's output. It contains `model_turn` data, which includes text parts and `inline_data` (the audio PCM bytes).\n",
        "- `tool_call`: Sent when the model decides to invoke a tool. It contains the function name and arguments.\n",
        "- `turn_complete`: A boolean flag indicating that the model has finished its current generation turn. This is a signal to the client that the model is now waiting for input.\n",
        "- `interrupted`: A critical signal indicating that the server has detected user speech (Barge-in) and has ceased generation. This requires immediate handling by the client to stop playback.\n"
      ]
    },
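    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make these message shapes concrete, here is a sketch of how a client might build the two streaming-side client messages as plain Python dictionaries before serializing them with `json.dumps`. The shapes mirror the payloads used in the cells below; the audio chunk here is synthetic silence:\n",
        "\n",
        "```python\n",
        "import base64\n",
        "import json\n",
        "\n",
        "# realtime_input: a high-frequency media chunk (16 kHz, 16-bit mono PCM)\n",
        "audio_chunk = b\"\\x00\\x00\" * 160  # 20 ms of silence\n",
        "realtime_input = {\n",
        "    \"realtime_input\": {\n",
        "        \"media_chunks\": [\n",
        "            {\n",
        "                \"mime_type\": \"audio/pcm;rate=16000\",\n",
        "                \"data\": base64.b64encode(audio_chunk).decode(\"utf-8\"),\n",
        "            }\n",
        "        ]\n",
        "    }\n",
        "}\n",
        "\n",
        "# client_content: a discrete text turn; turn_complete requests a response now\n",
        "client_content = {\n",
        "    \"client_content\": {\n",
        "        \"turns\": [{\"role\": \"user\", \"parts\": [{\"text\": \"Describe what you hear.\"}]}],\n",
        "        \"turn_complete\": True,\n",
        "    }\n",
        "}\n",
        "\n",
        "for msg in (realtime_input, client_content):\n",
        "    print(list(msg)[0], len(json.dumps(msg)), \"bytes\")\n",
        "```\n"
      ]
    },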
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VAK6pdUqwU5B"
      },
      "source": [
        "### **Text to Audio**\n",
        "\n",
        "This is a single-turn text-to-audio conversation example: send a text message, receive audio output, and play the audio. **For demonstration purposes, it exits the session loop after a turn completes.**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FsU9Ub3KxEdy"
      },
      "outputs": [],
      "source": [
        "async def send_text(ws, text_input: str):\n",
        "    \"\"\"Sends a single text turn.\"\"\"\n",
        "    print(f\"Input: {text_input}\")\n",
        "\n",
        "    try:\n",
        "        msg = {\n",
        "            \"client_content\": {\n",
        "                \"turns\": [{\"role\": \"user\", \"parts\": [{\"text\": text_input}]}],\n",
        "                \"turn_complete\": True,\n",
        "            }\n",
        "        }\n",
        "        await ws.send(json.dumps(msg))\n",
        "    except Exception as e:\n",
        "        print(f\"Error sending text: {e}\")\n",
        "\n",
        "\n",
        "async def main() -> None:\n",
        "    async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "        print(\"Connected\")\n",
        "        await ws.send(json.dumps(setup))\n",
        "        await ws.recv()\n",
        "\n",
        "        async def send_loop():\n",
        "            await send_text(ws, \"Hello? Gemini are you there?\")\n",
        "\n",
        "        audio_data = []\n",
        "\n",
        "        async def receive_loop():\n",
        "            async for message in ws:\n",
        "                response = json.loads(message.decode())\n",
        "                try:\n",
        "                    parts = response[\"serverContent\"][\"modelTurn\"][\"parts\"]\n",
        "                    for part in parts:\n",
        "                        if \"inlineData\" in part:\n",
        "                            pcm_data = base64.b64decode(part[\"inlineData\"][\"data\"])\n",
        "                            audio_data.append(np.frombuffer(pcm_data, dtype=np.int16))\n",
        "                except KeyError:\n",
        "                    pass\n",
        "                if response.get(\"serverContent\", {}).get(\"turnComplete\"):\n",
        "                    print(\"Turn complete.\")\n",
        "                    display(\n",
        "                        Audio(np.concatenate(audio_data), rate=24000, autoplay=True)\n",
        "                    )\n",
        "                    break  # Exit the loop\n",
        "\n",
        "        await asyncio.gather(send_loop(), receive_loop())\n",
        "\n",
        "\n",
        "await main()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jFKnWmxw103F"
      },
      "source": [
        "### **Audio Streaming**\n",
        "\n",
        "Implementing real-time audio requires strict adherence to sample rate specifications and careful buffer management to ensure low latency and natural interruptibility.\n",
        "\n",
        "The Live API supports the following audio formats:\n",
        "- **Input audio**: Raw 16-bit PCM audio at 16kHz, little-endian\n",
        "- **Output audio**: Raw 16-bit PCM audio at 24kHz, little-endian\n",
        "\n",
        "The following is a single-turn audio-to-audio session example: send an audio file as input, receive audio output, and play the audio. For demonstration purposes, it exits the session loop after a turn completes.\n"
      ]
    },
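    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Microphone or file audio rarely arrives in exactly this format. As a minimal sketch (assuming mono input; a production pipeline would use a proper resampler such as `soxr` or `librosa`), linear interpolation with NumPy can convert 16-bit PCM to the required 16kHz input rate:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "\n",
        "def resample_pcm16(pcm: bytes, src_rate: int, dst_rate: int = 16000) -> bytes:\n",
        "    \"\"\"Naive linear-interpolation resample of 16-bit mono PCM.\"\"\"\n",
        "    if src_rate == dst_rate:\n",
        "        return pcm\n",
        "    samples = np.frombuffer(pcm, dtype=np.int16)\n",
        "    dst_len = int(len(samples) * dst_rate / src_rate)\n",
        "    src_t = np.linspace(0.0, 1.0, num=len(samples), endpoint=False)\n",
        "    dst_t = np.linspace(0.0, 1.0, num=dst_len, endpoint=False)\n",
        "    resampled = np.interp(dst_t, src_t, samples.astype(np.float32))\n",
        "    return resampled.astype(np.int16).tobytes()\n",
        "\n",
        "\n",
        "# Example: 1 second of 48kHz silence becomes 1 second at 16kHz\n",
        "out = resample_pcm16(np.zeros(48000, dtype=np.int16).tobytes(), 48000)\n",
        "print(len(out) // 2)  # 16000 samples\n",
        "```\n"
      ]
    },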
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bwUgw2r6u7Sh"
      },
      "outputs": [],
      "source": [
        "# Download a sample audio input file\n",
        "audio_file = \"input.wav\"\n",
        "audio_file_url = \"https://storage.googleapis.com/cloud-samples-data/generative-ai/audio/tell-a-story.wav\"\n",
        "\n",
        "!wget -q $audio_file_url -O $audio_file\n",
        "\n",
        "with wave.open(audio_file, \"rb\") as wf:\n",
        "    frames = wf.readframes(wf.getnframes())\n",
        "    print(f\"Read audio: {len(frames)} bytes\")\n",
        "    print(f\"Channels: {wf.getnchannels()}\")\n",
        "    print(f\"Rate: {wf.getframerate()}Hz\")\n",
        "    print(f\"Width: {wf.getsampwidth()} bytes\")\n",
        "\n",
        "display(Audio(filename=audio_file, autoplay=True))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rGCqk_65qNMh"
      },
      "outputs": [],
      "source": [
        "MEDIA_CHUNK_SIZE = 4096  # Chunk size for streaming audio\n",
        "\n",
        "\n",
        "async def send_audio(ws, audio_file_path: str):\n",
        "    \"\"\"Streams an audio file in chunks.\"\"\"\n",
        "    print(f\"Input Audio File: {audio_file_path}\")\n",
        "\n",
        "    try:\n",
        "        # Send Input (Simulated from file)\n",
        "        # In production, this would be a microphone stream\n",
        "        with open(audio_file_path, \"rb\") as f:\n",
        "            while chunk := f.read(MEDIA_CHUNK_SIZE):\n",
        "                msg = {\n",
        "                    \"realtime_input\": {\n",
        "                        \"media_chunks\": [\n",
        "                            {\n",
        "                                \"mime_type\": \"audio/pcm;rate=16000\",\n",
        "                                \"data\": base64.b64encode(chunk).decode(\"utf-8\"),\n",
        "                            }\n",
        "                        ]\n",
        "                    }\n",
        "                }\n",
        "                await ws.send(json.dumps(msg))\n",
        "\n",
        "        print(\"Finished streaming audio.\")\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Error streaming audio: {e}\")\n",
        "\n",
        "\n",
        "async def main() -> None:\n",
        "    async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "        print(\"Connected\")\n",
        "        await ws.send(json.dumps(setup))\n",
        "        await ws.recv()\n",
        "\n",
        "        async def send_loop():\n",
        "            await send_audio(ws, audio_file)\n",
        "\n",
        "        audio_data = []\n",
        "\n",
        "        async def receive_loop():\n",
        "            async for message in ws:\n",
        "                response = json.loads(message.decode())\n",
        "                try:\n",
        "                    parts = response[\"serverContent\"][\"modelTurn\"][\"parts\"]\n",
        "                    for part in parts:\n",
        "                        if \"inlineData\" in part:\n",
        "                            pcm_data = base64.b64decode(part[\"inlineData\"][\"data\"])\n",
        "                            audio_data.append(np.frombuffer(pcm_data, dtype=np.int16))\n",
        "                except KeyError:\n",
        "                    pass\n",
        "                if response.get(\"serverContent\", {}).get(\"turnComplete\"):\n",
        "                    print(\"Turn complete.\")\n",
        "                    display(\n",
        "                        Audio(np.concatenate(audio_data), rate=24000, autoplay=True)\n",
        "                    )\n",
        "                    break  # Exit the loop\n",
        "\n",
        "        await asyncio.gather(send_loop(), receive_loop())\n",
        "\n",
        "\n",
        "await main()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cCcb4yhe5NyV"
      },
      "source": [
        "### **Video Streaming**\n",
        "\n",
        "Video streaming provides visual context (e.g., \"What am I holding?\"). Unlike a continuous video file (like `.mp4`), the Live API expects a sequence of discrete image frames. The Live API supports video frame input at 1 FPS. For best results, use native 768x768 resolution at 1 FPS.\n",
        "\n",
        "The following is a single-turn video-to-audio session example: send a video file as input, receive audio output, and play the audio. For demonstration purposes, it exits the session loop after a turn completes.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "l-g1rG9cFt2s"
      },
      "outputs": [],
      "source": [
        "video_file = \"dog_day2.mp4\"\n",
        "video_file_url = f\"https://storage.googleapis.com/cloud-samples-data/generative-ai/video/{video_file}\"\n",
        "\n",
        "!wget -q $video_file_url -O $video_file\n",
        "\n",
        "display(Video(video_file, width=300, height=200, embed=True))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4Odl_UxJIP4x"
      },
      "source": [
        "The client implementation should capture a frame from the video feed, encode it as a JPEG blob, Base64-encode the blob, and transmit it using the same `realtime_input` message structure as audio."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ca0REK1M5UXh"
      },
      "outputs": [],
      "source": [
        "DEFAULT_IMAGE_ENCODE_OPTIONS = [cv2.IMWRITE_JPEG_QUALITY, 90]\n",
        "\n",
        "\n",
        "def encode_image(image_data: np.ndarray, encode_options: list[int]) -> bytes:\n",
        "    \"\"\"Encodes a numpy array (image) into JPEG bytes.\"\"\"\n",
        "    success, encoded_image = cv2.imencode(\".jpg\", image_data, encode_options)\n",
        "    if not success:\n",
        "        raise ValueError(\"Could not encode image to JPEG\")\n",
        "    return encoded_image.tobytes()\n",
        "\n",
        "\n",
        "async def send_video(ws, video_file_path: str):\n",
        "    \"\"\"Streams a video file frame by frame.\"\"\"\n",
        "    print(f\"Input Video File: {video_file_path}\")\n",
        "    cap = None\n",
        "    try:\n",
        "        cap = cv2.VideoCapture(video_file_path)\n",
        "        if not cap.isOpened():\n",
        "            raise OSError(f\"Cannot open video file: {video_file_path}\")\n",
        "\n",
        "        fps = cap.get(cv2.CAP_PROP_FPS)\n",
        "        if fps <= 0:\n",
        "            print(\"Warning: Could not get valid FPS. Defaulting to 30 FPS.\")\n",
        "            fps = 30.0\n",
        "\n",
        "        frame_delay = 1 / fps\n",
        "        print(f\"Streaming video with {fps:.2f} FPS (delay: {frame_delay:.4f}s)\")\n",
        "\n",
        "        while cap.isOpened():\n",
        "            ret, video_data = cap.read()\n",
        "            if not ret:\n",
        "                print(\"End of video stream.\")\n",
        "                break\n",
        "\n",
        "            processed_jpeg = encode_image(video_data, DEFAULT_IMAGE_ENCODE_OPTIONS)\n",
        "            b64_data = base64.b64encode(processed_jpeg).decode(\"utf-8\")\n",
        "\n",
        "            msg = {\n",
        "                \"realtime_input\": {\n",
        "                    \"video\": {\"mime_type\": \"image/jpeg\", \"data\": b64_data}\n",
        "                }\n",
        "            }\n",
        "            await ws.send(json.dumps(msg))\n",
        "            await asyncio.sleep(frame_delay)\n",
        "\n",
        "        print(\"Signaling end of turn.\")\n",
        "        await ws.send(json.dumps({\"realtime_input\": {\"text\": \"\"}}))\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Error processing video: {e}\")\n",
        "    finally:\n",
        "        if cap and cap.isOpened():\n",
        "            cap.release()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "D7PJYGoYF31C"
      },
      "outputs": [],
      "source": [
        "system_instruction = {\n",
        "    \"parts\": [\n",
        "        {\n",
        "            \"text\": \"You are a helpful assistant watching a video. Describe any animals you see.\"\n",
        "        }\n",
        "    ]\n",
        "}\n",
        "\n",
        "config = {\n",
        "    \"response_modalities\": [\"audio\"],\n",
        "}\n",
        "\n",
        "setup = {\n",
        "    \"setup\": {\n",
        "        \"model\": model,\n",
        "        \"system_instruction\": system_instruction,\n",
        "        \"generation_config\": config,\n",
        "    }\n",
        "}\n",
        "\n",
        "\n",
        "async def main() -> None:\n",
        "    async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "        print(\"Connected\")\n",
        "        await ws.send(json.dumps(setup))\n",
        "        await ws.recv()\n",
        "\n",
        "        async def send_loop():\n",
        "            await send_video(ws, video_file)\n",
        "\n",
        "        audio_data = []\n",
        "\n",
        "        async def receive_loop():\n",
        "            async for message in ws:\n",
        "                response = json.loads(message.decode())\n",
        "                try:\n",
        "                    parts = response[\"serverContent\"][\"modelTurn\"][\"parts\"]\n",
        "                    for part in parts:\n",
        "                        if \"inlineData\" in part:\n",
        "                            pcm_data = base64.b64decode(part[\"inlineData\"][\"data\"])\n",
        "                            audio_data.append(np.frombuffer(pcm_data, dtype=np.int16))\n",
        "                except KeyError:\n",
        "                    pass\n",
        "                if response.get(\"serverContent\", {}).get(\"turnComplete\"):\n",
        "                    print(\"Turn complete.\")\n",
        "                    display(\n",
        "                        Audio(np.concatenate(audio_data), rate=24000, autoplay=True)\n",
        "                    )\n",
        "                    break  # Exit the loop\n",
        "\n",
        "        await asyncio.gather(send_loop(), receive_loop())\n",
        "\n",
        "\n",
        "await main()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KJs8tovPIwPu"
      },
      "source": [
        "# Tool Use\n",
        "\n",
        "The Live API seamlessly integrates tools like function calling and Google Search for more practical and dynamic interactions.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f214d0c3bee0"
      },
      "source": [
        "### **Function Calling**\n",
        "\n",
        "You can use function calling to create a description of a function, then pass that description to the model in a request. The response from the model includes the name of a function that matches the description and the arguments to call it with.\n",
        "\n",
        "**Notes**:\n",
        "\n",
        "- All functions must be declared at the start of the session by sending tool definitions as part of the `setup` message."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8a7595aee24a"
      },
      "outputs": [],
      "source": [
        "# Define function declaration\n",
        "get_temperature_declaration = {\n",
        "    \"name\": \"get_temperature\",\n",
        "    \"description\": \"Gets the current temperature for a given location.\",\n",
        "    \"parameters\": {\n",
        "        \"type\": \"object\",\n",
        "        \"properties\": {\"location\": {\"type\": \"string\"}},\n",
        "        \"required\": [\"location\"],\n",
        "    },\n",
        "}\n",
        "\n",
        "# Set tools\n",
        "tools = {\"function_declarations\": [get_temperature_declaration]}\n",
        "\n",
        "setup = {\"setup\": {\"model\": model, \"generation_config\": config, \"tools\": tools}}\n",
        "\n",
        "\n",
        "async def main() -> None:\n",
        "    async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "        print(\"Connected\")\n",
        "        await ws.send(json.dumps(setup))\n",
        "        await ws.recv()\n",
        "\n",
        "        async def send_loop():\n",
        "            await send_text(ws, \"Get the current temperature in New York.\")\n",
        "\n",
        "        responses = []\n",
        "\n",
        "        async def receive_loop():\n",
        "            async for message in ws:\n",
        "                response = json.loads(message.decode())\n",
        "                if (tool_call := response.get(\"toolCall\")) is not None:\n",
        "                    for function_call in tool_call[\"functionCalls\"]:\n",
        "                        responses.append(f\"FunctionCall: {function_call!s}\\n\")\n",
        "                if (server_content := response.get(\"serverContent\")) is not None:\n",
        "                    if server_content.get(\"turnComplete\"):\n",
        "                        print(\"Turn complete.\")\n",
        "                        print(\"Response:\\n{}\".format(\"\\n\".join(responses)))\n",
        "                        break\n",
        "\n",
        "        await asyncio.gather(send_loop(), receive_loop())\n",
        "\n",
        "\n",
        "await main()"
      ]
    },
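    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In a complete application, the client executes the requested function and returns the result to the model in a `tool_response` message so that generation can continue. A minimal sketch, assuming an open session `ws` as in the cell above and a hypothetical temperature result (the snake_case field names follow the Live API WebSocket message format):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "async def send_tool_response(ws, function_call) -> None:\n",
        "    \"\"\"Send a (hypothetical) function result back for a received functionCall.\"\"\"\n",
        "    tool_response = {\n",
        "        \"tool_response\": {\n",
        "            \"function_responses\": [\n",
        "                {\n",
        "                    \"id\": function_call.get(\"id\"),\n",
        "                    \"name\": function_call[\"name\"],\n",
        "                    # Hypothetical result; a real app would call the function here.\n",
        "                    \"response\": {\"temperature\": 25, \"unit\": \"celsius\"},\n",
        "                }\n",
        "            ]\n",
        "        }\n",
        "    }\n",
        "    await ws.send(json.dumps(tool_response))"
      ]
    },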
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4RU_DV2s6yZQ"
      },
      "source": [
        "### **Google Search**\n",
        "\n",
        "The `google_search` tool lets the model conduct Google searches. For example, try asking it about events that are too recent to be in the training data."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8XPQAzbC65bR"
      },
      "outputs": [],
      "source": [
        "# Define google search tool\n",
        "tools = {\"google_search\": {}}\n",
        "\n",
        "setup = {\"setup\": {\"model\": model, \"generation_config\": config, \"tools\": tools}}\n",
        "\n",
        "\n",
        "async def main() -> None:\n",
        "    async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "        print(\"Connected\")\n",
        "        await ws.send(json.dumps(setup))\n",
        "        await ws.recv()\n",
        "\n",
        "        async def send_loop():\n",
        "            await send_text(ws, \"What is the current weather in Toronto, Canada?\")\n",
        "\n",
        "        audio_data = []\n",
        "\n",
        "        async def receive_loop():\n",
        "            async for message in ws:\n",
        "                response = json.loads(message.decode())\n",
        "                try:\n",
        "                    parts = response[\"serverContent\"][\"modelTurn\"][\"parts\"]\n",
        "                    for part in parts:\n",
        "                        if \"inlineData\" in part:\n",
        "                            pcm_data = base64.b64decode(part[\"inlineData\"][\"data\"])\n",
        "                            audio_data.append(np.frombuffer(pcm_data, dtype=np.int16))\n",
        "                except KeyError:\n",
        "                    pass\n",
        "                if response.get(\"serverContent\", {}).get(\"turnComplete\"):\n",
        "                    print(\"Turn complete.\")\n",
        "                    display(\n",
        "                        Audio(np.concatenate(audio_data), rate=24000, autoplay=True)\n",
        "                    )\n",
        "                    break  # Exit the loop\n",
        "\n",
        "        await asyncio.gather(send_loop(), receive_loop())\n",
        "\n",
        "\n",
        "await main()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vP1dcrWhYd2W"
      },
      "source": [
        "# Capabilities\n",
        "\n",
        "This section covers the key capabilities and configurations available with the Live API."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aMtPLdlUMPzI"
      },
      "source": [
        "## Reusable WebSocket Modules\n",
        "\n",
        "The following functions are designed to manage the session configuration, handle a single conversational turn, and execute a multi-turn session. Note that some required functions are defined in cells above."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "j7irydkaee62"
      },
      "outputs": [],
      "source": [
        "async def handle_response(\n",
        "    ws, timeout_seconds=5, print_incrementally: bool = False\n",
        ") -> dict[str, Any]:\n",
        "    \"\"\"Receives, processes, and displays the full streaming response for one turn\n",
        "    with timeout and error handling.\n",
        "\n",
        "    Args:\n",
        "        ws: The WebSocket connection object.\n",
        "        timeout_seconds: Seconds to wait for a message before timing out.\n",
        "        print_incrementally: If True, prints in/out transcriptions as they\n",
        "                             arrive. If False, prints a full summary at the end.\n",
        "    \"\"\"\n",
        "    output_audio_data = []\n",
        "    input_transcriptions = []\n",
        "    output_transcriptions = []\n",
        "\n",
        "    try:\n",
        "        while True:\n",
        "            try:\n",
        "                # Wait for a message with a timeout\n",
        "                raw_response = await asyncio.wait_for(\n",
        "                    ws.recv(decode=False), timeout_seconds\n",
        "                )\n",
        "                response = json.loads(raw_response.decode())\n",
        "                server_content = response.pop(\"serverContent\", None)\n",
        "\n",
        "                if server_content is None:\n",
        "                    # Keep listening if it's not a serverContent message\n",
        "                    continue\n",
        "\n",
        "                # Input Transcription\n",
        "                if (\n",
        "                    input_transcription := server_content.get(\"inputTranscription\")\n",
        "                ) is not None:\n",
        "                    if (text := input_transcription.get(\"text\")) is not None:\n",
        "                        input_transcriptions.append(text)\n",
        "                        if print_incrementally:\n",
        "                            display(Markdown(f\"**Input >** {text}\"))\n",
        "\n",
        "                # Output Transcription\n",
        "                if (\n",
        "                    output_transcription := server_content.get(\"outputTranscription\")\n",
        "                ) is not None:\n",
        "                    if (text := output_transcription.get(\"text\")) is not None:\n",
        "                        output_transcriptions.append(text)\n",
        "                        if print_incrementally:\n",
        "                            display(Markdown(f\"**Response >** {text}\"))\n",
        "\n",
        "                # Model Audio Output\n",
        "                if (model_turn := server_content.get(\"modelTurn\")) is not None:\n",
        "                    if (parts := model_turn.pop(\"parts\", None)) is not None:\n",
        "                        for part in parts:\n",
        "                            if \"inlineData\" in part:\n",
        "                                pcm_data = base64.b64decode(part[\"inlineData\"][\"data\"])\n",
        "                                output_audio_data.append(\n",
        "                                    np.frombuffer(pcm_data, dtype=np.int16)\n",
        "                                )\n",
        "\n",
        "                # End of Turn\n",
        "                if server_content.pop(\"turnComplete\", None):\n",
        "                    if print_incrementally:\n",
        "                        print(\"Turn complete received.\")\n",
        "                    break  # Successful exit from the loop\n",
        "\n",
        "            except asyncio.TimeoutError:\n",
        "                print(\n",
        "                    f\"Timeout: No response received in {timeout_seconds}s. Ending turn.\"\n",
        "                )\n",
        "                break  # Exit loop on timeout\n",
        "\n",
        "    except websockets.exceptions.ConnectionClosed as e:\n",
        "        print(f\"Connection closed unexpectedly: {e.code} {e.reason}\")\n",
        "    except Exception as e:\n",
        "        print(f\"An unexpected error occurred while receiving: {e}\")\n",
        "\n",
        "    finally:\n",
        "        # This block runs whether the loop broke successfully, timed out, or crashed\n",
        "\n",
        "        # --- Play Audio ---\n",
        "        if output_audio_data:\n",
        "            full_audio = np.concatenate(output_audio_data)\n",
        "            display(Audio(full_audio, rate=OUTPUT_AUDIO_RATE, autoplay=True))\n",
        "        else:\n",
        "            display(Markdown(\"**Model Response:** *No audio response received.*\"))\n",
        "\n",
        "        # --- Display Final Transcripts (if not printed incrementally) ---\n",
        "        if not print_incrementally:\n",
        "            final_input = \"\".join(input_transcriptions)\n",
        "            final_output = \"\".join(output_transcriptions)\n",
        "\n",
        "            if final_input:\n",
        "                display(Markdown(f\"**Final Input transcription >** {final_input}\"))\n",
        "            if final_output:\n",
        "                display(Markdown(f\"**Final Output transcription >** {final_output}\"))\n",
        "            print(\"Turn complete.\")\n",
        "\n",
        "    return {\n",
        "        \"output_audio_data\": output_audio_data,\n",
        "        \"input_transcription\": \"\".join(input_transcriptions),\n",
        "        \"output_transcription\": \"\".join(output_transcriptions),\n",
        "    }\n",
        "\n",
        "\n",
        "async def run_live_session(\n",
        "    model_path: str,\n",
        "    setup_config: dict[str, Any],\n",
        "    turns: list[str],\n",
        "    print_incrementally: bool = False,\n",
        "):\n",
        "    \"\"\"Establishes the WebSocket connection and runs a series of conversational turns.\"\"\"\n",
        "    display(Markdown(\"## Starting Live Connect Session...\"))\n",
        "    if \"system_instruction\" in setup_config:\n",
        "        display(\n",
        "            Markdown(f\"**System Instruction:** *{setup_config['system_instruction']}*\")\n",
        "        )\n",
        "    if setup_config.get(\"speech_config\", {}).get(\"language_code\"):\n",
        "        display(\n",
        "            Markdown(\n",
        "                f\"**Target Language:** `{setup_config['speech_config']['language_code']}`\"\n",
        "            )\n",
        "        )\n",
        "\n",
        "    headers = {\n",
        "        \"Content-Type\": \"application/json\",\n",
        "        \"Authorization\": f\"Bearer {token_list[0]}\",\n",
        "    }\n",
        "\n",
        "    full_setup_message = {\"setup\": {\"model\": model_path, **setup_config}}\n",
        "\n",
        "    try:\n",
        "        async with websockets.connect(service_url, additional_headers=headers) as ws:\n",
        "            # Setup the session\n",
        "            await ws.send(json.dumps(full_setup_message))\n",
        "\n",
        "            # Receive setup response\n",
        "            raw_response = await ws.recv(decode=False)\n",
        "            setup_response = json.loads(raw_response.decode())\n",
        "            display(\n",
        "                Markdown(\n",
        "                    f\"**Status:** Session established. Response: `{setup_response}`\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "            all_results = []\n",
        "            for turn in turns:\n",
        "                display(Markdown(\"\\n---\"))\n",
        "                # Send the user input (text, audio, or video)\n",
        "                if turn.lower().endswith(\".wav\") or turn.lower().endswith(\".pcm\"):\n",
        "                    await send_audio(ws, turn)\n",
        "                elif turn.lower().endswith(\".mp4\"):\n",
        "                    await send_video(ws, turn)\n",
        "                else:\n",
        "                    await send_text(ws, turn)\n",
        "\n",
        "                # Receive the model's response\n",
        "                result = await handle_response(\n",
        "                    ws, print_incrementally=print_incrementally\n",
        "                )\n",
        "                all_results.append(result)\n",
        "\n",
        "            display(Markdown(\"\\n---\"))\n",
        "            display(Markdown(\"**Status:** All turns complete. Session closed.\"))\n",
        "            return all_results\n",
        "\n",
        "    except websockets.exceptions.ConnectionClosed as e:\n",
        "        display(Markdown(f\"**Error:** Connection closed: {e}\"))\n",
        "        return []\n",
        "    except Exception as e:\n",
        "        display(Markdown(f\"**Error:** Failed to connect or run session: {e}\"))\n",
        "        return []"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7b9LWlLMYgSt"
      },
      "source": [
        "## **Audio Transcription**\n",
        "\n",
        "In addition to the model response, you can also receive transcriptions of both the audio input and the audio output.\n",
        "\n",
        "To receive transcriptions, add the `input_audio_transcription` and `output_audio_transcription` parameters to your session configuration."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "33OP268iYrRP"
      },
      "outputs": [],
      "source": [
        "audio_transcription_config = {\n",
        "    \"generation_config\": {\"response_modalities\": [\"audio\"]},\n",
        "    \"input_audio_transcription\": {},\n",
        "    \"output_audio_transcription\": {},\n",
        "}\n",
        "\n",
        "conversation_turns = [\n",
        "    \"Hey, tell me a joke about rabbits\",\n",
        "]\n",
        "\n",
        "results = await run_live_session(model, audio_transcription_config, conversation_turns)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c1bec896c5fd"
      },
      "source": [
        "## **Voice Activity Detection (VAD)**\n",
        "\n",
        "Voice Activity Detection (VAD) allows the model to recognize when a person is speaking. This is essential for creating natural conversations, as it allows a user to interrupt the model at any time.\n",
        "\n",
        "- By default, the model automatically performs voice activity detection on a continuous audio input stream. Voice activity detection can be configured with the `realtime_input_config.automatic_activity_detection` field of the `setup` message.\n",
        "- When voice activity detection detects an interruption, the ongoing generation is canceled and discarded. Only the information already sent to the client is retained in the session history. The server then sends a message to report the interruption.\n",
        "- When the audio stream is paused for more than a second (for example, because the user switched off the microphone), an `audioStreamEnd` event should be sent to flush any cached audio. The client can resume sending audio data at any time."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d65f95c6bed8"
      },
      "outputs": [],
      "source": [
        "voice_activity_detection_config = {\n",
        "    \"generation_config\": {\"response_modalities\": [\"audio\"]},\n",
        "    \"realtime_input_config\": {\n",
        "        \"automatic_activity_detection\": {\n",
        "            \"disabled\": False,  # default\n",
        "            \"start_of_speech_sensitivity\": \"START_SENSITIVITY_HIGH\",\n",
        "            \"end_of_speech_sensitivity\": \"END_SENSITIVITY_HIGH\",\n",
        "            \"prefix_padding_ms\": 20,\n",
        "            \"silence_duration_ms\": 100,\n",
        "        },\n",
        "    },\n",
        "}\n",
        "\n",
        "audio_input_files = [\n",
        "    audio_file,\n",
        "]\n",
        "\n",
        "results = await run_live_session(\n",
        "    model, voice_activity_detection_config, audio_input_files\n",
        ")"
      ]
    },
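    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As noted above, when the client pauses the audio stream it should send an `audioStreamEnd` event to flush any cached audio. A minimal sketch, assuming an open session `ws` as in the cells above:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "async def send_audio_stream_end(ws) -> None:\n",
        "    \"\"\"Signal that the client's audio input stream has paused.\"\"\"\n",
        "    await ws.send(json.dumps({\"realtime_input\": {\"audio_stream_end\": True}}))"
      ]
    },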
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "73abaaf93010"
      },
      "source": [
        "# Native Audio\n",
        "\n",
        "Native audio provides natural, realistic-sounding speech and improved multilingual performance. It also enables advanced features like Affective Dialog and Proactive Audio."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9904f4fbc2ab"
      },
      "source": [
        "## **Proactive Audio**\n",
        "\n",
        "\n",
        "When proactive audio is enabled, the model only responds when it's relevant. The model generates text transcripts and audio responses proactively only for queries directed to the device, and does not respond to non-device directed queries.\n",
        "\n",
        "This example uses a **System Instruction** and **Proactive Audio** to test the model's ability to remain silent when the topic is off-subject (French cuisine) and chime in only when the conversation shifts to the instructed topic (Italian cooking)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oorPiy2lB-k7"
      },
      "outputs": [],
      "source": [
        "proactive_audio_config = {\n",
        "    \"system_instruction\": {\n",
        "        \"parts\": [\n",
        "            {\n",
        "                \"text\": \"You are an AI assistant in Italian cooking, chime in only when the topic is about Italian cooking.\"\n",
        "            }\n",
        "        ]\n",
        "    },\n",
        "    \"proactivity\": {\n",
        "        \"proactive_audio\": True,  # Enable proactive audio\n",
        "    },\n",
        "    \"generation_config\": {\"response_modalities\": [\"audio\"]},\n",
        "    \"input_audio_transcription\": {},\n",
        "    \"output_audio_transcription\": {},\n",
        "}\n",
        "\n",
        "conversation_turns = [\n",
        "    # Speaker A speaks, general topic, the model should be silent.\n",
        "    \"Hey, I was just thinking about my dinner plans. I really love cooking.\",\n",
        "    # Speaker B speaks, off-topic (French cuisine). The model should be silent.\n",
        "    \"Oh yes, me too. I love French cuisine, especially making a good coq au vin. I think I'll make that tonight.\",\n",
        "    # Speaker A speaks, shifts to Italian topic. The model should chime in.\n",
        "    \"Hmm, that sounds complicated. I prefer Italian food. Say, do you know how to make a simple Margherita pizza recipe?\",\n",
        "]\n",
        "\n",
        "results = await run_live_session(model, proactive_audio_config, conversation_turns)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c25132281c4e"
      },
      "source": [
        "## **Affective Dialog**\n",
        "\n",
        "When affective dialog is enabled, the model can understand and respond appropriately to users' emotional expressions for more nuanced conversations.\n",
        "\n",
        "This scenario enables **Affective Dialog** (`enable_affective_dialog=True`) and uses a system instruction to create a senior technical advisor persona. The user's input is phrased to convey **frustration**, prompting an empathetic and helpful response from the model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ae07905ba242"
      },
      "outputs": [],
      "source": [
        "affective_config = {\n",
        "    \"system_instruction\": {\n",
        "        \"parts\": [\n",
        "            {\"text\": \"You are a senior technical advisor for a complex AI project.\"}\n",
        "        ]\n",
        "    },\n",
        "    \"generation_config\": {\n",
        "        \"enable_affective_dialog\": True,  # Enable affective dialog\n",
        "        \"response_modalities\": [\"audio\"],\n",
        "    },\n",
        "}\n",
        "\n",
        "affective_dialog_turns = [\n",
        "    \"I have been staring at this API docs for two hours now! It's so confusing and I can't even find where to start the streaming request. I'm completely stuck!\",\n",
        "    # A follow-up turn to see if the model maintains the helpful persona\n",
        "    \"Okay, thanks. I'm using Python. What is the single most important parameter I need to set up for a successful streaming connection?\",\n",
        "]\n",
        "\n",
        "results = await run_live_session(model, affective_config, affective_dialog_turns)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "usjiqTDXfk_6"
      },
      "source": [
        "# What's Next\n",
        "\n",
        "\n",
        "- Try [Getting started with the Live API with the Gen AI SDK](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/multimodal-live-api/intro_multimodal_live_api_genai_sdk.ipynb)\n",
        "- Learn more about [demo apps and resources for using the Live API](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/live-api/demos)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "intro_multimodal_live_api.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
