{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Dabj4NLXsBFm"
      },
      "source": [
        "# Audio Injection with Magenta RT!\n",
        "\n",
        "<a href=\"https://colab.research.google.com/github/magenta/magenta-realtime/blob/main/notebooks/Magenta_RT_Audio_Injection.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
        "\n",
        "This notebook shows how live input audio can be “injected” into Magenta RT’s context to steer its generation. The model may repeat your input, or may *transform* it into something else! Try different prompts for different results.\n",
        "\n",
        "### Tutorial\n",
        "\n",
        "- [Watch walkthrough on YouTube](https://youtu.be/ZhGBBBYaAq8)\n",
        "\n",
        "### Tips for live mic input\n",
        "\n",
        "1. Use **wired** headphones! 🎧〰〰💻\n",
        "1. **Rerun** ⏱️ `Measure latency` after any system audio change, like opening or closing an app.\n",
        "1. Start by inputting something with a **clear rhythm**. You can tap along with the metronome, or count 1-2-3-4, or try other sounds!\n",
        "\n",
        "### Known issues\n",
        "\n",
        "- The model may drift off the set tempo.\n",
        "- Latency measurement may not work properly when routing audio from another application."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-Oq0xSB8scXP"
      },
      "source": [
        "# Step 1: 😴 One-time setup"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mwRM5Mvef5Oq",
        "cellView": "form"
      },
      "outputs": [],
      "source": [
        "# @title Run this cell to install dependencies (~5 minutes)\n",
        "# @markdown Make sure you are running on **`v5e-1 TPU` runtime** via `Runtime > Change Runtime Type`\n",
        "\n",
        "# @markdown Colab may prompt you to restart the session. **Wait until the cell finishes running before restarting**!\n",
        "\n",
        "# Clone library\n",
        "!git clone https://github.com/magenta/magenta-realtime.git\n",
        "!git clone https://github.com/google-research/t5x.git\n",
        "\n",
        "# Install library and dependencies\n",
        "# If running on TPU (recommended, runs on free tier Colab TPUs):\n",
        "!pip install -e t5x/[tpu] && pip install -e magenta-realtime/[tpu] && pip install tf2jax==0.3.8\n",
        "\n",
        "# Uncomment if running on GPU (requires A100 via Colab Pro to be fast enough):\n",
        "# !patch t5x/setup.py < magenta-realtime/patch/t5x_setup.py.patch\n",
        "# !patch t5x/t5x/partitioning.py < magenta-realtime/patch/t5x_partitioning.py.patch\n",
        "# !pip install -e t5x/[gpu] && pip install -e magenta-realtime/[gpu] && pip install tf2jax==0.3.8\n",
        "\n",
        "!sed -i '/import tensorflow_text as tf_text/d' /usr/local/lib/python3.12/dist-packages/seqio/vocabularies.py\n",
        "!sed -i \"s|device_kind == 'TPU v4 lite'|device_kind == 'TPU v4 lite' or device_kind == 'TPU v5 lite'|g\" /usr/local/lib/python3.12/dist-packages/t5x/partitioning.py\n",
        "\n",
        "# Download the example of pre-recorded music\n",
        "GS_EXAMPLES_DIR = 'gs://magenta-rt-public/colab_examples'\n",
        "!gsutil cp -R $GS_EXAMPLES_DIR /content/ &> /dev/null"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "1OI3L16olYQs"
      },
      "outputs": [],
      "source": [
        "# @title Run this cell to initialize model (~5 minutes)\n",
        "\n",
        "import os\n",
        "\n",
        "import concurrent.futures\n",
        "import dataclasses\n",
        "import functools\n",
        "from google.colab import files\n",
        "import ipywidgets as ipw\n",
        "import IPython.display as ipd\n",
        "import jax\n",
        "import librosa\n",
        "import numpy as np\n",
        "import resampy\n",
        "import tempfile\n",
        "from typing import Optional, Sequence, Tuple\n",
        "import warnings\n",
        "\n",
        "from magenta_rt import audio as audio_lib\n",
        "from magenta_rt import musiccoca\n",
        "from magenta_rt import spectrostream\n",
        "from magenta_rt import system\n",
        "from magenta_rt import utils\n",
        "from magenta_rt.colab import prompt_types\n",
        "from magenta_rt.colab import utils as colab_utils\n",
        "from magenta_rt.colab import widgets\n",
        "\n",
        "\n",
        "# ========= Helper functions ===========\n",
        "\n",
        "def load_audio(audio_filename, sample_rate):\n",
        "  \"\"\"Loads an audio file.\n",
        "\n",
        "  Args:\n",
        "    audio_filename: File path to load.\n",
        "    sample_rate: The number of samples per second at which the audio will be\n",
        "        returned. Resampling will be performed if necessary.\n",
        "\n",
        "  Returns:\n",
        "    A numpy array of audio samples, sampled at the specified rate, in float32\n",
        "    format.\n",
        "  \"\"\"\n",
        "  y, unused_sr = librosa.load(audio_filename, sr=sample_rate, mono=False)\n",
        "  return y\n",
        "\n",
        "\n",
        "def wav_data_to_samples_librosa(audio_file, sample_rate):\n",
        "  \"\"\"Loads an in-memory audio file with librosa.\n",
        "\n",
        "  Use this instead of wav_data_to_samples if the wav is 24-bit, as that is\n",
        "  incompatible with wav_data_to_samples' internal scipy call.\n",
        "\n",
        "  The data is copied to a local temp file before loading, since librosa\n",
        "  cannot currently read in-memory files. The data will be treated as a .wav\n",
        "  file.\n",
        "\n",
        "  Args:\n",
        "    audio_file: Wav file to load.\n",
        "    sample_rate: The number of samples per second at which the audio will be\n",
        "        returned. Resampling will be performed if necessary.\n",
        "\n",
        "  Returns:\n",
        "    A numpy array of audio samples, single-channel (mono) and sampled at the\n",
        "    specified rate, in float32 format.\n",
        "  \"\"\"\n",
        "  with tempfile.NamedTemporaryFile(suffix='.wav') as wav_input_file:\n",
        "    wav_input_file.write(audio_file)\n",
        "    # Flush any buffered contents before loading\n",
        "    wav_input_file.flush()\n",
        "    # Rewind to the start of the file (not strictly needed, but safe)\n",
        "    wav_input_file.seek(0)\n",
        "    return load_audio(wav_input_file.name, sample_rate)\n",
        "\n",
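        "# Illustrative sketch (stdlib only; `_demo_buf` is a demo-only name): build a\n",
        "# tiny in-memory 16-bit mono WAV of the kind wav_data_to_samples_librosa\n",
        "# accepts. Uncomment the last line to round-trip it through the helper above.\n",
        "import io, struct, wave\n",
        "_demo_buf = io.BytesIO()\n",
        "with wave.open(_demo_buf, 'wb') as w:\n",
        "  w.setnchannels(1)\n",
        "  w.setsampwidth(2)\n",
        "  w.setframerate(48000)\n",
        "  w.writeframes(struct.pack('<480h', *([0] * 480)))\n",
        "assert len(_demo_buf.getvalue()) == 44 + 960  # 44-byte header + 480 int16\n",
        "# _demo_samples = wav_data_to_samples_librosa(_demo_buf.getvalue(), 48000)\n",
        "\n",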
        "\n",
        "def get_metronome_audio(\n",
        "    loop_samples: int,\n",
        "    beats_per_loop: int,\n",
        "    sample_rate: int,\n",
        "    chunk_samples: int):\n",
        "  \"\"\"Generates metronome audio.\n",
        "\n",
        "  Args:\n",
        "    loop_samples: The number of samples in a loop.\n",
        "    beats_per_loop: The number of beats in a loop.\n",
        "    sample_rate: The sample rate of the audio.\n",
        "    chunk_samples: The number of samples in a chunk.\n",
        "\n",
        "  Returns:\n",
        "    A numpy array of metronome audio samples.\n",
        "  \"\"\"\n",
        "  metronome_audio = np.zeros((loop_samples,))\n",
        "  BEEP_SECONDS = 0.04\n",
        "  BEEP_VOLUME = 0.25\n",
        "  beeps = []\n",
        "  for freq in (880, 440):\n",
        "    beeps.append(BEEP_VOLUME * np.sin(np.linspace(\n",
        "        0,\n",
        "        2 * np.pi * freq * BEEP_SECONDS,\n",
        "        int(BEEP_SECONDS * sample_rate))))\n",
        "  ramp_samples = 100\n",
        "  beep_envelope = np.concatenate(\n",
        "      [np.linspace(0, 1, ramp_samples),\n",
        "       np.ones((int(BEEP_SECONDS * sample_rate) - 2 * ramp_samples,)),\n",
        "       np.linspace(1, 0, ramp_samples)])\n",
        "  for i in range(len(beeps)):\n",
        "    beeps[i] *= beep_envelope\n",
        "  beat_length = loop_samples // beats_per_loop\n",
        "  for i in range(beats_per_loop):\n",
        "    beep = beeps[0 if i == 0 else 1]\n",
        "    metronome_audio[i * beat_length:i * beat_length + len(beep)] = beep\n",
        "  # Add an extra buffer to the metronome audio to make slicing easier later.\n",
        "  return np.concatenate([metronome_audio, metronome_audio[:chunk_samples]])\n",
        "\n",
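        "# Illustrative sanity check (`_demo_envelope` is a demo-only name): the beep\n",
        "# envelope above is a 100-sample linear fade-in, a sustained middle, and a\n",
        "# 100-sample fade-out, totalling one 0.04s beep at 48kHz (1920 samples).\n",
        "_demo_envelope = np.concatenate([\n",
        "    np.linspace(0, 1, 100),\n",
        "    np.ones((1920 - 200,)),\n",
        "    np.linspace(1, 0, 100)])\n",
        "assert _demo_envelope.shape == (1920,)\n",
        "\n",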
        "\n",
        "# ================ Model System ======================\n",
        "class MagentaRTCFGTied(system.MagentaRTT5X):\n",
        "  \"\"\"Magenta RT T5X system with \"tied\" CFG controlling input and style.\"\"\"\n",
        "\n",
        "  # This method is mostly identical to system.MagentaRTT5X.generate_chunk, but\n",
        "  # adds \"tied\" CFG that acts jointly on input and style. Negative input tokens\n",
        "  # are passed as `context_tokens_orig`.\n",
        "  def generate_chunk(\n",
        "      self,\n",
        "      state: Optional[system.MagentaRTState] = None,\n",
        "      style: Optional[musiccoca.StyleEmbedding] = None,\n",
        "      seed: Optional[int] = None,\n",
        "      **kwargs,\n",
        "  ) -> Tuple[audio_lib.Waveform, system.MagentaRTState]:\n",
        "    \"\"\"Generates a chunk of audio and returns updated state.\n",
        "\n",
        "    Args:\n",
        "      state: The current state of the system.\n",
        "      style: The style embedding to use for the generation.\n",
        "      seed: The seed to use for the generation.\n",
        "      **kwargs: Additional keyword arguments for sampling params, e.g.\n",
        "        temperature, topk, guidance_weight, max_decode_frames.\n",
        "\n",
        "    Returns:\n",
        "      A tuple of the generated audio and the updated state.\n",
        "    \"\"\"\n",
        "    # Init state, style, and seed (if not provided)\n",
        "    if state is None:\n",
        "      state = self.init_state()\n",
        "    if seed is None:\n",
        "      seed = np.random.randint(0, 2**31)\n",
        "\n",
        "    context_tokens = {\n",
        "        \"orig\": kwargs.get(\"context_tokens_orig\", state.context_tokens),\n",
        "        \"mix\": state.context_tokens,\n",
        "    }\n",
        "    codec_tokens_lm = {}\n",
        "    for key, tokens in context_tokens.items():\n",
        "      # Prepare codec tokens for LLM\n",
        "      codec_tokens_lm[key] = np.where(\n",
        "          tokens >= 0,\n",
        "          utils.rvq_to_llm(\n",
        "              np.maximum(tokens, 0),\n",
        "              self.config.codec_rvq_codebook_size,\n",
        "              self.config.vocab_codec_offset,\n",
        "          ),\n",
        "          np.full_like(tokens, self.config.vocab_mask_token),\n",
        "      )\n",
        "      assert (\n",
        "          codec_tokens_lm[key].shape == self.config.context_tokens_shape\n",
        "      )  # (250, 16)\n",
        "      assert (\n",
        "          codec_tokens_lm[key].min() >= self.config.vocab_mask_token\n",
        "          and codec_tokens_lm[key].max()\n",
        "          < (self.config.vocab_codec_offset + self.config.vocab_codec_size)\n",
        "      )  # check range [1, 16386)\n",
        "\n",
        "    # Prepare style tokens for LLM\n",
        "    if style is None:\n",
        "      style_tokens_lm = np.full(\n",
        "          (self.config.encoder_style_rvq_depth,),\n",
        "          self.config.vocab_mask_token,\n",
        "          dtype=np.int32,\n",
        "      )\n",
        "    else:\n",
        "      if style.shape != (self.style_model.config.embedding_dim,):\n",
        "        raise ValueError(f\"Invalid style shape: {style.shape}\")\n",
        "      style_tokens = self.style_model.tokenize(style)\n",
        "      assert style_tokens.shape == (self.style_model.config.rvq_depth,)\n",
        "      style_tokens = style_tokens[: self.config.encoder_style_rvq_depth]\n",
        "      style_tokens_lm = utils.rvq_to_llm(\n",
        "          style_tokens,\n",
        "          self.config.style_rvq_codebook_size,\n",
        "          self.config.vocab_style_offset,\n",
        "      )\n",
        "      assert (\n",
        "          style_tokens_lm.min() >= self.config.vocab_style_offset\n",
        "          and style_tokens_lm.max()\n",
        "          < (self.config.vocab_style_offset + self.config.vocab_style_size)\n",
        "      )  # check range [17140, 23554)\n",
        "    assert style_tokens_lm.shape == (\n",
        "        self.config.encoder_style_rvq_depth,\n",
        "    )  # (6,)\n",
        "\n",
        "    # Prepare encoder input\n",
        "    batch_size, _, _ = self._device_params\n",
        "    encoder_inputs_pos = np.concatenate(\n",
        "        [codec_tokens_lm[\"mix\"][\n",
        "            :, :self.config.encoder_codec_rvq_depth].reshape(-1),\n",
        "         style_tokens_lm\n",
        "        ],\n",
        "        axis=0,\n",
        "    )\n",
        "    assert encoder_inputs_pos.shape == (1006,)  # 250 frames * 4 + 6 style\n",
        "\n",
        "    # Construct negative using original context tokens, and masking style.\n",
        "    encoder_inputs_neg = np.concatenate(\n",
        "        [codec_tokens_lm[\"orig\"][\n",
        "            :, :self.config.encoder_codec_rvq_depth].reshape(-1),\n",
        "         style_tokens_lm\n",
        "        ],\n",
        "        axis=0,\n",
        "    )\n",
        "    encoder_inputs_neg[-self.config.encoder_style_rvq_depth:] = (\n",
        "        self.config.vocab_mask_token)\n",
        "    assert encoder_inputs_neg.shape == (1006,)\n",
        "\n",
        "    encoder_inputs = np.stack([encoder_inputs_pos, encoder_inputs_neg], axis=0)\n",
        "    assert encoder_inputs.shape == (2, 1006)\n",
        "\n",
        "    # Generate tokens / NLL scores.\n",
        "    max_decode_frames = kwargs.get(\n",
        "        \"max_decode_frames\", self.config.chunk_length_frames\n",
        "    )\n",
        "    generated_tokens, _ = self._llm(\n",
        "        {\n",
        "            \"encoder_input_tokens\": encoder_inputs,\n",
        "            \"decoder_input_tokens\": np.zeros(\n",
        "                (\n",
        "                    batch_size,\n",
        "                    self.config.chunk_length_frames\n",
        "                    * self.config.decoder_codec_rvq_depth,\n",
        "                ),\n",
        "                dtype=np.int32,\n",
        "            ),\n",
        "        },\n",
        "        {\n",
        "            \"max_decode_steps\": np.array(\n",
        "                max_decode_frames * self.config.decoder_codec_rvq_depth,\n",
        "                dtype=np.int32,\n",
        "            ),\n",
        "            \"guidance_weight\": kwargs.get(\n",
        "                \"guidance_weight\", self._guidance_weight\n",
        "            ),\n",
        "            \"temperature\": kwargs.get(\"temperature\", self._temperature),\n",
        "            \"topk\": kwargs.get(\"topk\", self._topk),\n",
        "        },\n",
        "        jax.random.PRNGKey(seed + state.chunk_index),\n",
        "    )\n",
        "\n",
        "    # Process generated tokens\n",
        "    generated_tokens = np.array(generated_tokens)\n",
        "    assert generated_tokens.shape == (\n",
        "        batch_size,\n",
        "        self.config.chunk_length_frames * self.config.decoder_codec_rvq_depth,\n",
        "    )\n",
        "    generated_tokens = generated_tokens[:1]  # larger batch sizes unsupported\n",
        "    generated_tokens = generated_tokens.reshape(\n",
        "        self.config.chunk_length_frames, self.config.decoder_codec_rvq_depth\n",
        "    )  # (50, 16)\n",
        "    generated_tokens = generated_tokens[:max_decode_frames]  # (N, 16)\n",
        "    with warnings.catch_warnings():\n",
        "      warnings.simplefilter(\"ignore\")\n",
        "      generated_rvq_tokens = utils.llm_to_rvq(\n",
        "          generated_tokens,\n",
        "          self.config.codec_rvq_codebook_size,\n",
        "          self.config.vocab_codec_offset,\n",
        "          safe=False,\n",
        "      )\n",
        "\n",
        "    # Decode via SpectroStream using additional frame of samples for crossfading\n",
        "    # We want to generate a 2s chunk with an additional 40ms of crossfade, which\n",
        "    # is one additional codec frame. Caller is responsible for actually applying\n",
        "    # the crossfade.\n",
        "    xfade_frames = state.context_tokens[-self.config.crossfade_length_frames :]\n",
        "    if state.chunk_index == 0:\n",
        "      # NOTE: This will create 40ms of gibberish at the beginning but it's OK.\n",
        "      xfade_frames = np.zeros_like(xfade_frames)\n",
        "    assert xfade_frames.min() >= 0\n",
        "    xfade_tokens = np.concatenate([xfade_frames, generated_rvq_tokens], axis=0)\n",
        "    assert xfade_tokens.shape == (\n",
        "        self.config.crossfade_length_frames + max_decode_frames,\n",
        "        self.config.decoder_codec_rvq_depth,\n",
        "    )  # (N+1, 16)\n",
        "    waveform = self.codec.decode(xfade_tokens)\n",
        "    assert isinstance(waveform, audio_lib.Waveform)\n",
        "    assert waveform.samples.shape == (\n",
        "        self.config.crossfade_length_samples\n",
        "        + max_decode_frames * self.config.frame_length_samples,\n",
        "        self.num_channels,\n",
        "    )  # ((N+1)*1920, 2)\n",
        "\n",
        "    # NOTE: This Colab applies the crossfade elsewhere, so the crossfade\n",
        "    # samples stored in the state below go unused.\n",
        "    crossfade_samples = waveform[-self.config.crossfade_length_samples:]\n",
        "\n",
        "    # Update state\n",
        "    state.update(generated_rvq_tokens, crossfade_samples)\n",
        "\n",
        "    return (waveform, state)\n",
        "\n",
        "\n",
        "spectrostream_model = spectrostream.SpectroStreamJAX(lazy=False)\n",
        "\n",
        "SAMPLE_RATE = 48000\n",
        "CHUNK_SECONDS = 2.0\n",
        "CHUNK_SAMPLES = int(CHUNK_SECONDS * SAMPLE_RATE)\n",
        "INPUT_AUDIO = None\n",
        "\n",
        "# Fetch checkpoints and initialize model (may take up to 5 minutes)\n",
        "MRT = MagentaRTCFGTied(tag=\"large\", lazy=False)\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "werlZh98wPVY"
      },
      "source": [
        "# Step 2: 🎵 Stream music with audio injection! 🎤"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "4UEZRI4969AJ"
      },
      "outputs": [],
      "source": [
        "# @title Input settings\n",
        "\n",
        "# @markdown #### Select input source: 💾 Pre-recorded or 🎤 Live Mic\n",
        "\n",
        "input_source = 'Pre-recorded' # @param ['Live Mic', 'Pre-recorded']\n",
        "use_prerecorded_input = (input_source == 'Pre-recorded')\n",
        "\n",
        "# @markdown #### 💾 Choose pre-recorded input\n",
        "\n",
        "prerecorded_audio = \"Electronic Groove\" # @param [\"Electronic Groove\", \"Funk Guitar\", \"Upload your own\"]\n",
        "\n",
        "# @markdown #### 🎤 Live mic calibration\n",
        "# @markdown * Use wired headphones\n",
        "# @markdown * Take headphones off and bring close to mic\n",
        "# @markdown * Click `start` and wait ~10s for measurement\n",
        "# @markdown * Should show `4.0s` or `4.1s`\n",
        "\n",
        "\n",
        "# === Calibration and Audio Upload ===\n",
        "def load_example(fname):\n",
        "  return load_audio(\n",
        "      os.path.join('/content/colab_examples', fname),\n",
        "      sample_rate=SAMPLE_RATE,\n",
        "  )\n",
        "\n",
        "\n",
        "if use_prerecorded_input:\n",
        "  print('Using Pre-recorded Input!')\n",
        "\n",
        "  if prerecorded_audio == \"Electronic Groove\":\n",
        "    audio_samples = load_example(\"antoines_groove.mp3\")\n",
        "\n",
        "  elif prerecorded_audio == \"Funk Guitar\":\n",
        "    audio_samples = load_example(\"jesses_funk_guitar.mp3\")\n",
        "\n",
        "  elif prerecorded_audio == \"Upload your own\":\n",
        "    # Upload audio file\n",
        "    audio_data = list(files.upload().values())[0]\n",
        "    audio_samples = wav_data_to_samples_librosa(\n",
        "        audio_data,\n",
        "        sample_rate=SAMPLE_RATE,\n",
        "    )\n",
        "\n",
        "  # Postprocess\n",
        "  if audio_samples.ndim == 2:\n",
        "    audio_samples = audio_samples.T\n",
        "  else:\n",
        "    audio_samples = np.tile(audio_samples[:, None], 2)\n",
        "\n",
        "  print(\"First 10s of input audio:\")\n",
        "  ipd.display(ipd.Audio(audio_samples[:SAMPLE_RATE * 10].T, rate=SAMPLE_RATE))\n",
        "\n",
        "  # Add one buffer of looped audio to make slicing easier later.\n",
        "  INPUT_AUDIO = np.concatenate([audio_samples, audio_samples[:CHUNK_SAMPLES]])\n",
        "\n",
        "else:\n",
        "  print('Using Live Mic Input!')\n",
        "  # Calibrate latency.\n",
        "  latency_estimator = colab_utils.LatencyEstimator(\n",
        "    rate=SAMPLE_RATE,\n",
        "    buffer_size=int(SAMPLE_RATE * CHUNK_SECONDS),\n",
        "    duration=1,\n",
        "  )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "LVLb4ywQUYgx"
      },
      "outputs": [],
      "source": [
        "# @title Metronome and Loop Settings\n",
        "\n",
        "# @markdown <ul>\n",
        "# @markdown <li>🎤 <b>Live Mic</b>: You'll perform a few intro loops to a metronome.</li>\n",
        "# @markdown <li>💾 <b>Pre-recorded Input</b>: The model will join after a few intro loops.</li>\n",
        "# @markdown </ul>\n",
        "\n",
        "# @markdown Beats per minute (BPM) used for metronome:\n",
        "bpm = 120  # @param {type: \"number\"}\n",
        "\n",
        "# @markdown Mic audio is delayed by one loop of this many beats:\n",
        "beats_per_loop = 8  # @param {\"type\":\"slider\",\"min\":1,\"max\":16,\"step\":1}\n",
        "\n",
        "loop_seconds = beats_per_loop * 60 / bpm\n",
        "loop_samples = int(loop_seconds * SAMPLE_RATE)\n",
        "\n",
        "# @markdown How many loops before the model joins in:\n",
        "\n",
        "intro_loops = 4  # @param {\"type\":\"slider\",\"min\":1,\"max\":5,\"step\":1}\n",
        "\n",
        "\n",
        "METRONOME_AUDIO = get_metronome_audio(loop_samples, beats_per_loop, SAMPLE_RATE, CHUNK_SAMPLES)\n",
        "\n",
        "print(f'Model will join in after {intro_loops * loop_seconds:.2f} seconds')\n",
        "\n",
        "\n",
        "# === Advanced settings ===\n",
        "\n",
        "# Extra prefix frames to include in the input+output mix that will be encoded.\n",
        "# These may be used to update the model context.\n",
        "MIX_PREFIX_FRAMES = 16\n",
        "\n",
        "# Frames to trim from the left of the input+output mix before updating the\n",
        "# model context, to avoid edge artifacts.\n",
        "LEFT_EDGE_FRAMES_TO_REMOVE = 8\n",
        "assert LEFT_EDGE_FRAMES_TO_REMOVE <= MIX_PREFIX_FRAMES\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "ADSoHWwT7Q7P"
      },
      "outputs": [],
      "source": [
        "# @title Prompt Settings\n",
        "\n",
        "# @markdown Initial style prompts:\n",
        "initial_prompts = 'lofi hip hop beat, funk jam, acid house'  # @param {type: \"string\", placeholder: \"lofi hip hop beat, funk jam, acid house\"}\n",
        "\n",
        "initial_prompts = [prompt.strip() for prompt in initial_prompts.split(',')]\n",
        "\n",
        "# @markdown Include “live audio prompt” for stronger audio steering?\n",
        "\n",
        "live_audio_prompt = True  # @param {type: \"boolean\"}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "owxwmtM_wr5m"
      },
      "outputs": [],
      "source": [
        "# @title ▶️ Run this cell to start demo (~1 minute first time)\n",
        "\n",
        "if not use_prerecorded_input:\n",
        "  assert latency_estimator.done, \"It looks like you forgot to run step 2!\"\n",
        "  latency_samples = latency_estimator.get_latency()\n",
        "  assert 0 < latency_samples - 2 * CHUNK_SAMPLES < 0.3 * SAMPLE_RATE, (\n",
        "      \"Bad latency measurement. Rerun step 2.\")\n",
        "\n",
        "\n",
        "class AudioFade:\n",
        "  \"\"\"Handles the crossfade between audio chunks.\n",
        "\n",
        "  Args:\n",
        "    chunk_size: Number of audio samples per predicted frame (the current\n",
        "      SpectroStream model produces 25Hz frames, each corresponding to 1920\n",
        "      audio samples at 48kHz).\n",
        "    num_chunks: Number of audio chunks to fade between.\n",
        "    stereo: Whether the predicted audio is stereo or mono.\n",
        "  \"\"\"\n",
        "\n",
        "  def __init__(self, chunk_size: int, num_chunks: int, stereo: bool):\n",
        "    fade_size = chunk_size * num_chunks\n",
        "    self.fade_size = fade_size\n",
        "    self.num_chunks = num_chunks\n",
        "\n",
        "    self.previous_chunk = np.zeros(fade_size)\n",
        "    self.ramp = np.sin(np.linspace(0, np.pi / 2, fade_size)) ** 2\n",
        "\n",
        "    if stereo:\n",
        "      self.previous_chunk = self.previous_chunk[:, np.newaxis]\n",
        "      self.ramp = self.ramp[:, np.newaxis]\n",
        "\n",
        "  def reset(self):\n",
        "    self.previous_chunk = np.zeros_like(self.previous_chunk)\n",
        "\n",
        "  def __call__(self, chunk: np.ndarray) -> np.ndarray:\n",
        "    chunk[: self.fade_size] *= self.ramp\n",
        "    chunk[: self.fade_size] += self.previous_chunk\n",
        "    self.previous_chunk = chunk[-self.fade_size :] * np.flip(self.ramp)\n",
        "    return chunk[: -self.fade_size]\n",
        "\n",
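        "# Illustrative check (`_demo_ramp` is a demo-only name): the sin^2 ramp used\n",
        "# by AudioFade and its time-reversed copy sum to exactly 1 at every sample,\n",
        "# so overlap-adding a faded-out tail with a faded-in head leaves a steady\n",
        "# signal unchanged.\n",
        "_demo_ramp = np.sin(np.linspace(0, np.pi / 2, 1920)) ** 2\n",
        "assert np.allclose(_demo_ramp + np.flip(_demo_ramp), 1.0)\n",
        "\n",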
        "\n",
        "@dataclasses.dataclass\n",
        "class AudioInjectionState:\n",
        "  \"\"\"State management for Audio Injection.\"\"\"\n",
        "  # The most recent context window (10s) of audio tokens corresponding to the\n",
        "  # model's predicted output. These are parallel to `state.context_tokens` but\n",
        "  # that context has the input audio mixed in.\n",
        "  context_tokens_orig: np.ndarray\n",
        "  # Stores all audio input (mono for live input, stereo for prerecorded input)\n",
        "  all_inputs: np.ndarray\n",
        "  # Stores all audio output (stereo)\n",
        "  all_outputs: np.ndarray\n",
        "  # How many chunks of audio have been generated\n",
        "  step: int\n",
        "\n",
        "\n",
        "# TODO(nconstant): Consider factoring out methods shared with MagentaRTStreamer\n",
        "# from Magenta_RT_Demo.ipynb\n",
        "class AudioInjectionStreamer:\n",
        "  \"\"\"Audio streamer class for Magenta RT model with Audio Injection.\n",
        "\n",
        "  This class holds a pretrained Magenta RT model, a cross fade state, a\n",
        "  generation state, audio injection state, and an asynchronous executor to\n",
        "  handle prompt embedding without interrupting the audio thread.\n",
        "\n",
        "  Args:\n",
        "    system: A MagentaRTBase instance.\n",
        "  \"\"\"\n",
        "\n",
        "  def __init__(\n",
        "      self,\n",
        "      system: system.MagentaRTBase,\n",
        "      sample_rate: int = 48000,\n",
        "      num_channels: int = 2,\n",
        "      buffer_size: int = 2 * 48000,\n",
        "      extra_buffering: int = 0\n",
        "  ):\n",
        "    config = system.config\n",
        "    self.system = system\n",
        "    self.audio_streamer = None\n",
        "    self.sample_rate = sample_rate\n",
        "    self.num_channels = num_channels\n",
        "    self.buffer_size = buffer_size\n",
        "    self.extra_buffering = extra_buffering\n",
        "    self.fade = AudioFade(\n",
        "        chunk_size=int(config.codec_sample_rate * config.crossfade_length),\n",
        "        num_chunks=1,\n",
        "        stereo=True\n",
        "    )\n",
        "    self.state = None\n",
        "    self.executor = concurrent.futures.ThreadPoolExecutor()\n",
        "    context_seconds = config.context_length\n",
        "    context_frames = int(context_seconds * config.codec_frame_rate)\n",
        "    context_samples = int(context_seconds * SAMPLE_RATE)\n",
        "    self.injection_state = AudioInjectionState(\n",
        "        context_tokens_orig=np.zeros(\n",
        "            (context_frames, config.decoder_codec_rvq_depth),\n",
        "            dtype=np.int32\n",
        "        ),\n",
        "        all_inputs=np.zeros(\n",
        "            (context_samples, 2)\n",
        "            if use_prerecorded_input else (context_samples,),\n",
        "            dtype=np.float32\n",
        "        ),\n",
        "        all_outputs=np.zeros((context_samples, 2), dtype=np.float32),\n",
        "        step=-1,  # This will be 0 after the warmup call.\n",
        "    )\n",
        "\n",
        "  @property\n",
        "  def warmup(self):\n",
        "    \"\"\"Returns whether to warm up the audio streamer.\"\"\"\n",
        "    return True\n",
        "\n",
        "  def on_stream_start(self):\n",
        "    \"\"\"Called when the UI starts streaming.\"\"\"\n",
        "    self.get_style_embedding(force_wait=False)\n",
        "    self.get_style_embedding(force_wait=True)\n",
        "    if self.audio_streamer is not None:\n",
        "      self.audio_streamer.reset_ring_buffer()\n",
        "\n",
        "  def on_stream_stop(self):\n",
        "    \"\"\"Called when the UI stops streaming.\"\"\"\n",
        "    pass\n",
        "\n",
        "  def reset(self):\n",
        "    self.state = None\n",
        "    self.fade.reset()\n",
        "    self.embed_style.cache_clear()\n",
        "    if self.audio_streamer is not None:\n",
        "      self.audio_streamer.reset_ring_buffer()\n",
        "\n",
        "  def start(self):\n",
        "    self.audio_streamer = colab_utils.AudioStreamer(\n",
        "        self,\n",
        "        rate=self.sample_rate,\n",
        "        buffer_size=self.buffer_size,\n",
        "        enable_input=True,\n",
        "        warmup=self.warmup,\n",
        "        raw_input_audio=True,\n",
        "        enable_automatic_gain_control_on_input=True,\n",
        "        num_output_channels=self.num_channels,\n",
        "        additional_buffered_samples=self.extra_buffering,\n",
        "        start_streaming_callback=self.on_stream_start,\n",
        "        stop_streaming_callback=self.on_stream_stop,\n",
        "    )\n",
        "    self.reset()\n",
        "\n",
        "  def stop(self):\n",
        "    self.executor.shutdown(wait=True)\n",
        "    if self.audio_streamer is not None:\n",
        "      del self.audio_streamer\n",
        "      self.audio_streamer = None\n",
        "\n",
        "  def global_ui_params(self):\n",
        "    return colab_utils.Parameters.get_values()\n",
        "\n",
        "  @functools.cache\n",
        "  def embed_style(self, style: str):\n",
        "    return self.executor.submit(self.system.embed_style, style)\n",
        "\n",
        "  @functools.cache\n",
        "  def embed_16k_audio(self, audio: tuple[float]):\n",
        "    \"\"\"Embed 16k audio asyncronously, returning a future.\"\"\"\n",
        "    audio = audio_lib.Waveform(np.asarray(audio), 16000)\n",
        "    return self.executor.submit(self.system.embed_style, audio)\n",
        "\n",
        "  def embed_48k_audio(self, audio: tuple[float]):\n",
        "    \"\"\"Embed 48k audio asyncronously, returning an embedding.\"\"\"\n",
        "    resampled_audio = resampy.resample(np.asarray(audio), 48000, 16000)\n",
        "    return self.embed_16k_audio(tuple(resampled_audio)).result()\n",
        "\n",
        "  def get_style_embedding(self, force_wait: bool = False):\n",
        "    prompts = self.get_prompts()\n",
        "    weighted_embedding = np.zeros((768,), dtype=np.float32)\n",
        "    total_weight = 0.0\n",
        "    for prompt_value, prompt_weight in prompts:\n",
        "      match type(prompt_value):\n",
        "        case prompt_types.TextPrompt:\n",
        "          if not prompt_value:\n",
        "            continue\n",
        "          embedding = self.embed_style(prompt_value)\n",
        "\n",
        "        case prompt_types.AudioPrompt:\n",
        "          embedding = self.embed_16k_audio(tuple(prompt_value.value))\n",
        "\n",
        "        case prompt_types.EmbeddingPrompt:\n",
        "          embedding = prompt_value.value\n",
        "\n",
        "        case _:\n",
        "          raise ValueError(f\"Unsupported prompt type: {type(prompt_value)}\")\n",
        "\n",
        "      if isinstance(embedding, concurrent.futures.Future):\n",
        "        if force_wait:\n",
        "          embedding.result()\n",
        "\n",
        "        if not embedding.done():\n",
        "          continue\n",
        "\n",
        "        embedding = embedding.result()\n",
        "\n",
        "      weighted_embedding += embedding * prompt_weight\n",
        "      total_weight += prompt_weight\n",
        "\n",
        "    if total_weight > 0:\n",
        "      weighted_embedding /= total_weight\n",
        "\n",
        "    return weighted_embedding\n",
        "\n",
        "  def get_prompts(self):\n",
        "    params = self.global_ui_params()\n",
        "    num_prompts = sum(map(lambda s: \"prompt_value\" in s, params.keys()))\n",
        "    prompts = []\n",
        "    for i in range(num_prompts):\n",
        "      prompt_weight = params[f\"prompt_weight_{i}\"]\n",
        "      prompt_value = params[f\"prompt_value_{i}\"]\n",
        "\n",
        "      if prompt_value is None or not prompt_weight:\n",
        "        continue\n",
        "\n",
        "      match type(prompt_value):\n",
        "        case prompt_types.TextPrompt:\n",
        "          prompt_value = prompt_value.strip()\n",
        "        case prompt_types.AudioPrompt:\n",
        "          pass\n",
        "        case prompt_types.EmbeddingPrompt:\n",
        "          pass\n",
        "        case _:\n",
        "          raise ValueError(f\"Unsupported prompt type: {type(prompt_value)}\")\n",
        "\n",
        "      prompts.append((prompt_value, prompt_weight))\n",
        "    return prompts\n",
        "\n",
        "  def generate(self, ui_params, inputs):\n",
        "    if use_prerecorded_input:\n",
        "      assert INPUT_AUDIO is not None, (\n",
        "          \"To use prerecorded input, first upload audio using step 4.\")\n",
        "      start = (self.injection_state.step * CHUNK_SAMPLES) % (\n",
        "          len(INPUT_AUDIO) - CHUNK_SAMPLES)\n",
        "      end = start + CHUNK_SAMPLES\n",
        "      inputs = INPUT_AUDIO[start:end]\n",
        "      inputs_mono = np.mean(inputs, axis=-1)\n",
        "\n",
        "    if LIVE_AUDIO_PROMPT is not None:\n",
        "      # Update live audio prompt with latest inputs.\n",
        "      LIVE_AUDIO_PROMPT.update_audio_input(\n",
        "          inputs_mono if use_prerecorded_input else inputs)\n",
        "\n",
        "    # Add this input chunk to the end of `all_inputs`.\n",
        "    self.injection_state.all_inputs = np.concatenate(\n",
        "        [self.injection_state.all_inputs, inputs], axis=0\n",
        "    )\n",
        "\n",
        "    # Pass an extra prefix of mixed audio to the encoder so we can throw away\n",
        "    # the earliest frames, which have edge artifacts.\n",
        "    mix_samples = (\n",
        "        CHUNK_SAMPLES + MIX_PREFIX_FRAMES\n",
        "        * self.system.config.frame_length_samples\n",
        "    )\n",
        "\n",
        "    # Input audio will be delayed by one loop before being mixed with model\n",
        "    # output.\n",
        "    beats_per_loop = ui_params[\"beats_per_loop\"]\n",
        "    loop_seconds = beats_per_loop * 60 / bpm\n",
        "    loop_samples = int(loop_seconds * SAMPLE_RATE)\n",
        "\n",
        "    # \"I/O offset\" is the number of samples by which we shift the inputs\n",
        "    # before mixing them with the outputs.\n",
        "    io_offset = CHUNK_SAMPLES - int(\n",
        "        streamer.system.config.crossfade_length * SAMPLE_RATE)\n",
        "    if not use_prerecorded_input:\n",
        "      io_offset += loop_samples - latency_samples\n",
        "      # TODO(nconstant): Support \"free\" mode by padding with silence instead of\n",
        "      # failing here.\n",
        "      assert io_offset >= 0, (\n",
        "          \"Increase `beats_per_loop` in the previous cell and rerun it.\")\n",
        "\n",
        "    # Select a window of input audio for mixing.\n",
        "    inputs_to_mix = self.injection_state.all_inputs[\n",
        "        -(io_offset + mix_samples):-io_offset]\n",
        "\n",
        "    # Select a window of output audio for mixing.\n",
        "    outputs_to_mix = self.injection_state.all_outputs[-mix_samples:]\n",
        "    outputs_to_mix *= ui_params.get(\"model_feedback\")\n",
        "\n",
        "    # Silence the last `input_gap_ms` ms of `inputs_to_mix`, to discourage\n",
        "    # copying the input verbatim.\n",
        "    input_gap_samples = int(SAMPLE_RATE * ui_params.get(\"input_gap\") / 1000)\n",
        "    ramp_samples = 100\n",
        "    ramp = np.linspace(1, 0, min(ramp_samples, input_gap_samples))\n",
        "    if use_prerecorded_input:\n",
        "      ramp = np.stack([ramp, ramp], axis=-1)\n",
        "    envelope = np.concat(\n",
        "        [np.ones_like(inputs_to_mix[input_gap_samples:]),\n",
        "         ramp,\n",
        "         np.zeros_like(inputs_to_mix[:max(0, input_gap_samples - ramp_samples)])\n",
        "        ]\n",
        "    )\n",
        "    inputs_to_mix = inputs_to_mix * envelope\n",
        "\n",
        "    # Mix input and output audio.\n",
        "    if not use_prerecorded_input:\n",
        "      inputs_to_mix = inputs_to_mix[:, None]\n",
        "    mix_audio = audio_lib.Waveform(\n",
        "        inputs_to_mix + outputs_to_mix,\n",
        "        sample_rate=SAMPLE_RATE\n",
        "    )\n",
        "    # Encode mix audio to tokens, and throw away a prefix.\n",
        "    mix_tokens = spectrostream_model.encode(mix_audio)[\n",
        "        LEFT_EDGE_FRAMES_TO_REMOVE:]\n",
        "\n",
        "    if self.state is not None:\n",
        "      self.injection_state.context_tokens_orig = self.state.context_tokens\n",
        "      self.state.context_tokens[-len(mix_tokens):] = mix_tokens[\n",
        "          :, :self.system.config.decoder_codec_rvq_depth]\n",
        "\n",
        "    max_decode_frames = round(\n",
        "        CHUNK_SECONDS * self.system.config.codec_frame_rate)\n",
        "\n",
        "    chunk, self.state = self.system.generate_chunk(\n",
        "        state=self.state,\n",
        "        style=self.get_style_embedding(),\n",
        "        seed=None,\n",
        "        max_decode_frames=max_decode_frames,\n",
        "        context_tokens_orig=self.injection_state.context_tokens_orig,\n",
        "        **ui_params,\n",
        "    )\n",
        "\n",
        "    # Add this chunk (before cross-fading) to the end of `all_outputs`.\n",
        "    # Note, we ignore the first frame of the chunk, which will be used for\n",
        "    # cross-fading.\n",
        "    self.injection_state.all_outputs = np.concatenate(\n",
        "        [self.injection_state.all_outputs,\n",
        "         chunk.samples[self.fade.fade_size:]]\n",
        "    )\n",
        "\n",
        "    chunk = self.fade(chunk.samples)\n",
        "    chunk *= ui_params.get(\"model_volume\")\n",
        "\n",
        "    if ui_params.get(\"metronome\"):\n",
        "      # Add metronome audio to output.\n",
        "      start = (self.injection_state.step * CHUNK_SAMPLES) % loop_samples\n",
        "      end = start + CHUNK_SAMPLES\n",
        "      metronome_chunk = METRONOME_AUDIO[start:end]\n",
        "      chunk += metronome_chunk[:, None]\n",
        "\n",
        "    if use_prerecorded_input:\n",
        "      chunk += inputs * ui_params.get(\"input_volume\")\n",
        "\n",
        "    # When intro loops are over, raise model volume and feedback.\n",
        "    if self.injection_state.step + 1 == int(\n",
        "        intro_loops * loop_samples / CHUNK_SAMPLES):\n",
        "      colab_utils.Parameters._UI_ELEMENTS[\"model_feedback\"].value = 0.95\n",
        "      colab_utils.Parameters._UI_ELEMENTS[\"model_volume\"].value = (\n",
        "          0.6 if use_prerecorded_input else 0.95)\n",
        "      if not use_prerecorded_input:\n",
        "         # Turn off metronome, and raise input gap.\n",
        "        colab_utils.Parameters._UI_ELEMENTS[\"metronome\"].value = False\n",
        "        colab_utils.Parameters._UI_ELEMENTS[\"input_gap\"].value = 400\n",
        "\n",
        "    self.injection_state.step += 1\n",
        "\n",
        "    return chunk\n",
        "\n",
        "  def __call__(self, inputs):\n",
        "    return self.generate(self.global_ui_params(), inputs)\n",
        "\n",
        "\n",
        "streamer = AudioInjectionStreamer(MRT, buffer_size=CHUNK_SAMPLES)\n",
        "\n",
        "\n",
        "def build_prompt_ui(\n",
        "    default_prompts: Sequence[str],\n",
        "    num_audio_prompt: int,\n",
        "    include_live_audio_prompt: bool = False):\n",
        "  \"\"\"Add interactive prompt widgets and register them.\"\"\"\n",
        "  prompts = []\n",
        "\n",
        "  if include_live_audio_prompt:\n",
        "    live_audio_prompt = widgets.LiveAudioPrompt(\n",
        "        streamer.embed_48k_audio,\n",
        "        sample_rate=48000,\n",
        "        trigger_embedding_every_n_seconds=0)\n",
        "    live_audio_prompt.slider.value = 0.1\n",
        "    prompts.append(live_audio_prompt)\n",
        "    prompts[-1].slider.value = 0.1\n",
        "  else:\n",
        "    live_audio_prompt = None\n",
        "\n",
        "  for p in default_prompts:\n",
        "    prompts.append(widgets.Prompt())\n",
        "    prompts[-1].text.value = p\n",
        "  prompts[-len(default_prompts)].slider.value = 1.0\n",
        "\n",
        "  # Add audio prompts\n",
        "  for _ in range(num_audio_prompt):\n",
        "    prompts.append(widgets.AudioPrompt())\n",
        "    prompts[-1].slider.value = 0.0\n",
        "\n",
        "  colab_utils.Parameters.register_ui_elements(\n",
        "      display=False,\n",
        "      **{f\"prompt_weight_{i}\": p.slider for i, p in enumerate(prompts)},\n",
        "      **{f\"prompt_value_{i}\": p.prompt_value for i, p in enumerate(prompts)},\n",
        "  )\n",
        "  return live_audio_prompt, [p.get_widget() for p in prompts]\n",
        "\n",
        "\n",
        "def build_sampling_option_ui():\n",
        "  \"\"\"Add interactive sampling option widgets and register them.\"\"\"\n",
        "  options = {\n",
        "      \"temperature\": ipw.FloatSlider(\n",
        "          min=0.0,\n",
        "          max=4.0,\n",
        "          step=0.01,\n",
        "          value=1.2,\n",
        "          description=\"temperature\",\n",
        "      ),\n",
        "      \"topk\": ipw.IntSlider(\n",
        "          min=0,\n",
        "          max=1024,\n",
        "          step=1,\n",
        "          value=30,\n",
        "          description=\"topk\",\n",
        "      ),\n",
        "      \"guidance_weight\": ipw.FloatSlider(\n",
        "          min=0.0,\n",
        "          max=10.0,\n",
        "          step=0.01,\n",
        "          value=1.5 if use_prerecorded_input else 0.8,\n",
        "          description=\"guidance\",\n",
        "      ),\n",
        "  }\n",
        "  if use_prerecorded_input:\n",
        "    options.update({\n",
        "        \"model_volume\": ipw.FloatSlider(\n",
        "            min=0.0,\n",
        "            max=1.0,\n",
        "            step=0.05,\n",
        "            value=0.0,\n",
        "            description=\"model vol.\",\n",
        "        ),\n",
        "        \"input_volume\": ipw.FloatSlider(\n",
        "            value=1.0,\n",
        "            min=0.0,\n",
        "            max=1.0,\n",
        "            step=0.05,\n",
        "            orientation=\"horizontal\",\n",
        "            description=\"input volume\",\n",
        "        ),\n",
        "    })\n",
        "  else:\n",
        "    options.update({\n",
        "        \"metronome\": ipw.Checkbox(\n",
        "            value=True,\n",
        "            description=\"metronome\",\n",
        "            indent=False,\n",
        "        ),\n",
        "    })\n",
        "\n",
        "  colab_utils.Parameters.register_ui_elements(display=False, **options)\n",
        "\n",
        "  return list(options.values())\n",
        "\n",
        "\n",
        "def build_hidden_option_ui():\n",
        "  \"\"\"Add interactive hidden option widgets and register them.\"\"\"\n",
        "  options = {\n",
        "      \"input_gap\": ipw.IntSlider(\n",
        "          value=0,\n",
        "          min=0,\n",
        "          max=2000,\n",
        "          step=100,\n",
        "          orientation=\"horizontal\",\n",
        "          description=\"input gap\",\n",
        "      ),\n",
        "      \"beats_per_loop\": ipw.IntSlider(\n",
        "          min=0,\n",
        "          max=16,\n",
        "          step=1,\n",
        "          value=beats_per_loop,\n",
        "          description=\"beats/loop\",\n",
        "      ),\n",
        "      \"model_feedback\": ipw.FloatSlider(\n",
        "          min=0.0,\n",
        "          max=1.0,\n",
        "          step=0.05,\n",
        "          value=0.0,\n",
        "          description=\"model feedback\",\n",
        "      ),\n",
        "  }\n",
        "  if not use_prerecorded_input:\n",
        "    options.update({\n",
        "        \"model_volume\": ipw.FloatSlider(\n",
        "            min=0.0,\n",
        "            max=1.0,\n",
        "            step=0.05,\n",
        "            value=0.0,\n",
        "            description=\"model volume\",\n",
        "        ),\n",
        "    })\n",
        "\n",
        "  colab_utils.Parameters.register_ui_elements(display=False, **options)\n",
        "\n",
        "  return list(options.values())\n",
        "\n",
        "\n",
        "colab_utils.Parameters.reset()\n",
        "\n",
        "\n",
        "def _reset_state(*args, **kwargs):\n",
        "  del args, kwargs\n",
        "  streamer.reset()\n",
        "\n",
        "\n",
        "reset_button = ipw.Button(description=\"reset\")\n",
        "reset_button.on_click(_reset_state)\n",
        "\n",
        "# Hidden injection sliders.\n",
        "build_hidden_option_ui()\n",
        "\n",
        "LIVE_AUDIO_PROMPT, prompts = build_prompt_ui(\n",
        "    initial_prompts,\n",
        "    num_audio_prompt=1,\n",
        "    include_live_audio_prompt=live_audio_prompt,\n",
        ")\n",
        "\n",
        "# Building interactive UI\n",
        "ipd.display(\n",
        "    ipw.VBox([\n",
        "        widgets.area(\n",
        "            \"sampling options\",\n",
        "            *build_sampling_option_ui(),\n",
        "            reset_button,\n",
        "        ),\n",
        "        widgets.area(\n",
        "            \"prompts\",\n",
        "            *prompts,\n",
        "        ),\n",
        "    ])\n",
        ")\n",
        "\n",
        "streamer.start()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QCXoLFXV83Po"
      },
      "source": [
        "# Step 3: ⬇️ Download your session\n",
        "\n",
        "Run the cells below to replay or download your input and the model output (as left and right channels)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "mapmcT6_AuEP"
      },
      "outputs": [],
      "source": [
        "# @title Align saved input and output\n",
        "\n",
        "context_seconds = streamer.system.config.context_length\n",
        "context_samples = int(\n",
        "    context_seconds * streamer.system.config.codec_sample_rate)\n",
        "\n",
        "all_inputs = streamer.injection_state.all_inputs[context_samples:]\n",
        "all_outputs = streamer.injection_state.all_outputs[context_samples:]\n",
        "\n",
        "delay_samples = int(streamer.system.config.crossfade_length * SAMPLE_RATE)\n",
        "if not use_prerecorded_input:\n",
        "  delay_samples += latency_samples\n",
        "delayed_inputs = np.concat(\n",
        "    [all_inputs[delay_samples:],\n",
        "     np.zeros_like(all_inputs[:delay_samples])\n",
        "    ])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "l2yOL56Wxmng"
      },
      "outputs": [],
      "source": [
        "# @title Input audio\n",
        "print(\"\\n\")\n",
        "if use_prerecorded_input:\n",
        "  ipd.display(ipd.Audio(delayed_inputs.T, rate=SAMPLE_RATE))\n",
        "else:\n",
        "  ipd.display(ipd.Audio(delayed_inputs, rate=SAMPLE_RATE))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "mLi39BQ6w4TR"
      },
      "outputs": [],
      "source": [
        "# @title Output left\n",
        "print(\"\\n\")\n",
        "ipd.display(ipd.Audio(all_outputs[:, 0].T, rate=SAMPLE_RATE))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "iQ7K9WXvxDoQ"
      },
      "outputs": [],
      "source": [
        "# @title Output right\n",
        "print(\"\\n\")\n",
        "ipd.display(ipd.Audio(all_outputs[:, 1].T, rate=SAMPLE_RATE))"
      ]
    },
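    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form"
      },
      "outputs": [],
      "source": [
        "# @title Download session as a stereo WAV (input = left, output = right)\n",
        "# NOTE: this cell is a sketch rather than part of the original session\n",
        "# code. It assumes `delayed_inputs` and `all_outputs` from the alignment\n",
        "# cell above, and uses scipy and google.colab.files (both preinstalled in\n",
        "# Colab) to write and download the file.\n",
        "from google.colab import files\n",
        "from scipy.io import wavfile\n",
        "\n",
        "# Collapse each signal to mono, then pair them as left/right channels.\n",
        "inputs_mono = (np.mean(delayed_inputs, axis=-1)\n",
        "               if delayed_inputs.ndim == 2 else delayed_inputs)\n",
        "outputs_mono = np.mean(all_outputs, axis=-1)\n",
        "n = min(len(inputs_mono), len(outputs_mono))\n",
        "stereo = np.stack([inputs_mono[:n], outputs_mono[:n]], axis=-1)\n",
        "\n",
        "# Convert float [-1, 1] audio to 16-bit PCM and save.\n",
        "wavfile.write(\"magenta_rt_session.wav\", SAMPLE_RATE,\n",
        "              (np.clip(stereo, -1.0, 1.0) * 32767).astype(np.int16))\n",
        "files.download(\"magenta_rt_session.wav\")"
      ]
    },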
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xmvBk8UOphdL"
      },
      "source": [
        "# License and terms\n",
        "\n",
        "Magenta RealTime is offered under a combination of licenses: the codebase is\n",
        "licensed under\n",
        "[Apache 2.0](https://github.com/magenta/magenta-realtime/blob/main/LICENSE), and\n",
        "the model weights under\n",
        "[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode).\n",
        "\n",
        "In addition, we specify the following usage terms:\n",
        "\n",
        "Copyright 2025 Google LLC\n",
        "\n",
        "Use these materials responsibly and do not generate content, including outputs,\n",
        "that infringe or violate the rights of others, including rights in copyrighted\n",
        "content.\n",
        "\n",
        "Google claims no rights in outputs you generate using Magenta RealTime. You and\n",
        "your users are solely responsible for outputs and their subsequent uses.\n",
        "\n",
        "Unless required by applicable law or agreed to in writing, all software and\n",
        "materials distributed here under the Apache 2.0 or CC-BY licenses are\n",
        "distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,\n",
        "either express or implied. See the licenses for the specific language governing\n",
        "permissions and limitations under those licenses. You are solely responsible for\n",
        "determining the appropriateness of using, reproducing, modifying, performing,\n",
        "displaying or distributing the software and materials, and any outputs, and\n",
        "assume any and all risks associated with your use or distribution of any of the\n",
        "software and materials, and any outputs, and your exercise of rights and\n",
        "permissions under the licenses."
      ]
    }
  ],
  "metadata": {
    "accelerator": "TPU",
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
