{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Tce3stUlHN0L"
      },
      "source": [
        "##### Copyright 2024 Google LLC."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "tuOe1ymfHZPu"
      },
      "outputs": [],
      "source": [
        "# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c4b7e9a1"
      },
      "source": [
        "# Building a RAG Application with Firebase Genkit, Ollama, and Gemma\n",
        "\n",
        "In this tutorial, you will learn how to build a **Retrieval-Augmented Generation (RAG)** application using the following technologies:\n",
        "\n",
        "[**Genkit**](https://firebase.google.com/docs/genkit) is a framework designed to help you build AI-powered applications and features. It provides open-source libraries for Node.js and Go, plus developer tools for testing and debugging.\n",
        "\n",
        "[**Gemma**](https://ai.google.dev/gemma) is a family of lightweight, state-of-the-art open language models from Google. Built from the same research and technology used to create the Gemini models, Gemma models are text-to-text, decoder-only large language models (LLMs) available in English, with open weights, pre-trained variants, and instruction-tuned variants.\n",
        "\n",
        "[**Ollama**](https://ollama.ai/) is a tool that simplifies running language models locally. It allows you to manage and serve multiple models efficiently, making it easier to deploy and test AI models on your machine. With Ollama, you can switch between different models and versions seamlessly, providing flexibility in development and experimentation.\n",
        "\n",
        "[**Firebase**](https://firebase.google.com/) is a comprehensive app development platform by Google that provides services like real-time databases, authentication, cloud storage, hosting, and machine learning. In this tutorial, you will use **Cloud Firestore**, a scalable, flexible NoSQL cloud database, to store and sync data for client- and server-side development.\n",
        "\n",
        "[**Gradio**](https://gradio.app/) is an open-source Python library for creating user-friendly web interfaces to interact with machine learning models. It allows you to quickly create customizable UI components to interact with your models and also generate shareable web apps that anyone can use.\n",
        "\n",
        "By integrating these technologies, you will build a powerful RAG application capable of providing accurate and contextually relevant responses based on your custom data.\n",
        "\n",
        "## What you'll learn\n",
        "\n",
        "- **Setting Up the Development Environment**: Install and configure Node.js, Genkit, Firebase, Ollama, and Gradio within a Colab notebook.\n",
        "- **Managing Prompts with Dotprompt**: Modularize your prompts into separate `.prompt` files using **Dotprompt** for better organization and maintainability.\n",
        "- **Indexing Documents with Genkit Flows**: Use Genkit's flows to embed and index your data, making it retrievable for your RAG application.\n",
        "- **Building a Chatbot Interface**: Create a user-friendly chatbot interface with Gradio to interact with your app.\n",
        "\n",
        "\n",
        "Let's get started on building your RAG application!\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_2]Using_with_Firebase_Genkit_and_Ollama.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
        "  </td>\n",
        "</table>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "78f2d1e5"
      },
      "source": [
        "## **Setup**\n",
        "\n",
        "Before you begin, make sure you have:\n",
        "\n",
        "- A Google Cloud account.\n",
        "- Basic knowledge of Node.js and TypeScript.\n",
        "- Familiarity with Colab notebooks.\n",
        "- The latest version of **Google Cloud SDK** installed."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "setup-colab-runtime"
      },
      "source": [
        "## Select the Colab Runtime\n",
        "\n",
        "You'll use Google Colab as the environment to run your code, so configure the runtime as follows:\n",
        "\n",
        "1. **Open Google Colab** and create a new notebook.\n",
        "2. In the upper-right corner of the Colab window, click on the **▾ (Additional connection options)** button.\n",
        "3. Select **Change runtime type**.\n",
        "4. Under **Hardware accelerator**, choose **GPU**.\n",
        "5. Ensure that the **GPU type** is set to **T4**.\n",
        "\n",
        "This setup will give you enough computing power to run the Gemma model smoothly.\n",
        "\n",
        "**Once you've completed these steps, you're ready to move on to the next section, where you'll set up your credentials in Colab.**"
      ]
    },
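    {
      "cell_type": "markdown",
      "metadata": {
        "id": "verify-gpu"
      },
      "source": [
        "Optionally, you can confirm that a GPU is attached before continuing. The quick sanity check below assumes a Colab GPU runtime, where the `nvidia-smi` command is available:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "verify-gpu-cell"
      },
      "outputs": [],
      "source": [
        "# Show the attached GPU, its driver version, and available memory\n",
        "!nvidia-smi"
      ]
    },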
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "configure-credentials"
      },
      "source": [
        "### Configure Your Credentials\n",
        "\n",
        "First, get your Google API key from: https://aistudio.google.com/app/apikey\n",
        "\n",
        "Next, store the key as a Colab secret so the notebook can authenticate with Google AI Studio without exposing the key in your code.\n",
        "\n",
        "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
        "2. **Add Google API Key**:\n",
        "   - Create a new secret named `GOOGLE_API_KEY`.\n",
        "   - Paste your Google API Key into the Value input box.\n",
        "   - Toggle the button to allow notebook access to the secret."
      ]
    },
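    {
      "cell_type": "markdown",
      "metadata": {
        "id": "read-secret-note"
      },
      "source": [
        "The next cell is a minimal sketch of reading that secret and exporting it as an environment variable. The `google.colab.userdata` helper is Colab-specific, and the `GOOGLE_API_KEY` name must match the secret you created above:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "read-secret-cell"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Colab-only helper for reading notebook secrets\n",
        "from google.colab import userdata\n",
        "\n",
        "# Expose the key to any tools launched from this notebook\n",
        "os.environ[\"GOOGLE_API_KEY\"] = userdata.get(\"GOOGLE_API_KEY\")"
      ]
    },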
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0c9e3f25"
      },
      "source": [
        "## **Install dependencies**\n",
        "\n",
        "To build the RAG application, you need to install several tools and libraries: Node.js, Ollama, and the npm and Python packages used throughout this tutorial."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d7328eb3"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            ">>> Installing ollama to /usr/local\n",
            ">>> Downloading Linux amd64 bundle\n",
            ">>> Creating ollama user...\n",
            ">>> Adding ollama user to video group...\n",
            ">>> Adding current user to ollama group...\n",
            ">>> Creating ollama systemd service...\n",
            "WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.\n",
            ">>> The Ollama API is now available at 127.0.0.1:11434.\n",
            ">>> Install complete. Run \"ollama\" from the command line.\n",
            "2024-10-17 12:00:38 - Installing pre-requisites\n",
            "Get:1 https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/ InRelease [3,626 B]\n",
            "Get:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64  InRelease [1,581 B]\n",
            "Hit:3 http://archive.ubuntu.com/ubuntu jammy InRelease\n",
            "Get:4 http://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB]\n",
            "Get:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB]\n",
            "Get:6 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64  Packages [1,031 kB]\n",
            "Ign:7 https://r2u.stat.illinois.edu/ubuntu jammy InRelease\n",
            "Get:8 https://r2u.stat.illinois.edu/ubuntu jammy Release [5,713 B]\n",
            "Get:9 https://r2u.stat.illinois.edu/ubuntu jammy Release.gpg [793 B]\n",
            "Get:10 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB]\n",
            "Get:11 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [1,162 kB]\n",
            "Get:12 https://r2u.stat.illinois.edu/ubuntu jammy/main all Packages [8,396 kB]\n",
            "Get:13 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1,451 kB]\n",
            "Get:14 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [2,372 kB]\n",
            "Get:15 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease [18.1 kB]\n",
            "Get:16 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [3,200 kB]\n",
            "Get:17 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [3,278 kB]\n",
            "Hit:18 https://ppa.launchpadcontent.net/graphics-drivers/ppa/ubuntu jammy InRelease\n",
            "Get:19 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [2,648 kB]\n",
            "Hit:20 https://ppa.launchpadcontent.net/ubuntugis/ppa/ubuntu jammy InRelease\n",
            "Get:21 https://r2u.stat.illinois.edu/ubuntu jammy/main amd64 Packages [2,598 kB]\n",
            "Get:22 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [33.7 kB]\n",
            "Get:23 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy/main amd64 Packages [33.9 kB]\n",
            "Fetched 26.6 MB in 3s (9,194 kB/s)\n",
            "Reading package lists... Done\n",
            "W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does not seem to provide it (sources.list entry misspelt?)\n",
            "Reading package lists... Done\n",
            "Building dependency tree... Done\n",
            "Reading state information... Done\n",
            "ca-certificates is already the newest version (20240203~22.04.1).\n",
            "curl is already the newest version (7.81.0-1ubuntu1.18).\n",
            "gnupg is already the newest version (2.2.27-3ubuntu2.1).\n",
            "gnupg set to manually installed.\n",
            "The following NEW packages will be installed:\n",
            "  apt-transport-https\n",
            "0 upgraded, 1 newly installed, 0 to remove and 52 not upgraded.\n",
            "Need to get 1,510 B of archives.\n",
            "After this operation, 170 kB of additional disk space will be used.\n",
            "Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.13 [1,510 B]\n",
            "Fetched 1,510 B in 0s (11.0 kB/s)\n",
            "Selecting previously unselected package apt-transport-https.\n",
            "(Reading database ... 123629 files and directories currently installed.)\n",
            "Preparing to unpack .../apt-transport-https_2.4.13_all.deb ...\n",
            "Unpacking apt-transport-https (2.4.13) ...\n",
            "Setting up apt-transport-https (2.4.13) ...\n",
            "Hit:1 https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/ InRelease\n",
            "Hit:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64  InRelease\n",
            "Get:3 https://deb.nodesource.com/node_20.x nodistro InRelease [12.1 kB]\n",
            "Hit:4 http://archive.ubuntu.com/ubuntu jammy InRelease\n",
            "Hit:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease\n",
            "Ign:6 https://r2u.stat.illinois.edu/ubuntu jammy InRelease\n",
            "Hit:7 http://archive.ubuntu.com/ubuntu jammy-backports InRelease\n",
            "Hit:8 http://security.ubuntu.com/ubuntu jammy-security InRelease\n",
            "Get:9 https://deb.nodesource.com/node_20.x nodistro/main amd64 Packages [9,254 B]\n",
            "Hit:10 https://r2u.stat.illinois.edu/ubuntu jammy Release\n",
            "Hit:11 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease\n",
            "Hit:13 https://ppa.launchpadcontent.net/graphics-drivers/ppa/ubuntu jammy InRelease\n",
            "Hit:14 https://ppa.launchpadcontent.net/ubuntugis/ppa/ubuntu jammy InRelease\n",
            "Fetched 21.4 kB in 1s (20.7 kB/s)\n",
            "Reading package lists... Done\n",
            "W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does not seem to provide it (sources.list entry misspelt?)\n",
            "2024-10-17 12:00:51 - Repository configured successfully.\n",
            "2024-10-17 12:00:51 - To install Node.js, run: apt-get install nodejs -y\n",
            "2024-10-17 12:00:51 - You can use N|solid Runtime as a node.js alternative\n",
            "2024-10-17 12:00:51 - To install N|solid Runtime, run: apt-get install nsolid -y\n",
            "Reading package lists... Done\n",
            "Building dependency tree... Done\n",
            "Reading state information... Done\n",
            "The following NEW packages will be installed:\n",
            "  nodejs\n",
            "0 upgraded, 1 newly installed, 0 to remove and 52 not upgraded.\n",
            "Need to get 31.8 MB of archives.\n",
            "After this operation, 197 MB of additional disk space will be used.\n",
            "Get:1 https://deb.nodesource.com/node_20.x nodistro/main amd64 nodejs amd64 20.18.0-1nodesource1 [31.8 MB]\n",
            "Fetched 31.8 MB in 1s (49.2 MB/s)\n",
            "debconf: unable to initialize frontend: Dialog\n",
            "debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 78, <> line 1.)\n",
            "debconf: falling back to frontend: Readline\n",
            "debconf: unable to initialize frontend: Readline\n",
            "debconf: (This frontend requires a controlling tty.)\n",
            "debconf: falling back to frontend: Teletype\n",
            "dpkg-preconfigure: unable to re-open stdin: \n",
            "Selecting previously unselected package nodejs.\n",
            "(Reading database ... 123633 files and directories currently installed.)\n",
            "Preparing to unpack .../nodejs_20.18.0-1nodesource1_amd64.deb ...\n",
            "Unpacking nodejs (20.18.0-1nodesource1) ...\n",
            "Setting up nodejs (20.18.0-1nodesource1) ...\n",
            "Processing triggers for man-db (2.10.2-1) ...\n",
            "added 257 packages in 19s\n",
            "53 packages are looking for funding\n",
            "  run `npm fund` for details\n",
            "added 201 packages in 15s\n",
            "24 packages are looking for funding\n",
            "  run `npm fund` for details\n",
            "added 253 packages, and audited 455 packages in 25s\n",
            "\n",
            "37 packages are looking for funding\n",
            "  run `npm fund` for details\n",
            "\n",
            "found 0 vulnerabilities\n",
            "\n",
            "added 2 packages, and audited 457 packages in 2s\n",
            "\n",
            "37 packages are looking for funding\n",
            "  run `npm fund` for details\n",
            "\n",
            "found 0 vulnerabilities\n",
            "\n",
            "added 12 packages, and audited 469 packages in 3s\n",
            "\n",
            "38 packages are looking for funding\n",
            "  run `npm fund` for details\n",
            "\n",
            "found 0 vulnerabilities\n",
            "\n",
            "added 1 package, and audited 470 packages in 1s\n",
            "\n",
            "38 packages are looking for funding\n",
            "  run `npm fund` for details\n",
            "\n",
            "found 0 vulnerabilities\n",
            ""
          ]
        }
      ],
      "source": [
        "# Install Gradio\n",
        "!pip install -q gradio\n",
        "\n",
        "# Install Ollama\n",
        "!curl -fsSL https://ollama.ai/install.sh | sh\n",
        "\n",
        "# Install Node.js\n",
        "!curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -\n",
        "!sudo apt-get install -y nodejs\n",
        "\n",
        "# Install Genkit CLI and plugins\n",
        "!npm i -g genkit\n",
        "!npm i --save genkitx-ollama\n",
        "!npm i --save @genkit-ai/firebase\n",
        "!npm i --save @genkit-ai/googleai\n",
        "!npm i --save @genkit-ai/dotprompt\n",
        "!npm i llm-chunk"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5e0f4df2"
      },
      "source": [
        "## Run Gemma using Ollama\n",
        "\n",
        "You will use Ollama to run the Gemma language model locally. This lets you interact with the model and build it into your RAG chatbot application."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hA8ua09eXKzf"
      },
      "source": [
        "First, start the Ollama server. This will run in the background and allow you to call different AI models."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5E5J_0wZXKQ0"
      },
      "outputs": [],
      "source": [
        "import subprocess\n",
        "import time\n",
        "\n",
        "ollama_serve_process = subprocess.Popen(\"OLLAMA_KEEP_ALIVE=-1 ollama serve\", shell=True)\n",
        "time.sleep(5)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "01bfdd4c"
      },
      "source": [
        "Ollama provides a library of pre-configured models, including Gemma 2 models. You can browse the available Gemma 2 models at the [Ollama Gemma 2 Model Catalog](https://ollama.com/library/gemma2). This allows you to switch between different Gemma 2 models easily. In this notebook, you'll use the [gemma2:2b](https://ollama.com/library/gemma2:2b) model.\n",
        "\n",
        "To test if the Gemma model is running correctly, use the following command to ask the model a simple question:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bfad771e"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "The capital of China is **Beijing**. \n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "ollama_run_process = subprocess.Popen(\n",
        "  \"ollama run gemma2:2b 'What is the capital of China?'\",\n",
        "  shell=True, stdout=subprocess.PIPE, text=True\n",
        ")\n",
        "\n",
        "print(ollama_run_process.communicate()[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e0a71a35"
      },
      "source": [
        "You should see the model's response in the output."
      ]
    },
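    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "You can also call the Ollama server's REST API directly. The sketch below builds a request for the `/api/generate` endpoint, assuming the server is listening on the default `http://127.0.0.1:11434` (the actual call is commented out so the cell runs even before the server is up):\n",
        "\n",
        "```python\n",
        "import json\n",
        "import urllib.request\n",
        "\n",
        "# Build a request payload for Ollama's /api/generate endpoint.\n",
        "payload = {\n",
        "    'model': 'gemma2:2b',\n",
        "    'prompt': 'What is the capital of China?',\n",
        "    'stream': False,  # ask for a single JSON response instead of a stream\n",
        "}\n",
        "\n",
        "req = urllib.request.Request(\n",
        "    'http://127.0.0.1:11434/api/generate',\n",
        "    data=json.dumps(payload).encode('utf-8'),\n",
        "    headers={'Content-Type': 'application/json'},\n",
        ")\n",
        "\n",
        "# Uncomment once the server is running:\n",
        "# with urllib.request.urlopen(req) as resp:\n",
        "#     print(json.load(resp)['response'])\n",
        "```"
      ]
    },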
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "85c1bb7b"
      },
      "source": [
        "##  Set up the Firebase project\n",
        "\n",
        "Firebase will be used to store and manage your data. In this case, you'll use Cloud Firestore, a NoSQL database that makes it easy to store and retrieve the information that will be used by the chatbot.\n",
        "\n",
        "Before you can continue, you need to set up a Firebase project:\n",
        "\n",
        "1.  If you haven't already, create a Firebase project: In the [Firebase console](https://console.firebase.google.com/), click Add project, then follow the on-screen instructions to create a Firebase project or to add Firebase services to an existing GCP project.\n",
        "<img src=\"https://i.imgur.com/B8njkTG.png\" alt=\"Welcome to Firebase\" width=50%>\n",
        "\n",
        "2. Then, open your project and go to the **Project settings** page, create a service account and download the service account key file using **Generate new private key**. Keep this file safe, since it grants administrator access to your project.  \n",
        "<img src=\"https://i.imgur.com/J20U7lz.png\" alt=\"Project Overview\" width=50%>     \n",
        "<img src=\"https://i.imgur.com/46FOyMm.png\" alt=\"Service accounts\" width=50%>\n",
        "\n",
        "\n",
        "3. Upload the JSON service account key file and set its location in the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yuPcUoQRZtf3"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from google.colab import files\n",
        "\n",
        "uploaded = files.upload()\n",
        "\n",
        "for fn in uploaded.keys():\n",
        "  print('User uploaded file \"{name}\" with length {length} bytes'.format(\n",
        "      name=fn, length=len(uploaded[fn])))\n",
        "\n",
        "  with open('/content/' + fn, 'wb') as f:\n",
        "    f.write(uploaded[fn])\n",
        "\n",
        "  os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = '/content/' + fn"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NOGTSWz_a1kZ"
      },
      "source": [
        "This will allow you to authenticate to Firebase and use its services."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "etqnpcRgaexr"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "my-genkit-gemma-firebase-demo\n"
          ]
        }
      ],
      "source": [
        "# Get the project ID\n",
        "import json\n",
        "\n",
        "with open(os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"], 'r') as f:\n",
        "    data = json.load(f)\n",
        "    PROJECT_ID = data['project_id']\n",
        "    print(PROJECT_ID)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QCfndtotoec-"
      },
      "source": [
        "## Create a Cloud Firestore database\n",
        "\n",
        "Now that you've set up Firebase and your development environment, let's move on to creating the Firestore database.\n",
        "\n",
        "* Navigate to the **Cloud Firestore** section of the [Firebase console](https://console.firebase.google.com/project/_/firestore). You'll be prompted to select an existing Firebase project. Follow the database creation workflow.  \n",
        "<img src=\"https://i.imgur.com/WNmcCXa.png\" alt=\"Service accounts\" width=50% height=50%>\n",
        "\n",
        "* To simplify the demo, select a starting mode for your **Cloud Firestore Security Rules**. Pick **Test mode** to get started quickly.\n",
        "\n",
        "* Pick a location; this setting will be your project's default Google Cloud Platform (GCP) resource location.\n",
        "\n",
        "**Note**: Test mode allows open access to your database, which is insecure for production environments. Remember to update your security rules before deploying your application."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5bOgqUuAxTxQ"
      },
      "source": [
        "## Create a vector index for Firestore\n",
        "\n",
        "Before you can perform a nearest neighbor search with your vector embeddings, you must create a corresponding index."
      ]
    },
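    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Conceptually, a flat vector index performs an exhaustive nearest-neighbor scan: every stored vector is scored against the query vector, and the best matches are returned. Here is a minimal Python illustration using toy 3-dimensional vectors (the real Gecko embeddings used later have 768 dimensions):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    # Dot product divided by the product of the vector magnitudes.\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(x * x for x in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "\n",
        "# Toy document embeddings keyed by document ID.\n",
        "stored = {\n",
        "    'pixel-8-pro': [0.9, 0.1, 0.0],\n",
        "    'galaxy-watch5': [0.1, 0.9, 0.2],\n",
        "    'airpods-pro': [0.0, 0.2, 0.9],\n",
        "}\n",
        "\n",
        "query = [0.8, 0.2, 0.1]\n",
        "\n",
        "# A flat index is an exhaustive scan: score everything, keep the closest.\n",
        "ranked = sorted(stored, key=lambda k: cosine_similarity(stored[k], query), reverse=True)\n",
        "print(ranked[0])  # the closest stored document\n",
        "```"
      ]
    },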
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TsBVtzpMx5Lt"
      },
      "source": [
        "To do this, first authenticate with the Google Cloud SDK so that you can create the index with `gcloud`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jeIxgHH6eQO4"
      },
      "outputs": [],
      "source": [
        "!gcloud auth login --no-launch-browser"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JIH4LCxzpEAu"
      },
      "source": [
        "Follow the authentication flow in your browser, and copy the authentication code back into the Colab notebook when prompted."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KSP8HmW9rB-L"
      },
      "source": [
        "Firestore depends on indexes to provide fast and efficient querying on collections.\n",
        "\n",
        "**Note**: \"index\" here refers to database indexes, and not Genkit's indexer and retriever abstractions.\n",
        "\n",
        "This tutorial requires the `embedding` field to be indexed in order to work.\n",
        "\n",
        "Run the following `gcloud` command as described in the [Firestore docs](https://firebase.google.com/docs/firestore/vector-search?authuser=0#create_and_manage_vector_indexes) to create a single-field vector index.\n",
        "\n",
        "* `collection-group` is the ID of the collection group.\n",
        "* `vector-field` is the name of the field that contains the vector embedding.\n",
        "* `field-config` includes the vector configuration (vector dimension and index type). The dimension is an integer up to 2048. The index type must be `flat`. You also specify the `field-path` here, which is `embedding`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zJRRcwttoS2k"
      },
      "outputs": [],
      "source": [
        "%%bash -s \"$PROJECT_ID\"\n",
        "\n",
        "# Set the current project ID\n",
        "gcloud config set project $1\n",
        "\n",
        "# Create a vector index\n",
        "gcloud alpha firestore indexes composite create \\\n",
        "  --project=$1 \\\n",
        "  --collection-group=merch \\\n",
        "  --query-scope=COLLECTION \\\n",
        "  --field-config=vector-config='{\"dimension\":\"768\",\"flat\": \"{}\"}',field-path=embedding"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-SIj1nF6sqj6"
      },
      "source": [
        "## Retrieval-Augmented Generation (RAG)\n",
        "\n",
        "**Firebase Genkit** provides abstractions that help you build retrieval-augmented generation (RAG) flows, as well as plugins that provide integrations with related tools.\n",
        "\n",
        "What is RAG?\n",
        "Retrieval-augmented generation is a technique used to incorporate external sources of information into an LLM’s responses. It's important to be able to do so because, while LLMs are typically trained on a broad body of material, practical use of LLMs often requires specific domain knowledge (for example, you might want to use an LLM to answer customers' questions about your company’s products).\n",
        "\n",
        "The core Genkit framework offers the abstractions to help you do RAG:\n",
        "\n",
        "* **Indexers**: add documents to an `\"index\"`.\n",
        "* **Embedders**: transform documents into a vector representation.\n",
        "* **Retrievers**: retrieve documents from an `\"index\"`, given a query.\n",
        "\n",
        "These definitions are intentionally broad because Genkit is unopinionated about what an `\"index\"` is or how exactly documents are retrieved from it. Genkit only provides a `Document` format; everything else is defined by the retriever or indexer implementation provider.\n",
        "\n",
        "\n",
        "You'll soon learn how it's possible to ingest a collection of product descriptions into a vector database and retrieve them for use in a flow that determines what items are available. You can even ask general questions about your custom data and Gemma should be able to make sense out of the relevant context that's retrieved from a user query."
      ]
    },
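    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make these three roles concrete, here is a toy sketch in plain Python (not Genkit's API; the character-count \"embedder\" is a deliberately crude stand-in for a real embedding model):\n",
        "\n",
        "```python\n",
        "# A fake embedder: maps text to a tiny vector of character counts.\n",
        "# A real embedder (such as a Gecko model) returns a semantic vector.\n",
        "def embed(text):\n",
        "    return [\n",
        "        sum(c.isalpha() for c in text),\n",
        "        sum(c.isdigit() for c in text),\n",
        "        len(text),\n",
        "    ]\n",
        "\n",
        "index = []  # the 'index': a list of (vector, document) pairs\n",
        "\n",
        "def indexer(doc):\n",
        "    # Indexer: embed the document and add it to the index.\n",
        "    index.append((embed(doc), doc))\n",
        "\n",
        "def retriever(query, k=1):\n",
        "    # Retriever: embed the query and return the k closest documents\n",
        "    # (here by squared Euclidean distance, for simplicity).\n",
        "    qv = embed(query)\n",
        "    def dist(item):\n",
        "        v, _ = item\n",
        "        return sum((a - b) ** 2 for a, b in zip(v, qv))\n",
        "    return [doc for _, doc in sorted(index, key=dist)[:k]]\n",
        "\n",
        "indexer('Pixel 8 Pro, 6.7-inch display')\n",
        "indexer('AirPods Pro 2')\n",
        "print(retriever('AirPods Pro 2nd gen'))\n",
        "```"
      ]
    },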
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7c59aeab"
      },
      "source": [
        "###  Genkit Project Setup\n",
        "\n",
        "Create a new project directory and initialize an NPM project."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4a3c2f56"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/content/genkit-gemma-sample\n",
            "Wrote to /content/genkit-gemma-sample/package.json:\n",
            "\n",
            "{\n",
            "  \"name\": \"genkit-gemma-sample\",\n",
            "  \"version\": \"1.0.0\",\n",
            "  \"main\": \"index.js\",\n",
            "  \"scripts\": {\n",
            "    \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\n",
            "  },\n",
            "  \"keywords\": [],\n",
            "  \"author\": \"\",\n",
            "  \"license\": \"ISC\",\n",
            "  \"description\": \"\"\n",
            "}\n",
            "\n",
            "\n",
            "\n",
            ""
          ]
        }
      ],
      "source": [
        "# Create project directory\n",
        "!mkdir genkit-gemma-sample\n",
        "%cd genkit-gemma-sample\n",
        "\n",
        "# Initialize NPM project\n",
        "!npm init -y"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e6cc9098"
      },
      "source": [
        "Initialize a Genkit project and create a sample Ollama project that uses the older Gemma model. You'll update this to use the latest model later.  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d8e38328"
      },
      "outputs": [],
      "source": [
        "!genkit init --model ollama --non-interactive"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0a4a0b1a"
      },
      "source": [
        "### Prepare Data for RAG\n",
        "\n",
        "Create a file named `products.txt` that contains descriptions of the products you want the chatbot to know about."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6aeff45c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Writing products.txt\n"
          ]
        }
      ],
      "source": [
        "%%writefile products.txt\n",
        "\n",
        "**Google Pixel 8 Pro - Obsidian - Google Store**\n",
        "\n",
        "The Google Pixel 8 Pro is Google's latest flagship smartphone, featuring a 6.7-inch LTPO OLED display with a 120Hz refresh rate for smooth and vibrant visuals.\n",
        "Powered by the Google Tensor G3 chip, it offers exceptional performance, advanced AI capabilities, and enhanced security with the Titan M2 security chip.\n",
        "The Pixel 8 Pro boasts a versatile triple-camera system, including a 50 MP main sensor, a 48 MP telephoto lens, and a 48 MP ultra-wide lens, enabling you to capture high-quality photos and videos in various lighting conditions.\n",
        "Innovative camera features like Magic Eraser, Night Sight, and Super Res Zoom enhance your photography experience. The device supports 5G connectivity, has an all-day battery life with fast charging and wireless charging capabilities, and runs on the latest Android OS with guaranteed software updates.\n",
        "\n",
        "* **Price:** Starting at $999\n",
        "* **Reviews:** 4.8 out of 5 stars based on customer reviews on the Google Store\n",
        "\n",
        "---\n",
        "\n",
        "**Samsung Galaxy Watch5 Pro - 45mm Bluetooth Smartwatch - Black Titanium - Samsung**\n",
        "\n",
        "The Samsung Galaxy Watch5 Pro is a premium smartwatch designed for outdoor enthusiasts and fitness aficionados.\n",
        "Featuring a durable Titanium case and Sapphire Crystal Glass, it's built to withstand tough conditions.\n",
        "The watch includes advanced health monitoring features like ECG, blood pressure measurement, and body composition analysis. It offers GPS route tracking, turn-by-turn navigation, and has a battery life of up to 80 hours.\n",
        "The Galaxy Watch5 Pro runs on Wear OS Powered by Samsung, providing access to a wide range of apps.\n",
        "\n",
        "* **Price:** $449.99\n",
        "* **Reviews:** 4.5 out of 5 stars based on customer reviews on the Samsung website\n",
        "\n",
        "---\n",
        "\n",
        "**Dell XPS 13 Laptop - 13.4-inch FHD+ Display - Platinum Silver - Dell**\n",
        "\n",
        "The Dell XPS 13 is a compact and powerful laptop featuring a 13.4-inch InfinityEdge FHD+ display.\n",
        "Powered by up to the 11th Gen Intel Core processors, it delivers excellent performance for multitasking and creative work.\n",
        "The laptop boasts a sleek design with a machined aluminum chassis and a carbon fiber palm rest.\n",
        "It includes up to 16 GB of RAM and up to 1 TB of SSD storage. With a long battery life and Wi-Fi 6 connectivity, the XPS 13 is ideal for professionals on the go.\n",
        "\n",
        "* **Price:** Starting at $999.99\n",
        "* **Reviews:** 4.6 out of 5 stars based on customer reviews on Dell's website\n",
        "\n",
        "---\n",
        "\n",
        "**Bose QuietComfort 45 Wireless Noise-Cancelling Headphones - Black - Bose**\n",
        "\n",
        "The Bose QuietComfort 45 headphones offer world-class noise cancellation with two modes: Quiet and Aware.\n",
        "They deliver high-fidelity audio with a balanced performance at any volume. The headphones are lightweight and feature synthetic leather ear cushions for all-day comfort.\n",
        "With up to 24 hours of battery life on a single charge, they are perfect for long flights or extended listening sessions. The headphones support Bluetooth 5.1 for a strong and reliable wireless connection.\n",
        "\n",
        "* **Price:** $329\n",
        "* **Reviews:** 4.8 out of 5 stars based on customer reviews on Bose's website\n",
        "\n",
        "---\n",
        "\n",
        "**Canon EOS R6 Mirrorless Camera Body - Canon Online Store**\n",
        "\n",
        "The Canon EOS R6 is a full-frame mirrorless camera designed for both enthusiasts and professionals.\n",
        "It features a 20.1 MP CMOS sensor and the DIGIC X image processor, providing excellent image quality and high-speed performance.\n",
        "The camera offers up to 12 fps mechanical shutter and 20 fps electronic (silent) shutter, making it ideal for action photography.\n",
        "It includes 4K UHD video recording, in-body image stabilization, and Dual Pixel CMOS AF II for fast and accurate autofocus.\n",
        "The EOS R6 has built-in Wi-Fi and Bluetooth for easy sharing and remote control.\n",
        "\n",
        "* **Price:** $2,499\n",
        "* **Reviews:** 4.7 out of 5 stars based on customer reviews on the Canon Online Store\n",
        "\n",
        "---\n",
        "\n",
        "**Apple AirPods Pro (2nd Generation) - White - Apple Store**\n",
        "\n",
        "The Apple AirPods Pro (2nd Generation) offer superior sound quality with Active Noise Cancellation and Adaptive Transparency.\n",
        "Equipped with the H2 chip, they deliver high-fidelity audio with personalized Spatial Audio features.\n",
        "The earbuds come with four sizes of silicone ear tips for a customizable fit and include touch controls for media playback and volume adjustment.\n",
        "With improved battery life, you get up to 6 hours of listening time on a single charge and up to 30 hours with the MagSafe Charging Case.\n",
        "\n",
        "* **Price:** $249\n",
        "* **Reviews:** 4.7 out of 5 stars based on customer reviews on the Apple Store\n",
        "\n",
        "---\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "775e4609"
      },
      "source": [
        "### Create the Prompt File with **Dotprompt**\n",
        "\n",
        "Firebase Genkit provides the Dotprompt plugin and text format to help you write and organize your generative AI prompts.\n",
        "\n",
        "Dotprompt helps you organize and manage the prompts used by the language model. Create a `.prompt` file to define how the model should interact with the data and users. This makes it easier to maintain and version your prompts, similar to how you manage code.\n",
        "\n",
        "\n",
        "Create `assistant.prompt` in the `prompts` directory."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "98Ztvuuc_fLh"
      },
      "outputs": [],
      "source": [
        "# Create a `prompts` directory to store your Dotprompts\n",
        "!mkdir -p prompts"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2ee08e36"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Writing prompts/assistant.prompt\n"
          ]
        }
      ],
      "source": [
        "%%writefile prompts/assistant.prompt\n",
        "---\n",
        "model: ollama/gemma2:2b\n",
        "config:\n",
        "  temperature: 0.8\n",
        "input:\n",
        "  schema:\n",
        "    data(array): string\n",
        "    question: string\n",
        "output:\n",
        "  format: text\n",
        "---\n",
        "You are acting as a helpful AI assistant that can answer questions using the data that's available.\n",
        "\n",
        "Use only the context provided to answer the question.\n",
        "If you don't know, do not make up an answer.\n",
        "\n",
        "Context:\n",
        "{{#each data~}}\n",
        "- {{this}}\n",
        "{{~/each}}\n",
        "\n",
        "Question:\n",
        "{{question}}"
      ]
    },
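    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see roughly what this template renders to, here is a simple Python emulation of the `{{#each data}}` loop and the `{{question}}` substitution (Dotprompt itself uses Handlebars; this only mimics the context and question portion of the prompt):\n",
        "\n",
        "```python\n",
        "def render_prompt(data, question):\n",
        "    # Mimic the {{#each data}} loop: one bullet line per context chunk.\n",
        "    lines = ['Context:']\n",
        "    for chunk in data:\n",
        "        lines.append('- ' + chunk)\n",
        "    lines.append('')\n",
        "    lines.append('Question:')\n",
        "    lines.append(question)\n",
        "    return '\\n'.join(lines)\n",
        "\n",
        "print(render_prompt(['Pixel 8 Pro: $999'], 'How much is the Pixel 8 Pro?'))\n",
        "```"
      ]
    },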
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "429e0b4d"
      },
      "source": [
        "### Chunking, Embedding and Indexing\n",
        "\n",
        "You will use Genkit to embed and index these product descriptions into Firestore so that the chatbot can retrieve them when answering questions. This involves the following steps:\n",
        "\n",
        "* **Chunking**: Use `llm-chunk` to break the product descriptions into smaller, manageable chunks. Chunking the data helps ensure that the content is a suitable size for embedding, making it more effective when working with vector representations. The `llm-chunk` library provides a simple way to split the text into segments that can be vectorized.\n",
        "\n",
        "* **Embedding**: An embedder is a function that takes content (text, images, audio, etc.) and creates a numeric vector that encodes the semantic meaning of the original content. To populate your Firestore collection, use the Gecko embeddings from Google AI along with the Firebase Admin SDK.\n",
        "\n",
        "* **Indexing**: Once the embeddings are created, index them into Firestore so that they can be used later for similarity searches. Store both the text and its embedding in Firestore.\n",
        "\n",
        "Create `embedFlow.ts` in the `src` directory."
      ]
    },
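    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The delimiter-based part of the chunking step can be sketched in a few lines of Python (a simplified stand-in for `llm-chunk`, which additionally applies sentence splitting and the configured length limits):\n",
        "\n",
        "```python\n",
        "def chunk_products(text, delimiter='---'):\n",
        "    # Split the product file on its '---' separators and drop empty\n",
        "    # pieces, so each product description becomes one chunk to embed.\n",
        "    parts = [part.strip() for part in text.split(delimiter)]\n",
        "    return [p for p in parts if p]\n",
        "\n",
        "sample = 'Product A details\\n---\\nProduct B details\\n---\\n'\n",
        "print(chunk_products(sample))\n",
        "```"
      ]
    },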
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "b223519a"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Writing src/embedFlow.ts\n"
          ]
        }
      ],
      "source": [
        "%%writefile src/embedFlow.ts\n",
        "\n",
        "import { configureGenkit } from \"@genkit-ai/core\";\n",
        "import { embed } from \"@genkit-ai/ai/embedder\";\n",
        "import { defineFlow, run } from \"@genkit-ai/flow\";\n",
        "import { textEmbeddingGecko001, googleAI } from \"@genkit-ai/googleai\";\n",
        "import { FieldValue, getFirestore } from \"firebase-admin/firestore\";\n",
        "import { chunk } from \"llm-chunk\";\n",
        "import * as z from \"zod\";\n",
        "import { readFile } from \"fs/promises\";\n",
        "import path from \"path\";\n",
        "\n",
        "// Configuration for indexing process\n",
        "const indexConfig = {\n",
        "  collection: \"merch\",  // Firestore collection to store the data\n",
        "  contentField: \"text\", // Field name for the text content\n",
        "  vectorField: \"embedding\", // Field name for the embedding vector\n",
        "  embedder: textEmbeddingGecko001, // Embedder model to use\n",
        "};\n",
        "\n",
        "// Configure Genkit with Google AI plugin\n",
        "// Firebase Genkit has a configuration and plugin system.\n",
        "// Every Genkit app starts with configuration where you specify the plugins\n",
        "// you want to use and configure various subsystems.\n",
        "configureGenkit({\n",
        "  plugins: [googleAI({ apiVersion: ['v1', 'v1beta'] })],\n",
        "  enableTracingAndMetrics: false,\n",
        "});\n",
        "\n",
        "// Initialize Firestore instance\n",
        "const firestore = getFirestore();\n",
        "\n",
        "// Create chunking config\n",
        "// This example uses the llm-chunk library which provides a simple text\n",
        "// splitter to break up documents into segments that can be vectorized.\n",
        "const chunkingConfig = {\n",
        "  minLength: 1000,\n",
        "  maxLength: 2000,\n",
        "  splitter: 'sentence',\n",
        "  overlap: 100,\n",
        "  //  Split text into chunks using '---' as delimiter\n",
        "  delimiters: '---',\n",
        "} as any;\n",
        "\n",
        "// Define embed flow\n",
        "export const embedFlow = defineFlow(\n",
        "  {\n",
        "    name: \"embedFlow\", // Name of the flow\n",
        "    inputSchema: z.void(), // No input is expected\n",
        "    outputSchema: z.void(), // No output is returned\n",
        "  },\n",
        "  async () => {\n",
        "    // Read text data from file\n",
        "    const filePath = path.resolve('products.txt');\n",
        "    const textData = await run(\"extract-text\", () => extractText(filePath));\n",
        "\n",
        "    // Divide the text into segments.\n",
        "    const chunks = await run('chunk-it', async () =>\n",
        "      chunk(textData, chunkingConfig)\n",
        "    );\n",
        "\n",
        "    // Index chunks into Firestore.\n",
        "    await run(\"index-chunks\", async () => indexToFirestore(chunks));\n",
        "  }\n",
        ");\n",
        "\n",
        "// Function to index chunks into Firestore\n",
        "async function indexToFirestore(data: string[]) {\n",
        "  for (const text of data) {\n",
        "    // Generate embedding for the text chunk\n",
        "    const embedding = await embed({\n",
        "      embedder: indexConfig.embedder,\n",
        "      content: text,\n",
        "    });\n",
        "\n",
        "    // Add the text and embedding to Firestore\n",
        "    await firestore.collection(indexConfig.collection).add({\n",
        "      [indexConfig.vectorField]: FieldValue.vector(embedding),\n",
        "      [indexConfig.contentField]: text,\n",
        "    });\n",
        "  }\n",
        "}\n",
        "\n",
        "// Function to read text content from a file\n",
        "async function extractText(filePath: string) {\n",
        "  const f = path.resolve(filePath);\n",
        "  return await readFile(f, 'utf-8');\n",
        "}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rwO8RRmDdJmw"
      },
      "source": [
        "By storing both the text and its embedding, you can later perform similarity searches to find relevant product descriptions based on user queries. This makes it possible for the chatbot to retrieve and provide accurate, contextually relevant answers."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "80ab1d4e"
      },
      "source": [
        "### Configuration and plugins\n",
        "\n",
        "Firebase Genkit has a configuration and plugin system. Every Genkit app starts with configuration where you specify the plugins you want to use and configure various subsystems.\n",
        "\n",
        "Create `config.ts` in the `src` directory."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "f83f1c3b"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Writing src/config.ts\n"
          ]
        }
      ],
      "source": [
        "%%writefile src/config.ts\n",
        "\n",
        "import { configureGenkit } from '@genkit-ai/core';\n",
        "import { firebase } from '@genkit-ai/firebase';\n",
        "import { googleAI } from '@genkit-ai/googleai';\n",
        "import { ollama } from 'genkitx-ollama';\n",
        "import { dotprompt } from '@genkit-ai/dotprompt';\n",
        "import { initializeApp, applicationDefault } from 'firebase-admin/app';\n",
        "import { getFirestore } from 'firebase-admin/firestore';\n",
        "\n",
        "// Initialize Firebase Admin SDK\n",
        "const app = initializeApp({\n",
        "  credential: applicationDefault(),\n",
        "});\n",
        "\n",
        "export const firestore = getFirestore(app);\n",
        "\n",
        "// Configure Genkit\n",
        "configureGenkit({\n",
        "  plugins: [\n",
        "    firebase(),\n",
        "    googleAI({ apiVersion: ['v1', 'v1beta'] }),\n",
        "    ollama({\n",
        "      // Ollama provides an interface to many generative models. Here,\n",
        "      // you specify Google's Gemma 2 model. The models you specify must already\n",
        "      // be downloaded and available to the Ollama server.\n",
        "      models: [{ name: 'gemma2:2b' }],\n",
        "      // The address of your Ollama API server. This is often a different host\n",
        "      // from your app backend (which runs Genkit), in order to run Ollama on\n",
        "      // a GPU-accelerated machine.\n",
        "      serverAddress: 'http://127.0.0.1:11434',\n",
        "    }),\n",
        "    dotprompt(),\n",
        "  ],\n",
        "  // Log debug output to the console.\n",
        "  logLevel: 'debug',\n",
        "  // Perform OpenTelemetry instrumentation and enable trace collection.\n",
        "  enableTracingAndMetrics: true,\n",
        "});"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "82f727e4"
      },
      "source": [
        "### Defining a RAG Flow\n",
        "\n",
        "Next, create a flow named `chatbotFlow` that lets the chatbot interact with the data you indexed earlier. This flow combines a retriever (to fetch relevant information from Firestore) with a prompt that formats responses. A retriever is a concept that encapsulates logic related to any kind of document retrieval. The most common retrieval cases involve vector stores; however, in Genkit, a retriever can be any function that returns data.\n",
        "\n",
        "In this case, the retriever finds the most relevant product descriptions in Firestore based on the user's question. It uses **embeddings** and **cosine similarity** to find the closest matches, ensuring that the retrieved information is highly relevant to the query.\n",
        "\n",
        "Create `memory.ts` and `chatbotFlow.ts` in the `src` directory."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "RPbtRN6lOTk6"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Writing src/memory.ts\n"
          ]
        }
      ],
      "source": [
        "%%writefile src/memory.ts\n",
        "\n",
        "import { MessageData } from '@genkit-ai/ai/model';\n",
        "\n",
        "const chatHistory: Record<string, MessageData[]> = {};\n",
        "\n",
        "export interface HistoryStore {\n",
        "  load(id: string): Promise<MessageData[] | undefined>;\n",
        "  save(id: string, history: MessageData[]): Promise<void>;\n",
        "}\n",
        "\n",
        "// You'll also use an in-memory store to store the chat history.\n",
        "export function inMemoryStore(): HistoryStore {\n",
        "  return {\n",
        "    async load(id: string): Promise<MessageData[] | undefined> {\n",
        "      return chatHistory[id];\n",
        "    },\n",
        "    async save(id: string, history: MessageData[]) {\n",
        "      chatHistory[id] = history;\n",
        "    },\n",
        "  };\n",
        "}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cf6b3579"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Writing src/chatbotFlow.ts\n"
          ]
        }
      ],
      "source": [
        "%%writefile src/chatbotFlow.ts\n",
        "\n",
        "import { defineFlow, run } from '@genkit-ai/flow';\n",
        "import { defineFirestoreRetriever } from '@genkit-ai/firebase';\n",
        "import { retrieve } from '@genkit-ai/ai/retriever';\n",
        "import { textEmbeddingGecko001 } from '@genkit-ai/googleai';\n",
        "import { z } from 'zod';\n",
        "\n",
        "import { firestore } from './config';\n",
        "import { inMemoryStore } from './memory.js';\n",
        "\n",
        "import { promptRef } from '@genkit-ai/dotprompt';\n",
        "\n",
        "// Define Firestore retriever\n",
        "const retrieverRef = defineFirestoreRetriever({\n",
        "  name: \"merchRetriever\",\n",
        "  firestore,\n",
        "  collection: \"merch\",  // Collection containing merchandise data\n",
        "  contentField: \"text\",  // Field for product descriptions\n",
        "  vectorField: \"embedding\", // Field for embeddings\n",
        "  embedder: textEmbeddingGecko001, // Embedding model\n",
        "  distanceMeasure: \"COSINE\", // Similarity metric\n",
        "});\n",
        "\n",
        "// Define the prompt reference\n",
        "const assistantPrompt = promptRef('assistant');\n",
        "\n",
        "// To store the chat history\n",
        "const historyStore = inMemoryStore();\n",
        "\n",
        "// Define chatbot flow\n",
        "export const chatbotFlow = defineFlow(\n",
        "  {\n",
        "    name: \"chatbotFlow\",\n",
        "    inputSchema: z.string(),\n",
        "    outputSchema: z.string(),\n",
        "  },\n",
        "  async (question) => {\n",
        "    const conversationId = '0';\n",
        "\n",
        "    // Retrieve conversation history.\n",
        "    const history = await run(\n",
        "      'retrieve-history',\n",
        "      conversationId,\n",
        "      async () => {\n",
        "        return (await historyStore?.load(conversationId)) || [];\n",
        "      }\n",
        "    );\n",
        "\n",
        "    // Retrieve relevant documents\n",
        "    const docs = await retrieve({\n",
        "      retriever: retrieverRef,\n",
        "      query: question,\n",
        "      options: { limit: 5 },\n",
        "    });\n",
        "\n",
        "    // Run the prompt\n",
        "    const mainResp = await assistantPrompt.generate({\n",
        "      history: history,\n",
        "      input: {\n",
        "        data: docs.map((doc) => doc.content[0].text || \"\"),\n",
        "        question: question,\n",
        "      },\n",
        "    });\n",
        "\n",
        "    // Save history.\n",
        "    await run(\n",
        "      'save-history',\n",
        "      {\n",
        "        conversationId: conversationId,\n",
        "        history: mainResp.toHistory(),\n",
        "      },\n",
        "      async () => {\n",
        "        await historyStore?.save(conversationId, mainResp.toHistory());\n",
        "      }\n",
        "    );\n",
        "\n",
        "    // Handle the response from the model API. In this sample, we just convert\n",
        "    // it to a string, but more complicated flows might coerce the response into\n",
        "    // structured output or chain the response into another LLM call, etc.\n",
        "    return mainResp.text();\n",
        "  }\n",
        ");"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gpdXL-fWeLK5"
      },
      "source": [
        "The retriever works with the LLM (Gemma 2 via Ollama) to create a Retrieval-Augmented Generation (RAG) flow. The retriever fetches relevant documents, which are then used by the language model to generate accurate responses, combining general knowledge with specific, relevant information from your custom data.\n",
        "\n",
        "The chatbotFlow consists of several key steps:\n",
        "\n",
        "* **Firestore Retriever**: The `retrieverRef` specifies how to fetch data from Firestore, using fields like `contentField` (product descriptions) and `vectorField` (embeddings) to locate relevant information.\n",
        "\n",
        "* **Prompt Reference**: The `assistantPrompt` references the prompt you created earlier using Dotprompt, determining how the assistant should format responses.\n",
        "\n",
        "* **Retrieve and Generate Response**: The chatbot flow retrieves relevant documents and uses them as context to generate a response. It utilizes historical context to provide a coherent and contextually relevant answer."
      ]
    },
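    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ragToySketchMd"
      },
      "source": [
        "The retrieve-then-generate loop described above can be sketched in a few lines of plain Python. This toy version loosely mirrors what `chatbotFlow` does with Firestore and Gemma: the `retrieve` helper, the three-dimensional \"embeddings\", and the corpus entries below are all made up for illustration (real embedding models emit hundreds of dimensions)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ragToySketchPy"
      },
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "def cosine(a, b):\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))\n",
        "\n",
        "# Toy corpus: (text, made-up embedding) pairs standing in for Firestore documents.\n",
        "corpus = [\n",
        "    (\"Pixel 8 Pro starts at $999.\", [0.9, 0.1, 0.0]),\n",
        "    (\"Galaxy Watch5 Pro has a 590 mAh battery.\", [0.1, 0.9, 0.0]),\n",
        "]\n",
        "\n",
        "def retrieve(query_embedding, k=1):\n",
        "    # Rank documents by cosine similarity to the query and keep the top k.\n",
        "    ranked = sorted(corpus, key=lambda d: cosine(query_embedding, d[1]), reverse=True)\n",
        "    return [text for text, _ in ranked[:k]]\n",
        "\n",
        "# A query whose (made-up) embedding points toward the first document.\n",
        "context = retrieve([1.0, 0.0, 0.0])\n",
        "print(f\"Answer using this context: {context}\\nQuestion: How much is the Pixel 8 Pro?\")"
      ]
    },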
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a7e30620"
      },
      "source": [
        "Finally, wrap up the Genkit app by serving `chatbotFlow` and `embedFlow` from `src/index.ts`. This script starts a flow server that exposes your flows as HTTP endpoints, allowing you to interact with the flows you have defined:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "00e69014"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Overwriting src/index.ts\n"
          ]
        }
      ],
      "source": [
        "%%writefile src/index.ts\n",
        "\n",
        "import { startFlowsServer } from '@genkit-ai/flow';\n",
        "import { chatbotFlow } from './chatbotFlow';\n",
        "import { embedFlow } from './embedFlow';\n",
        "\n",
        "// Start a flow server, which exposes your flows as HTTP endpoints. This call\n",
        "// must come last, after all of your plug-in configuration and flow definitions.\n",
        "// You can optionally specify a subset of flows to serve, and configure some\n",
        "// HTTP server options, but by default, the flow server serves all defined flows.\n",
        "startFlowsServer({\n",
        "  flows: [chatbotFlow, embedFlow],\n",
        "});"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZsXPjWZR21KK"
      },
      "source": [
        "### Start the Genkit server\n",
        "\n",
        "The next cell starts the server and automatically sends an `Enter` keypress (`\\n`) to accept the following terms.\n",
        "\n",
        "\n",
        "> The Genkit CLI and Developer UI use cookies and similar technologies from Google to deliver and enhance the quality of its services and to analyze usage. Learn more at https://policies.google.com/technologies/cookies"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qdT8HV46UPAD"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import subprocess\n",
        "import time\n",
        "from google.colab import userdata\n",
        "\n",
        "os.environ['GOOGLE_API_KEY'] = userdata.get('GOOGLE_API_KEY')\n",
        "\n",
        "command = [\n",
        "    \"genkit\", \"start\", \"-o\", \"--port\", \"8081\"\n",
        "]\n",
        "\n",
        "# Create a file to write logs\n",
        "with open(\"genkit.log\", \"w\") as logfile:\n",
        "  # Use subprocess.Popen to run the command with nohup-like behavior\n",
        "  genkit_process = subprocess.Popen(\n",
        "    command,\n",
        "    stdout=logfile,\n",
        "    stderr=subprocess.STDOUT,\n",
        "    stdin=subprocess.PIPE,\n",
        "    start_new_session=True  # This is similar to nohup behavior, detaches from terminal\n",
        "  )\n",
        "  # Send an Enter key (\\n) to the process to accept the terms\n",
        "  genkit_process.stdin.write(b'\\n')\n",
        "  genkit_process.stdin.flush()\n",
        "\n",
        "# Give the server a minute to start up\n",
        "time.sleep(60)"
      ]
    },
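    {
      "cell_type": "markdown",
      "metadata": {
        "id": "waitForMarkerMd"
      },
      "source": [
        "A fixed 60-second sleep is a blunt way to wait for startup. A more robust alternative is to poll `genkit.log` for a readiness message. The exact log line varies by Genkit CLI version, so the marker string below is an assumption you may need to adjust."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "waitForMarkerPy"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "def wait_for_marker(path: str, marker: str, timeout: float = 120.0, interval: float = 2.0) -> bool:\n",
        "    \"\"\"Poll a log file until `marker` appears in it, or until the timeout elapses.\"\"\"\n",
        "    deadline = time.monotonic() + timeout\n",
        "    while time.monotonic() < deadline:\n",
        "        try:\n",
        "            with open(path) as f:\n",
        "                if marker in f.read():\n",
        "                    return True\n",
        "        except FileNotFoundError:\n",
        "            pass  # The log file may not exist yet; keep polling.\n",
        "        time.sleep(interval)\n",
        "    return False\n",
        "\n",
        "# The readiness line varies by Genkit CLI version; adjust the marker as needed.\n",
        "# wait_for_marker(\"genkit.log\", \"Genkit\")"
      ]
    },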
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fPmbRUhZ7IAZ"
      },
      "source": [
        "## Expose the Genkit Tools Web API\n",
        "\n",
        "Use Colab's proxy to expose the server's Tools API endpoint. This lets you open the web interface if you need to debug any issues."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xhteLvOh7MyO"
      },
      "outputs": [],
      "source": [
        "# Uncomment the following code to access the web interface\n",
        "\n",
        "# from google.colab.output import eval_js\n",
        "# proxy_url = eval_js(\"google.colab.kernel.proxyPort(8081)\")\n",
        "\n",
        "# print(f\"The Genkit Tools UI is accessible at: {proxy_url}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a68ccaa9"
      },
      "source": [
        "## Use `embedFlow` to Index Documents\n",
        "\n",
        "Now, run `embedFlow` using an HTTP POST `curl` request to index the documents inside the `.txt` file."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ee1c9bf1"
      },
      "outputs": [],
      "source": [
        "!curl -X POST \"http://127.0.0.1:3400/embedFlow\" \\\n",
        "  -H \"Content-Type: application/json\" \\\n",
        "  -d '{}'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a74e7493"
      },
      "source": [
        "You should see a message indicating that the documents have been indexed successfully."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0S2F5WQg7ZX4"
      },
      "source": [
        "## Use `chatbotFlow` to try out RAG\n",
        "\n",
        "Finally, you can query the RAG chatbot to ask some simple questions about your data."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "AU3nUgtq7Zra"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "{\"result\":\"The price of the Pixel 8 Pro starts at $999. \\n\"}"
          ]
        }
      ],
      "source": [
        "!curl -X POST \"http://127.0.0.1:3400/chatbotFlow\" \\\n",
        "  -H \"Content-Type: application/json\" \\\n",
        "  -d '{\"data\": \"What is the price of the Pixel 8 Pro?\"}'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ac1827e8"
      },
      "source": [
        "## (Optional) Chat using the Gradio Chatbot Interface\n",
        "\n",
        "Create a simple web interface using **Gradio**.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7e137fc4"
      },
      "outputs": [],
      "source": [
        "import gradio as gr\n",
        "import requests\n",
        "\n",
        "\n",
        "def chat(question, history):\n",
        "    try:\n",
        "        response = requests.post(\n",
        "            \"http://127.0.0.1:3400/chatbotFlow\",\n",
        "            headers={\"Content-Type\": \"application/json\"},\n",
        "            json={\"data\": question}\n",
        "        )\n",
        "\n",
        "        # Check for HTTP request errors\n",
        "        response.raise_for_status()\n",
        "\n",
        "        json_response = response.json()\n",
        "\n",
        "        if 'result' in json_response:\n",
        "            return json_response['result']\n",
        "        return \"Sorry, the server returned an unexpected response.\"\n",
        "    except Exception as e:\n",
        "        print(f\"An unexpected error occurred: {e}\")\n",
        "        return \"Sorry, an unexpected error occurred.\"\n",
        "\n",
        "\n",
        "gr.ChatInterface(\n",
        "  chat,\n",
        "  chatbot=gr.Chatbot(\n",
        "    show_copy_button=True,\n",
        "    elem_id=\"chatbot\",\n",
        "    render=False,\n",
        "    render_markdown=True,\n",
        "    height=300\n",
        "  ),\n",
        "  textbox=gr.Textbox(placeholder=\"Ask me a question\", container=False, scale=7),\n",
        "  title=\"Firebase Genkit RAG Chatbot\",\n",
        "  description=\"Ask any question about products.\",\n",
        "  theme=\"soft\",\n",
        "  examples=[\n",
        "      {\"text\": \"What is the price of the Pixel 8 Pro?\"},\n",
        "      {\"text\": \"Tell me about the battery life of the Samsung Galaxy Watch5 Pro.\"},\n",
        "      {\"text\": \"Does the Google Pixel 8 Pro support 5G connectivity?\"}\n",
        "  ],\n",
        "  show_progress=True\n",
        ").launch(debug=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "98f76e0e"
      },
      "source": [
        "This will generate a public URL you can use to access the chatbot interface."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "201c6035"
      },
      "source": [
        "Now that your documents are indexed and the Gradio interface is running, you can start interacting with your RAG application.\n",
        "\n",
        "Open the Gradio interface using the URL provided and ask questions about the data, such as:\n",
        "\n",
        "- \"What is the price of the Pixel 8 Pro?\"\n",
        "- \"Tell me about the battery life of the Samsung Galaxy Watch5 Pro.\"\n",
        "- \"Does the Google Pixel 8 Pro support 5G connectivity?\"\n",
        "\n",
        "You should receive answers based on the data you provided in the `.txt` file."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ga6bfaC4VjHN"
      },
      "source": [
        "## Cleanup\n",
        "\n",
        "You've reached the end of the tutorial, so clean up the resources you created."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wWnfVKqoVkSC"
      },
      "outputs": [],
      "source": [
        "# Terminate all processes\n",
        "ollama_serve_process.terminate()\n",
        "ollama_run_process.terminate()\n",
        "genkit_process.terminate()\n",
        "\n",
        "# Delete Firebase project (press Y to confirm)\n",
        "!gcloud projects delete \"$PROJECT_ID\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fba92753"
      },
      "source": [
        "Congratulations! You have successfully built a RAG application using **Genkit**, **Firebase**, **Ollama**, **Gemma**, **Dotprompt**, and **Gradio**, all within a Colab notebook."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "[Gemma_2]Using_with_Firebase_Genkit_and_Ollama.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
