{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "TQ1nQmSbn9co"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "skeDTD9moBK5"
      },
      "source": [
        "# GraphRAG on Google Cloud With Spanner and Vertex AI Agent Engine"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LAoUEttbd9pg"
      },
      "source": [
        "|Author(s) | [Tristan Li](https://github.com/codingphun), [Smitha Venkat](https://github.com/smitha-google) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tC8nObmLw9P0"
      },
      "source": [
        "## Overview\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BpH_CYLqPzLw"
      },
      "source": [
        "Spanner Graph now [integrates seamlessly with LangChain](https://cloud.google.com/python/docs/reference/langchain-google-spanner/latest#spanner-graph-store-usage), making it easier to build GraphRAG applications.\n",
        "\n",
        "Instead of simply retrieving relevant text snippets based on keyword similarity, GraphRAG takes a more sophisticated, structured approach to Retrieval Augmented Generation. It involves creating a knowledge graph from the text, organizing it hierarchically, summarizing key concepts, and then using this structured information to enhance the accuracy and depth of responses.\n",
        "\n",
        "\n",
        "### Objectives\n",
        "\n",
        "In this tutorial, you will see a complete walkthrough of building a question-answering system using the GraphRAG method. You'll learn how to create a knowledge graph from scratch, store it efficiently in Spanner Graph, and build a functional FAQ system with a LangChain agent."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZyXBLYuod9pj"
      },
      "source": [
        "## Before you begin"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C5wHkD06RzZN"
      },
      "source": [
        "1. In the Google Cloud console, on the project selector page, select or [create a Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects).\n",
        "1. [Make sure that billing is enabled for your Google Cloud project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#console).\n",
        "1. [Make sure the Cloud Spanner API is enabled](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com).\n",
        "\n",
        "### Required roles\n",
        "\n",
        "To get the permissions that you need to complete the tutorial, ask your administrator to grant you the [Owner](https://cloud.google.com/iam/docs/understanding-roles#owner) (`roles/owner`) IAM role on your project. For more information about granting roles, see [Manage access](https://cloud.google.com/iam/docs/granting-changing-revoking-access).\n"
      ]
    },
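    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In addition to the Spanner API, this notebook calls Vertex AI. As a sketch (run only if you have permission to modify the project), the required APIs can be enabled with `gcloud`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Uncomment to enable the APIs used in this tutorial.\n",
        "# !gcloud services enable spanner.googleapis.com aiplatform.googleapis.com"
      ]
    },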
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "arJM4CK4r6cj"
      },
      "source": [
        "### Install Python Libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yhWJlYAmXSUq"
      },
      "outputs": [],
      "source": [
        "%pip install --quiet json-repair==0.30.2 networkx==3.3 langchain-core==0.3.59 langchain-google-vertexai==2.0.22 langchain-experimental==0.3.4 langchain-community==0.3.24 langchain-text-splitters==0.3.8\n",
        "%pip install --quiet google-cloud-resource-manager==1.13.1 pydantic==2.9.2\n",
        "%pip install --quiet google-cloud-spanner==3.48.0\n",
        "%pip install --quiet langchain-google-spanner==0.8.2\n",
        "%pip install --quiet google-adk==0.5.0\n",
        "%pip install --quiet google-cloud-aiplatform==1.91.0"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DoK1gW9dsRxB"
      },
      "source": [
        "### Authenticating your notebook environment\n",
        "* If you are using **Colab** to run this notebook, run the cell below and continue.\n",
        "* If you are using **Vertex AI Workbench**, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "os3H39sGXugN"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "3.11.12 (main, Apr  9 2025, 08:55:54) [GCC 11.4.0]\n"
          ]
        }
      ],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth as google_auth\n",
        "\n",
        "    google_auth.authenticate_user()\n",
        "print(sys.version)\n",
        "# If using local jupyter instance, uncomment and run:\n",
        "# !gcloud auth login"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CzaWjgsqsuuu"
      },
      "source": [
        "## Initialize and Import"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "zjz6i4vAXvZg"
      },
      "outputs": [],
      "source": [
        "# fmt: off\n",
        "GCP_PROJECT_ID = \"\"  # @param {type:\"string\"}\n",
        "REGION = \"us-central1\"  # @param {type:\"string\"}\n",
        "MODEL_NAME = \"gemini-2.0-flash-001\"  # @param {type:\"string\"}\n",
        "EMBEDDING_MODEL_NAME = \"text-embedding-004\"  # @param {type:\"string\"}\n",
        "TASK_TYPE = \"SEMANTIC_SIMILARITY\"  # @param {type:\"string\"}\n",
        "ANSWER_TASK_TYPE = \"RETRIEVAL_DOCUMENT\"  # @param {type:\"string\"}\n",
        "SPANNER_INSTANCE_ID = \"graphrag-instance\"  # @param {type:\"string\"}\n",
        "SPANNER_DATABASE_ID = \"graphrag\"  # @param {type:\"string\"}\n",
        "SPANNER_GRAPH_NAME = \"wikigraph\"  # @param {type:\"string\"}\n",
        "# fmt: on"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cXDp5uwyd9pk"
      },
      "outputs": [],
      "source": [
        "# Set the project id\n",
        "!gcloud config set project {GCP_PROJECT_ID} --quiet\n",
        "%env GOOGLE_CLOUD_PROJECT={GCP_PROJECT_ID}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GHyZn3Lns9YM"
      },
      "source": [
        "### Import Packages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "mCQuCvfrXxUM"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "from langchain_core.documents import Document\n",
        "from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
        "from langchain_google_vertexai import VertexAI\n",
        "from langchain_text_splitters import RecursiveCharacterTextSplitter"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Axb7c8Y1YmQ8"
      },
      "source": [
        "## Create Spanner Instance and Database\n",
        "\n",
        "Next, we'll create a Spanner instance and database to store the knowledge graph we are about to build. We'll also store the accompanying embeddings in Spanner to enable efficient vector-based semantic search.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "27gTtXr4m2n2"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Updated property [core/project].\n"
          ]
        }
      ],
      "source": [
        "!gcloud config set project {GCP_PROJECT_ID}\n",
        "!gcloud services enable spanner.googleapis.com\n",
        "!gcloud spanner instances create {SPANNER_INSTANCE_ID} --config=regional-us-central1 --description=\"Graph RAG Instance\" --nodes=1 --edition=ENTERPRISE"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "jsoqBPlDthc7"
      },
      "outputs": [],
      "source": [
        "# Create a Spanner database with a KgNode table to hold document chunks and their embeddings.\n",
        "\n",
        "\n",
        "def create_database(project_id, instance_id, database_id):\n",
        "    \"\"\"Creates a database and tables for sample data.\"\"\"\n",
        "    from google.cloud import spanner\n",
        "    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin\n",
        "\n",
        "    spanner_client = spanner.Client(project_id)\n",
        "    database_admin_api = spanner_client.database_admin_api\n",
        "\n",
        "    request = spanner_database_admin.CreateDatabaseRequest(\n",
        "        parent=database_admin_api.instance_path(spanner_client.project, instance_id),\n",
        "        create_statement=f\"CREATE DATABASE `{database_id}`\",\n",
        "        extra_statements=[\n",
        "            \"\"\"CREATE TABLE KgNode (\n",
        "            DocId        INT64 NOT NULL,\n",
        "            Name STRING(1024),\n",
        "            DOC STRING(1024),\n",
        "            DocEmbedding ARRAY<FLOAT64>\n",
        "            ) PRIMARY KEY (DocId)\"\"\"\n",
        "        ],\n",
        "    )\n",
        "\n",
        "    operation = database_admin_api.create_database(request=request)\n",
        "\n",
        "    print(\"Waiting for operation to complete...\")\n",
        "    OPERATION_TIMEOUT_SECONDS = 60\n",
        "    database = operation.result(OPERATION_TIMEOUT_SECONDS)\n",
        "\n",
        "    print(\n",
        "        f\"Created database {database.name} on instance {database_admin_api.instance_path(spanner_client.project, instance_id)}\"\n",
        "    )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7mVK78UWt2OJ"
      },
      "outputs": [],
      "source": [
        "from google.cloud import spanner\n",
        "\n",
        "create_database(GCP_PROJECT_ID, SPANNER_INSTANCE_ID, SPANNER_DATABASE_ID)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xSuWYhb62UJS"
      },
      "source": [
        "## Create a Knowledge Graph With LangChain and Gemini\n",
        "\n",
        "The following text, extracted from Wikipedia, is about [Larry Page](https://en.wikipedia.org/wiki/Larry_Page), co-founder of Google. It will be used to create a knowledge graph about Larry Page, as well as embedding vectors for semantic search."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "NIokGkZii43z"
      },
      "outputs": [],
      "source": [
        "text = \"\"\"Lawrence Edward Page (born March 26, 1973) is an American businessman and computer scientist best known for co-founding Google with Sergey Brin.\n",
        "Lawrence Edward Page was chief executive officer of Google from 1997 until August 2001 when he stepped down in favor of Eric Schmidt,\n",
        "and then again from April 2011 until July 2015 when he became CEO of its newly formed parent organization Alphabet Inc.\n",
        "He held that post until December 4, 2019, when he and Brin stepped down from all executive positions and day-to-day roles within the company.\n",
        "He remains an Alphabet board member, employee, and controlling shareholder. Lawrence Edward Page has an estimated net worth of $156 billion as of June 2024,\n",
        "according to the Bloomberg Billionaires Index, and $145.2 billion according to Forbes, making him the fifth-richest person in the world.\n",
        "He has also invested in flying car startups Kitty Hawk and Opener. Like his Google co-founder, Sergey Brin, Page attended Montessori schools until he entered high school.\n",
        "They both cite the educational method of Maria Montessori as the major influence in how they designed Google's work systems.\n",
        "Maria Montessori believed that the liberty of the child was of utmost importance. In some sense, I feel like music training led to the high-speed legacy of Google for me\"\"\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YDpwl48k27HU"
      },
      "source": [
        "We will use Gemini with LangChain's LLMGraphTransformer to parse the text and generate a knowledge graph.\n",
        "\n",
        "LangChain uses Gemini to identify and extract key information from the text, such as people, organizations, and assets, and constructs a knowledge graph constrained to the node and relationship types we define."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "ZIcYPheujsmc"
      },
      "outputs": [],
      "source": [
        "documents = [Document(page_content=text)]\n",
        "\n",
        "llm = VertexAI(model_name=MODEL_NAME, project=GCP_PROJECT_ID, location=REGION)\n",
        "\n",
        "llm_transformer_filtered = LLMGraphTransformer(\n",
        "    llm=llm,\n",
        "    allowed_nodes=[\"Person\", \"Country\", \"Organization\", \"Asset\"],\n",
        "    allowed_relationships=[\n",
        "        \"NATIONALITY\",\n",
        "        \"LOCATED_IN\",\n",
        "        \"WORKED_AT\",\n",
        "        \"SPOUSE\",\n",
        "        \"NET_WORTH\",\n",
        "        \"INVESTMENT\",\n",
        "        \"INFLUENCED_BY\",\n",
        "    ],\n",
        ")\n",
        "graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents(\n",
        "    documents\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QHXfcXadd9pl"
      },
      "source": [
        "## Store the Knowledge Graph in Spanner\n",
        "\n",
        "Now that the Spanner database is created, we will store the knowledge graph in the Spanner Graph Store."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "JrGjHvf0j7vu"
      },
      "outputs": [],
      "source": [
        "from langchain_google_spanner import SpannerGraphStore\n",
        "\n",
        "graph_store = SpannerGraphStore(\n",
        "    instance_id=SPANNER_INSTANCE_ID,\n",
        "    database_id=SPANNER_DATABASE_ID,\n",
        "    graph_name=SPANNER_GRAPH_NAME,\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "t1fV5Y2Ad9pl"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Waiting for DDL operations to complete...\n",
            "Waiting for DDL operations to complete...\n",
            "Insert nodes of type `Asset`...\n",
            "Insert nodes of type `Organization`...\n",
            "Insert nodes of type `Person`...\n",
            "Insert nodes of type `Country`...\n",
            "Insert edges of type `Person_WORKED_AT_Organization`...\n",
            "Insert edges of type `Person_NATIONALITY_Country`...\n",
            "Insert edges of type `Person_NET_WORTH_Asset`...\n",
            "Insert edges of type `Person_INVESTMENT_Organization`...\n",
            "Insert edges of type `Person_INFLUENCED_BY_Person`...\n"
          ]
        }
      ],
      "source": [
        "# Clean up any graph data left over from previous runs.\n",
        "# BEWARE - THIS REMOVES EXISTING DATA FROM YOUR DATABASE!\n",
        "graph_store.cleanup()\n",
        "\n",
        "for graph_document in graph_documents_filtered:\n",
        "    graph_store.add_graph_documents([graph_document])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aXIongn2d9pl"
      },
      "source": [
        "## Build a QnA Agent\n",
        "\n",
        "Let's build a QnA agent and ask a few quick questions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "cydr7r-gd9pl"
      },
      "outputs": [],
      "source": [
        "from langchain_google_spanner import SpannerGraphQAChain\n",
        "from langchain_google_vertexai import ChatVertexAI\n",
        "\n",
        "# Initialize llm object\n",
        "llm = ChatVertexAI(model=MODEL_NAME, temperature=0)\n",
        "\n",
        "# Initialize GraphQAChain\n",
        "chain = SpannerGraphQAChain.from_llm(\n",
        "    llm,\n",
        "    graph=graph_store,\n",
        "    allow_dangerous_requests=True,\n",
        "    verbose=False,\n",
        "    return_intermediate_steps=True,\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "SVJEGkmZd9pm"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "'Lawrence Edward Page invests in Kitty Hawk and Opener.\\n'"
            ]
          },
          "execution_count": 13,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# fmt: off\n",
        "question = \"What businesses does Lawrence Edward Page invest in?\"  # @param {type:\"string\"}\n",
        "# fmt: on\n",
        "response = chain.invoke(question)\n",
        "response[\"result\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RrLJ3I-9aULZ"
      },
      "source": [
        "**Important: Magic cell below only works on Colab**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "AeLNbYT0erih"
      },
      "outputs": [],
      "source": [
        "%%spanner_graph --project {GCP_PROJECT_ID} --instance {SPANNER_INSTANCE_ID} --database {SPANNER_DATABASE_ID}\n",
        "\n",
        "GRAPH wikigraph\n",
        "MATCH p = (a)-[e]->(b)\n",
        "RETURN TO_JSON(p) AS path_json\n",
        "LIMIT 50"
      ]
    },
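    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If you are not running on Colab, here is a sketch of the same graph query using the Spanner Python client directly (Spanner Graph executes GQL through the standard `execute_sql` API):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Run the GQL query through the Spanner client instead of the Colab-only magic.\n",
        "spanner_client = spanner.Client(GCP_PROJECT_ID)\n",
        "instance = spanner_client.instance(SPANNER_INSTANCE_ID)\n",
        "database = instance.database(SPANNER_DATABASE_ID)\n",
        "\n",
        "with database.snapshot() as snapshot:\n",
        "    rows = snapshot.execute_sql(\n",
        "        \"\"\"GRAPH wikigraph\n",
        "        MATCH p = (a)-[e]->(b)\n",
        "        RETURN TO_JSON(p) AS path_json\n",
        "        LIMIT 50\"\"\"\n",
        "    )\n",
        "    for row in rows:\n",
        "        print(row[0])"
      ]
    },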
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e4lDWxd8Uzvr"
      },
      "source": [
        "## Enhance Search Capability"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "38aY1a-Dd9pm"
      },
      "source": [
        "Now if we rephrase the question with \"Larry Page\" instead of his legal name \"Lawrence Edward Page\", the graph query fails because it relies on exact keyword matching. Semantic search, using embeddings and vector similarity, overcomes this by capturing the meaning of and relationships between words, recognizing that both names refer to the same person. On the flip side, semantic search alone does not always return the most accurate result. The examples below illustrate the challenges of each approach."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "52o_qCYpb1Ut"
      },
      "source": [
        "#### Generate the embeddings"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "id": "x7XGJrb2d9pm"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[0.04362960532307625, -0.015628624707460403, -0.03651438280940056, 0.07635214179754257, 0.025854211300611496, 0.016094792634248734, 0.032845646142959595, -0.025009311735630035, 0.033768218010663986, 0.0557292141020298] Lawrence E Lawrence Edward Page, Sergey Brin, Eric Schmidt\n",
            "[0.05250556766986847, 0.0059433588758111, -0.053659338504076004, 0.0366397500038147, 0.027847468852996826, 0.006960400380194187, 0.0055380454286932945, -0.03739484027028084, 0.05096553638577461, 0.06589022278785706] He held th Brin, Lawrence Edward Page\n",
            "[0.00012151900591561571, -0.04247371852397919, -0.06008194014430046, 0.08208870142698288, 0.022667380049824715, 0.015254673548042774, 0.03223690763115883, -0.016212889924645424, 0.02410755306482315, 0.021432554349303246] He has als Sergey Brin, Page, Maria Montessori\n"
          ]
        }
      ],
      "source": [
        "import json\n",
        "\n",
        "import vertexai\n",
        "from vertexai.generative_models import GenerationConfig, GenerativeModel\n",
        "from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel\n",
        "\n",
        "# init the vertexai package\n",
        "vertexai.init(project=GCP_PROJECT_ID, location=REGION)\n",
        "\n",
        "\n",
        "def get_embedding(text, task_type, model):\n",
        "    try:\n",
        "        text_embedding_input = TextEmbeddingInput(task_type=task_type, text=text)\n",
        "        embeddings = model.get_embeddings([text_embedding_input])\n",
        "        return embeddings[0].values\n",
        "    except Exception:\n",
        "        # Return an empty list if the embedding request fails.\n",
        "        return []\n",
        "\n",
        "\n",
        "embedding_model = TextEmbeddingModel.from_pretrained(EMBEDDING_MODEL_NAME)\n",
        "text_model = GenerativeModel(MODEL_NAME)\n",
        "documents = [Document(page_content=text)]\n",
        "\n",
        "spanner_embedding_values = []\n",
        "\n",
        "\n",
        "splitter = RecursiveCharacterTextSplitter(\n",
        "    chunk_size=500, chunk_overlap=100, separators=[\"\\n\\n\", \"\\n\", r\"(?<=\\. )\", \" \", \"\"]\n",
        ")\n",
        "splitted_text = splitter.split_documents(documents)\n",
        "for chunk in splitted_text:\n",
        "    chunk_content = chunk.page_content\n",
        "    embedding = get_embedding(chunk_content, ANSWER_TASK_TYPE, embedding_model)\n",
        "    user_prompt_content = (\n",
        "        \"Find person's names but ignore any pronoun in the following sentence \\n\"\n",
        "        + chunk_content\n",
        "    )\n",
        "    response = text_model.generate_content(\n",
        "        user_prompt_content,\n",
        "        generation_config=GenerationConfig(\n",
        "            temperature=0,\n",
        "            response_mime_type=\"application/json\",\n",
        "            response_schema={\n",
        "                \"type\": \"OBJECT\",\n",
        "                \"properties\": {\"nodes\": {\"type\": \"STRING\"}},\n",
        "            },\n",
        "        ),\n",
        "    )\n",
        "    response_content = json.loads(response.candidates[0].content.parts[0].text)[\"nodes\"]\n",
        "    print(embedding[:10], chunk_content[:10], response_content)\n",
        "    spanner_embedding_values.append([embedding, chunk_content, response_content])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pADGrRrKnB4n"
      },
      "source": [
        "#### Knowledge graph search only - no result found"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "id": "Vr6aAs2Qd9pm"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "\"I don't know the answer.\""
            ]
          },
          "execution_count": 15,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# fmt: off\n",
        "question = \"What businesses does Larry Page invest in?\"  # @param {type:\"string\"}\n",
        "# fmt: on\n",
        "response = chain.invoke(question)\n",
        "response[\"result\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NUZTMinum163"
      },
      "source": [
        "#### Semantic search only - top similarity match is incorrect"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "id": "4HWdbakKeJvi"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "chunk: Lawrence Edward Page (born March 26, 1973) is an American businessman and computer scientist best known for co-founding Google with Sergey Brin.\n",
            "Lawrence Edward Page was chief executive officer of Google from 1997 until August 2001 when he stepped down in favor of Eric Schmidt,\n",
            "and then again from April 2011 until July 2015 when he became CEO of its newly formed parent organization Alphabet Inc. \n",
            " similarity: [[0.78622893]] \n",
            "\n",
            "chunk: He held that post until December 4, 2019, when he and Brin stepped down from all executive positions and day-to-day roles within the company.\n",
            "He remains an Alphabet board member, employee, and controlling shareholder. Lawrence Edward Page has an estimated net worth of $156 billion as of June 2024,\n",
            "according to the Bloomberg Billionaires Index, and $145.2 billion according to Forbes, making him the fifth-richest person in the world. \n",
            " similarity: [[0.81792139]] \n",
            "\n",
            "chunk: He has also invested in flying car startups Kitty Hawk and Opener. Like his Google co-founder, Sergey Brin, Page attended Montessori schools until he entered high school.\n",
            "They both cite the educational method of Maria Montessori as the major influence in how they designed Google's work systems.\n",
            "Maria Montessori believed that the liberty of the child was of utmost importance. In some sense, I feel like music training led to the high-speed legacy of Google for me \n",
            " similarity: [[0.78612123]] \n",
            "\n"
          ]
        }
      ],
      "source": [
        "from sklearn.metrics.pairwise import cosine_similarity\n",
        "\n",
        "# fmt: off\n",
        "QUESTION = \"What businesses does Larry Page invest in?\"  # @param {type:\"string\"}\n",
        "# fmt: on\n",
        "\n",
        "q_emb = get_embedding(QUESTION, ANSWER_TASK_TYPE, embedding_model)\n",
        "\n",
        "for emb in spanner_embedding_values:\n",
        "    print(f\"chunk: {emb[1]} \\n similarity: {cosine_similarity([q_emb], [emb[0]])} \\n\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4OVOZZ1Ld9pm"
      },
      "source": [
        "#### Save the embeddings into Spanner"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "2K4tCvssd9pm"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "1 record(s) inserted.\n",
            "1 record(s) inserted.\n",
            "1 record(s) inserted.\n"
          ]
        }
      ],
      "source": [
        "spanner_client = spanner.Client(GCP_PROJECT_ID)\n",
        "instance = spanner_client.instance(SPANNER_INSTANCE_ID)\n",
        "database = instance.database(SPANNER_DATABASE_ID)\n",
        "\n",
        "\n",
        "def insert_values(transaction):\n",
        "    value1 = 0\n",
        "    for sub_list in spanner_embedding_values:\n",
        "        table_name = \"KgNode\"\n",
        "        col_name1 = \"DocId\"\n",
        "        col_name2 = \"Name\"\n",
        "        col_name3 = \"Doc\"\n",
        "        col_name4 = \"DocEmbedding\"\n",
        "        value1 += 1\n",
        "        value2 = sub_list[2]\n",
        "        value3 = sub_list[1]\n",
        "        value4 = sub_list[0]\n",
        "        # print(col_name1, col_name2, col_name3, col_name4, value1, value2, value3, value4[:10])\n",
        "        row_ct1 = transaction.execute_update(\n",
        "            f\"INSERT INTO {table_name} ({col_name1}, {col_name2}, {col_name3}, {col_name4}) VALUES (@value1, @value2, @value3, @value4)\",\n",
        "            params={\n",
        "                \"value1\": value1,\n",
        "                \"value2\": value2,\n",
        "                \"value3\": value3,\n",
        "                \"value4\": value4,\n",
        "            },\n",
        "            param_types={\n",
        "                \"value1\": spanner.param_types.INT64,\n",
        "                \"value2\": spanner.param_types.STRING,\n",
        "                \"value3\": spanner.param_types.STRING,\n",
        "                \"value4\": spanner.param_types.Array(spanner.param_types.FLOAT64),\n",
        "            },\n",
        "        )\n",
        "\n",
        "        print(f\"{row_ct1} record(s) inserted.\")\n",
        "\n",
        "\n",
        "database.run_in_transaction(insert_values)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ycefZwNfYTVK"
      },
      "source": [
        "#### Combine semantic search and graph search - asking the question again gives the correct answer!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "id": "nD6gyU1bd9pm"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "What businesses does Lawrence Edward Page invest in?\n"
          ]
        },
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "'Lawrence Edward Page invests in Kitty Hawk and Opener.\\n'"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import json\n",
        "\n",
        "import vertexai\n",
        "from vertexai.generative_models import GenerationConfig, GenerativeModel\n",
        "from vertexai.language_models import TextEmbeddingModel\n",
        "\n",
        "# fmt: off\n",
        "QUESTION = \"What businesses does Larry Page invest in?\"  # @param {type:\"string\"}\n",
        "# fmt: on\n",
        "q_emb = get_embedding(QUESTION, TASK_TYPE, embedding_model)\n",
        "\n",
        "spanner_client = spanner.Client(GCP_PROJECT_ID)\n",
        "instance = spanner_client.instance(SPANNER_INSTANCE_ID)\n",
        "database = instance.database(SPANNER_DATABASE_ID)\n",
        "kgnodename = \"\"\n",
        "with database.snapshot() as snapshot:\n",
        "    results = snapshot.execute_sql(\n",
        "        \"\"\"SELECT DocId, NAME, Doc FROM KgNode ORDER BY COSINE_DISTANCE(DocEmbedding, @q_emb) limit 1\"\"\",\n",
        "        params={\"q_emb\": q_emb},\n",
        "        param_types={\n",
        "            \"q_emb\": spanner.param_types.Array(spanner.param_types.FLOAT64)\n",
        "        },  # Adjust FLOAT64 if needed\n",
        "    )\n",
        "    for row in results:\n",
        "        kgnodename = str(row[1])\n",
        "\n",
        "text_model = GenerativeModel(MODEL_NAME)\n",
        "user_prompt_content = (\n",
        "    \"Find and replace entities such as person's name, place, nationality in the following sentence \\n\"\n",
        "    + QUESTION\n",
        "    + \"with entities defined below \\n\"\n",
        "    + kgnodename\n",
        "    + \"\\n only replace matching person's name \\n output the best replacement in a string\"\n",
        ")\n",
        "# print (user_prompt_content)\n",
        "response = text_model.generate_content(\n",
        "    user_prompt_content,\n",
        "    generation_config=GenerationConfig(\n",
        "        temperature=0,\n",
        "        response_mime_type=\"application/json\",\n",
        "        response_schema={\n",
        "            \"type\": \"OBJECT\",\n",
        "            \"properties\": {\"sentence\": {\"type\": \"STRING\"}},\n",
        "        },\n",
        "    ),\n",
        ")\n",
        "response_content = json.loads(response.candidates[0].content.parts[0].text)[\"sentence\"]\n",
        "print(response_content)\n",
        "response = chain.invoke(\"query=\" + response_content)\n",
        "response[\"result\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FBI5WJcgIYUM"
      },
      "source": [
        "## Agent Development Kit\n",
        "\n",
        "The [Agent Development Kit (ADK)](https://google.github.io/adk-docs/) is a flexible, modular framework for developing and deploying AI agents, designed to be model-agnostic, deployment-agnostic, and compatible with other frameworks, despite its optimization for Gemini and the Google ecosystem.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "id": "PizppsQqJN52"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "import vertexai\n",
        "from google.adk.agents import Agent\n",
        "from google.cloud import spanner\n",
        "from langchain_google_spanner import SpannerGraphStore\n",
        "from vertexai.generative_models import (\n",
        "    GenerationConfig,\n",
        "    GenerativeModel,\n",
        ")\n",
        "from vertexai.language_models import TextEmbeddingModel\n",
        "\n",
        "os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"] = \"1\"\n",
        "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = GCP_PROJECT_ID\n",
        "os.environ[\"GOOGLE_CLOUD_LOCATION\"] = REGION"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5I_sVhDrLfO8"
      },
      "source": [
        "#### Create helper functions for function calling\n",
        "\n",
        "We will create a couple python functions for function calling for the agent. The first function is to generate embeddings of the user query. The second function is to query the Spanner database combining both semantic search and graph search."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {
        "id": "jcbBn4teLqVz"
      },
      "outputs": [],
      "source": [
        "def ask_graph(query: str) -> dict:\n",
        "    from google.cloud import spanner\n",
        "    from langchain_google_spanner import SpannerGraphQAChain, SpannerGraphStore\n",
        "    from langchain_google_vertexai import ChatVertexAI\n",
        "\n",
        "    GCP_PROJECT_ID = \"\"\n",
        "    REGION = \"us-central1\"\n",
        "    MODEL_NAME = \"gemini-2.0-flash-001\"\n",
        "    EMBEDDING_MODEL_NAME = \"text-embedding-004\"\n",
        "    TASK_TYPE = \"SEMANTIC_SIMILARITY\"\n",
        "    SPANNER_INSTANCE_ID = \"graphrag-instance\"\n",
        "    SPANNER_DATABASE_ID = \"graphrag\"\n",
        "    SPANNER_GRAPH_NAME = \"wikigraph\"\n",
        "\n",
        "    graph_store = SpannerGraphStore(\n",
        "        instance_id=SPANNER_INSTANCE_ID,\n",
        "        database_id=SPANNER_DATABASE_ID,\n",
        "        graph_name=SPANNER_GRAPH_NAME,\n",
        "    )\n",
        "\n",
        "    # Initialize llm object\n",
        "    llm = ChatVertexAI(model=MODEL_NAME, temperature=0)\n",
        "\n",
        "    # Initialize GraphQAChain\n",
        "    chain = SpannerGraphQAChain.from_llm(\n",
        "        llm,\n",
        "        graph=graph_store,\n",
        "        allow_dangerous_requests=True,\n",
        "        verbose=False,\n",
        "        return_intermediate_steps=True,\n",
        "    )\n",
        "\n",
        "    embedding_model = TextEmbeddingModel.from_pretrained(EMBEDDING_MODEL_NAME)\n",
        "    text_embedding_input = TextEmbeddingInput(task_type=TASK_TYPE, text=query)\n",
        "    q_emb = embedding_model.get_embeddings([text_embedding_input])[0].values\n",
        "    spanner_client = spanner.Client(GCP_PROJECT_ID)\n",
        "    instance = spanner_client.instance(SPANNER_INSTANCE_ID)\n",
        "    database = instance.database(SPANNER_DATABASE_ID)\n",
        "    kgnodename = \"\"\n",
        "    with database.snapshot() as snapshot:\n",
        "        results = snapshot.execute_sql(\n",
        "            \"\"\"SELECT DocId, NAME, Doc FROM KgNode ORDER BY COSINE_DISTANCE(DocEmbedding, @q_emb) limit 1\"\"\",\n",
        "            params={\"q_emb\": q_emb},\n",
        "            param_types={\n",
        "                \"q_emb\": spanner.param_types.Array(spanner.param_types.FLOAT64)\n",
        "            },  # Adjust FLOAT64 if needed\n",
        "        )\n",
        "        for row in results:\n",
        "            kgnodename = str(row[1])\n",
        "\n",
        "    text_model = GenerativeModel(MODEL_NAME)\n",
        "    user_prompt_content = (\n",
        "        \"Find and replace entities such as person's name, place, nationality in the following sentence \\n\"\n",
        "        + query\n",
        "        + \"with entities defined below \\n\"\n",
        "        + kgnodename\n",
        "        + \"\\n only replace matching person's name \\n output the best replacement in a string\"\n",
        "    )\n",
        "    response = text_model.generate_content(\n",
        "        user_prompt_content,\n",
        "        generation_config=GenerationConfig(\n",
        "            temperature=0,\n",
        "            response_mime_type=\"application/json\",\n",
        "            response_schema={\n",
        "                \"type\": \"OBJECT\",\n",
        "                \"properties\": {\"sentence\": {\"type\": \"STRING\"}},\n",
        "            },\n",
        "        ),\n",
        "    )\n",
        "    response_content = json.loads(response.candidates[0].content.parts[0].text)[\n",
        "        \"sentence\"\n",
        "    ]\n",
        "    response = chain.invoke(\"query=\" + response_content)\n",
        "    response[\"result\"]\n",
        "    return response[\"result\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2O_mNmCLLvu1"
      },
      "source": [
        "#### Create an ADK Agent\n",
        "\n",
        "Using the Google Agent Development Kit, we will create an agent and give it high level instructions on how to interact with users. ADK does have [built in tools](https://google.github.io/adk-docs/tools/built-in-tools/#how-to-use) for leveraging Google Search, Code Execution and Vertex AI Search.  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "id": "M6KOhbcNL18b"
      },
      "outputs": [],
      "source": [
        "from google.adk.tools import agent_tool, google_search\n",
        "\n",
        "search_agent = Agent(\n",
        "    model=\"gemini-2.0-flash-001\",\n",
        "    name=\"SearchAgent\",\n",
        "    instruction=\"\"\"\n",
        "    You're a specialist in Google Search\n",
        "    \"\"\",\n",
        "    tools=[google_search],\n",
        ")\n",
        "\n",
        "root_agent = Agent(\n",
        "    name=\"graph_rag_agent\",\n",
        "    model=\"gemini-2.0-flash-001\",\n",
        "    description=(\n",
        "        \"Agent to answer questions from a graph database and google search if information not present in the database.\"\n",
        "    ),\n",
        "    instruction=(\n",
        "        \"\"\"You are a helpful information retrieval agent that can answer user's query from the\n",
        "        knowledge graph and do a broader search if you cant find answer in the graph database.\n",
        "          - After you get the user query, always check the graph database first.\n",
        "          - If the query can be answered from the graph, then call the ask_graph tool.\n",
        "          - If you are not able to find the answer in the graph, ask the user if they would like to do\n",
        "          a broader search.\n",
        "          - If the user says yes, then call the google_search tool.\n",
        "          - If the user says no, then ask them if there is anything else they would like to know.\n",
        "          - Always be courteous and dont assume anything.\n",
        "          - If you dont know an answer, please say I dont know the answer.\"\"\"\n",
        "    ),\n",
        "    tools=[agent_tool.AgentTool(agent=search_agent), ask_graph],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6r8KU3XXMCL2"
      },
      "source": [
        "#### Test the ADK Agent locally\n",
        "\n",
        "We will create a session for the agent to run in"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {
        "id": "0yq6gc5NMEUd"
      },
      "outputs": [],
      "source": [
        "from google.adk.agents import Agent\n",
        "from google.adk.runners import Runner\n",
        "from google.adk.sessions import InMemorySessionService\n",
        "from google.adk.tools import google_search\n",
        "from google.genai import types\n",
        "\n",
        "APP_NAME = \"google_search_agent\"\n",
        "USER_ID = \"user1234\"\n",
        "SESSION_ID = \"1234\"\n",
        "\n",
        "session_service = InMemorySessionService()\n",
        "session = session_service.create_session(\n",
        "    app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID\n",
        ")\n",
        "runner = Runner(agent=root_agent, app_name=APP_NAME, session_service=session_service)\n",
        "\n",
        "# Agent Interaction\n",
        "\n",
        "\n",
        "def call_agent(query):\n",
        "    \"\"\"Helper function to call the agent with a query.\"\"\"\n",
        "    content = types.Content(role=\"user\", parts=[types.Part(text=query)])\n",
        "    events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content)\n",
        "\n",
        "    for event in events:\n",
        "        if event.is_final_response():\n",
        "            final_response = event.content.parts[0].text\n",
        "            print(\"Agent Response: \", final_response)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "498mkJMSMLM3"
      },
      "source": [
        "Let's test out the agent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "id": "ImdPXhJTMN8X"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "WARNING:google_genai.types:Warning: there are non-text parts in the response: ['function_call'],returning concatenated text result from text parts,check out the non text parts for full response from model.\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Agent Response:  Lawrence Edward Page invests in Kitty Hawk and Opener. Would you like me to do a broader search to see if he invests in any other businesses?\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# Test your Agent\n",
        "call_agent(\"What businesses does Larry Page invest in?\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1cwr_UavM8-k"
      },
      "source": [
        "#### Google Search Grounding\n",
        "\n",
        "Currently, the agent retrieves answers from the Spanner-backed knowledge graph.  For queries beyond the knowledge graph's scope, we can augement it with Google Search."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {
        "id": "To-V-eLXMod7"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "WARNING:google_genai.types:Warning: there are non-text parts in the response: ['function_call'],returning concatenated text result from text parts,check out the non text parts for full response from model.\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Agent Response:  Lawrence Edward Page did not invest in Deepseek. Would you like me to do a broader search to see if there is any information about this?\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# Calling the agent with another question that cannot be answered from knowledge graph\n",
        "call_agent(\"Did Larry Page invest in Deepseek?\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 27,
      "metadata": {
        "id": "pEFgBw98Mw8M"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "WARNING:google_genai.types:Warning: there are non-text parts in the response: ['function_call'],returning concatenated text result from text parts,check out the non text parts for full response from model.\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Agent Response:  Although there's no direct evidence that Larry Page has invested in DeepSeek, DeepSeek's AI models have impacted the stock market and, as a result, have indirectly affected the wealth of tech figures like Larry Page. Would you like me to provide you with more information?\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# Answer yes to ask your agent to perform a Google Search\n",
        "call_agent(\"Yes\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dwHCEAJlNn3H"
      },
      "source": [
        "## Vertex AI Agent Engine\n",
        "\n",
        "The [Vertex AI Agent Engine](https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/overview) is a fully managed Google Cloud service that enables developers to deploy, manage, and scale AI agents efficiently in production. By abstracting away infrastructure complexities, it allows development teams to focus entirely on creating intelligent and impactful agentic applications."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 28,
      "metadata": {
        "id": "DaLJ8uqGOtWt"
      },
      "outputs": [],
      "source": [
        "from vertexai.preview import reasoning_engines\n",
        "\n",
        "app = reasoning_engines.AdkApp(\n",
        "    agent=root_agent,\n",
        "    enable_tracing=True,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JfjX70JcTwUa"
      },
      "source": [
        "Let's test it again locally"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {
        "id": "XzeW49hISLF1"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "WARNING:google_genai.types:Warning: there are non-text parts in the response: ['function_call'],returning concatenated text result from text parts,check out the non text parts for full response from model.\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "{'content': {'parts': [{'function_call': {'id': 'adk-4a7274d7-65d9-4861-bccb-f4e816dfb792', 'args': {'query': 'What businesses does Larry Page invest in?'}, 'name': 'ask_graph'}}], 'role': 'model'}, 'invocation_id': 'e-0530bebf-d3d5-4b9a-ab4d-0b23a57593a3', 'author': 'graph_rag_agent', 'actions': {'state_delta': {}, 'artifact_delta': {}, 'requested_auth_configs': {}}, 'long_running_tool_ids': set(), 'id': 'Trb8qyRS', 'timestamp': 1747067830.149277}\n",
            "{'content': {'parts': [{'function_response': {'id': 'adk-4a7274d7-65d9-4861-bccb-f4e816dfb792', 'name': 'ask_graph', 'response': {'result': 'Lawrence Edward Page invests in Kitty Hawk and Opener.\\n'}}}], 'role': 'user'}, 'invocation_id': 'e-0530bebf-d3d5-4b9a-ab4d-0b23a57593a3', 'author': 'graph_rag_agent', 'actions': {'state_delta': {}, 'artifact_delta': {}, 'requested_auth_configs': {}}, 'id': '1z7a3pyJ', 'timestamp': 1747067836.319501}\n",
            "{'content': {'parts': [{'text': 'Lawrence Edward Page invests in Kitty Hawk and Opener. Would you like me to do a broader search to find other businesses he invests in?\\n'}], 'role': 'model'}, 'invocation_id': 'e-0530bebf-d3d5-4b9a-ab4d-0b23a57593a3', 'author': 'graph_rag_agent', 'actions': {'state_delta': {}, 'artifact_delta': {}, 'requested_auth_configs': {}}, 'id': 'XtJGNJ87', 'timestamp': 1747067836.321095}\n"
          ]
        }
      ],
      "source": [
        "for event in app.stream_query(\n",
        "    user_id=\"43\",\n",
        "    message=\"What businesses does Larry Page invest in?\",\n",
        "):\n",
        "    print(event)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DzcrUKHfYRql"
      },
      "source": [
        "#### Deploy to Vertex AI Agent Engine\n",
        "\n",
        "Give service-PROJECT_NUMBER@gcp-sa-aiplatform-re.iam.gserviceaccount.com Cloud Spanner Database User role or Vertex AI Reasoning Engine will not have access to the Spanner database"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_n71HAPNYaoi"
      },
      "outputs": [],
      "source": [
        "!gcloud projects add-iam-policy-binding {GCP_PROJECT_ID} \\\n",
        "      --member='serviceAccount:service-{GCP_PROJECT_NUMBER}@gcp-sa-aiplatform-re.iam.gserviceaccount.com' \\\n",
        "      --role='roles/spanner.databaseUser'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FtlUZjeYYcm7"
      },
      "source": [
        "Deploy the package"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6451BsttOvbo"
      },
      "outputs": [],
      "source": [
        "from vertexai import agent_engines\n",
        "\n",
        "STAGING_BUCKET = f\"gs://{GCP_PROJECT_ID}-vertexai-staging\"\n",
        "\n",
        "vertexai.init(\n",
        "    project=GCP_PROJECT_ID,\n",
        "    location=REGION,\n",
        "    staging_bucket=STAGING_BUCKET,\n",
        ")\n",
        "\n",
        "remote_app = agent_engines.create(\n",
        "    app,\n",
        "    requirements=[\n",
        "        \"google-cloud-aiplatform==1.91.0\",\n",
        "        \"google-adk==0.5.0 \",\n",
        "        \"google-cloud-spanner==3.48.0\",\n",
        "        \"langchain-google-spanner==0.8.2\",\n",
        "        \"langchain-google-vertexai==2.0.22\",\n",
        "        \"langchain-experimental==0.3.4\",\n",
        "    ],\n",
        ")\n",
        "#        \"cloudpickle==3.1.1\",\n",
        "#        \"pydantic==2.9.2\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "yqtHrs8P3HG9"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "{'content': {'parts': [{'function_call': {'id': 'adk-527d88f2-6b8a-40f3-b9dd-50b477f92339', 'args': {'query': 'What businesses does Larry Page invest in?'}, 'name': 'ask_graph'}}], 'role': 'model'}, 'invocation_id': 'e-17225639-2f06-4fce-95ae-06603a4a4a05', 'author': 'graph_rag_agent', 'actions': {'state_delta': {}, 'artifact_delta': {}, 'requested_auth_configs': {}}, 'long_running_tool_ids': [], 'id': 'g76zfd1M', 'timestamp': 1747068089.968902}\n",
            "{'content': {'parts': [{'function_response': {'id': 'adk-527d88f2-6b8a-40f3-b9dd-50b477f92339', 'name': 'ask_graph', 'response': {'result': 'Lawrence Edward Page invests in Kitty Hawk and Opener.\\n'}}}], 'role': 'user'}, 'invocation_id': 'e-17225639-2f06-4fce-95ae-06603a4a4a05', 'author': 'graph_rag_agent', 'actions': {'state_delta': {}, 'artifact_delta': {}, 'requested_auth_configs': {}}, 'id': 'QLwNY9jQ', 'timestamp': 1747068097.19637}\n",
            "{'content': {'parts': [{'text': 'Lawrence Edward Page invests in Kitty Hawk and Opener. Would you like me to perform a broader search to look for other businesses he may have invested in?\\n'}], 'role': 'model'}, 'invocation_id': 'e-17225639-2f06-4fce-95ae-06603a4a4a05', 'author': 'graph_rag_agent', 'actions': {'state_delta': {}, 'artifact_delta': {}, 'requested_auth_configs': {}}, 'id': 'dACMoPCP', 'timestamp': 1747068097.268164}\n"
          ]
        }
      ],
      "source": [
        "# remote_app = vertexai.agent_engines.get('agent_engine_uri')\n",
        "for event in remote_app.stream_query(\n",
        "    user_id=\"43\",\n",
        "    message=\"What businesses does Larry Page invest in?\",\n",
        "):\n",
        "    print(event)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QbjhQtdcPEAD"
      },
      "source": [
        "Because we enabled [Tracing](https://cloud.google.com/trace/docs/overview) in Vertex AI Agent Engine, we can leverage it for troublshooting and monitoring."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hbOxV-FdPe00"
      },
      "source": [
        "![Screenshot 2025-05-12 at 12.02.39 PM.png]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vaY6vhFE9egF"
      },
      "source": [
        "## Clean Up\n",
        "\n",
        "*   Delete the Spanner instance\n",
        "*   Delete the Vertex AI Agent Engine instance"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Qb4FcKf9dS-d"
      },
      "outputs": [],
      "source": [
        "!gcloud spanner instances delete {SPANNER_INSTANCE_ID} --quiet"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "HCOr8HuJ1p7S"
      },
      "outputs": [],
      "source": [
        "remote_app.delete(force=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IKFKrjkud9pn"
      },
      "source": [
        "## What's next\n",
        "\n",
        "* Dive deeper into [LangChain with Spanner](https://github.com/googleapis/langchain-google-spanner-python/tree/main).\n",
        "* Learn more about [Spanner](https://cloud.google.com/spanner/docs/getting-started/python).\n",
        "* Explore other [Spanner Graph Notebooks](https://github.com/cloudspannerecosystem/spanner-graph-notebook/blob/main/README.md).\n",
        "* Learn more about [Vertex AI Engine](https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/overview).\n",
        "* Learn more about [Agent Development Kit](https://google.github.io/adk-docs/)."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [
        "tC8nObmLw9P0",
        "zTa3RFWesfsL",
        "DoK1gW9dsRxB",
        "Axb7c8Y1YmQ8",
        "QHXfcXadd9pl",
        "4OVOZZ1Ld9pm",
        "IKFKrjkud9pn"
      ],
      "name": "graph_rag_spanner_sdk_adk.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
