{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "RxXgWtxtz9Ew"
   },
   "outputs": [],
   "source": [
    "# Copyright 2023 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "C5SwRvvKJvcz"
   },
   "source": [
    "# **Building AI-powered data-driven applications using pgvector, LangChain and LLMs**\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "27wCSEVJhmjp"
   },
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GoogleCloudPlatform/python-docs-samples/blob/main/cloud-sql/postgres/pgvector/notebooks/pgvector_gen_ai_demo.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_GhFgSOldCvV"
   },
   "source": [
     "This hands-on tutorial shows you how to add generative AI features to your own applications with just a few lines of code, using pgvector, LangChain and LLMs on Google Cloud.\n",
     "\n",
     "Together, we will build a sample Python application that can understand and respond to natural language queries about the relational data stored in your PostgreSQL database. We will then push the creative limits of the application further by teaching it to generate new content based on our existing dataset.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "93QUUCrvKm03"
   },
   "source": [
    "## Objective\n",
    "\n",
    "After completing the steps in this notebook:\n",
    "- You will have a good understanding of how to use the [pgvector extension](https://github.com/pgvector/pgvector) to store and search vector embeddings in PostgreSQL. Learn more about [vector embeddings](https://cloud.google.com/blog/topics/developers-practitioners/meet-ais-multitool-vector-embeddings).\n",
     "- You will get hands-on experience using the open-source [LangChain framework](https://python.langchain.com/en/latest/index.html) to develop applications powered by large language models. LangChain makes it easier to develop and deploy applications against any LLM in a vendor-agnostic manner.\n",
    "- You will learn about the powerful features in [Google PaLM models made available through Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DS7GdlJ1XowY"
   },
   "source": [
    "## Example Scenario\n",
    "\n",
     "This notebook uses the example of an e-commerce company that runs an online marketplace for buying and selling children's toys. The company wants to add new generative AI experiences to its e-commerce applications for both buyers and sellers on its platform.\n",
    "\n",
    "The goals are:\n",
    "\n",
     "- (_Use case 1_) For buyers: Build a new AI-powered hybrid search, where users can describe their needs in plain English, combined with regular filters (such as price).\n",
     "- (_Use case 2_) For sellers: Add a new AI-powered content generation feature that suggests auto-generated item descriptions for new products that sellers want to add to the platform.\n",
    "\n",
    "\n",
    "Dataset:\n",
    "- The dataset for this notebook has been sampled and created from a larger public retail dataset available at [Kaggle](https://www.kaggle.com/datasets/promptcloud/walmart-product-details-2020). The sampled dataset used in this notebook has only about 800 toy products, while the public dataset has over 370,000 products in different categories."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OopTDfqckI57"
   },
   "source": [
    "## Overview of the steps\n",
    "\n",
    "1. Download the dataset and load it into a PostgreSQL table called `products`.\n",
    "   This table has 4 fields: `product_id`, `product_name`, `description`, `list_price`.\n",
    "2. Split the long `description` field values into smaller chunks and generate\n",
    "   vector embeddings for each chunk. The vector embeddings are then stored in another PostgreSQL table called `product_embeddings` using the `pgvector` extension. The `product_embeddings` table has a foreign key referencing the `products` table.\n",
    "3. For a given user query, generate its vector embeddings and use `pgvector`\n",
    "   vector similarity search operators to find closest matching products _after applying the relevant SQL filters._\n",
     "4. Once matching products and their descriptions are found, use the [MapReduceChain](https://python.langchain.com/docs/modules/chains/document/map_reduce) from the LangChain framework to generate a summarized, high-quality context using an LLM (Google PaLM in this case).\n",
    "5. Finally, pass the context to an LLM prompt to answer the user query. The LLM\n",
    "   model will return a well-formatted natural sounding English result back to\n",
    "   the user.\n",
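     "\n",
     "As an illustration of step 3: in pgvector, the `<=>` operator returns the cosine distance between two vectors, so `1 - (embedding <=> query)` is their cosine similarity. Here is a minimal sketch of the equivalent computation in plain NumPy (illustrative only, no database needed):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "\n",
     "def cosine_similarity(a, b):\n",
     "    # Equivalent to 1 - (a <=> b) in pgvector.\n",
     "    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)\n",
     "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
     "\n",
     "\n",
     "# Identical vectors score 1.0; orthogonal vectors score 0.0.\n",
     "print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))\n",
     "print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))\n",
     "```\n",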
    "\n",
    "\n",
    "Let's dive in!\n",
    "\n",
    "---\n",
    "\n",
    "&nbsp;\n",
    "&nbsp;\n",
    "&nbsp;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9ZlIikgCLyXk"
   },
   "source": [
    "## Before you begin\n",
    "\n",
     ">⚠️ **Running this codelab will incur Google Cloud charges. You may also be billed for Vertex AI API usage.**\n",
     "\n",
     "Prerequisites:\n",
     "- You need to have an active Google Cloud account to successfully complete this tutorial.\n",
     "- This sample notebook must be connected to a **Google Cloud project**; nothing else is needed.\n",
     "- You can use an existing project. Alternatively, you can create a new Cloud project with [free trial cloud credits](https://cloud.google.com/free/docs/gcp-free-tier).\n",
     "- You can use an existing Cloud SQL PostgreSQL instance for this tutorial. If no existing instance is found, this tutorial will create one for you automatically.\n",
     "- Note that this notebook connects to the Cloud SQL instance via public IP using the [Cloud SQL Python connector](https://cloud.google.com/blog/topics/developers-practitioners/how-connect-cloud-sql-using-python-easy-way). Therefore, your Cloud SQL instance should have a public IP assigned to it.\n",
     "- At the end of the tutorial, you can optionally clean up these resources to avoid further charges.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OkpRH6SS42vm"
   },
   "source": [
    "### Using this interactive notebook\n",
    "\n",
    "Click the **run** icon on the top left corner ▶️  of each cell within this notebook.\n",
    "\n",
    "> 💡 Alternatively, you can run the currently selected cell with `Ctrl + Enter` (or `⌘ + Enter` on a Mac).\n",
    "\n",
     "> ⚠️ **To avoid any errors**, wait for each cell to finish in order before clicking the next “run” icon."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_nqImIYGf-yG"
   },
   "source": [
    "## Setup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OHZVIMDm3djR"
   },
   "source": [
    "### Install required packages\n",
    "\n",
     ">⚠️ You may receive a warning to \"Restart Runtime\" after the packages are installed. Don't worry; the subsequent cells will help you restart the runtime."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jI_1BhMx3iX0"
   },
   "outputs": [],
   "source": [
    "# Install dependencies.\n",
    "!pip install asyncio==3.4.3 asyncpg==0.27.0 cloud-sql-python-connector[\"asyncpg\"]==1.2.3\n",
    "!pip install numpy==1.22.4 pandas==1.5.3\n",
    "!pip install pgvector==0.1.8\n",
    "!pip install langchain==0.0.196 transformers==4.30.1\n",
    "!pip install google-cloud-aiplatform==1.26.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "E4ITjLkp4ME0"
   },
   "outputs": [],
   "source": [
    "# Automatically restart kernel after installs so that your environment\n",
    "# can access the new packages.\n",
    "import IPython\n",
    "\n",
    "app = IPython.Application.instance()\n",
    "app.kernel.do_shutdown(True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "CX0ssSs11bYz"
   },
   "source": [
    "### Setup Google Cloud environment\n",
    "\n",
    ">⚠️ Please fill in your **Google Cloud project ID** and a new **password** for your Cloud SQL PostgreSQL database."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "5Zln92ee1xvG"
   },
   "outputs": [],
   "source": [
    "# @markdown Replace the required placeholder text below. You can modify any other default values, if you like.\n",
    "\n",
    "# Please fill in these values.\n",
    "project_id = \"[YOUR_PROJECT_ID  **REQUIRED**]\"  # @param {type:\"string\"}\n",
    "database_password = \"[YOUR_PASSWORD  **REQUIRED**]\"  # @param {type:\"string\"}\n",
    "region = \"us-west2\"  # @param {type:\"string\"}\n",
    "instance_name = \"pg15-pgvector-demo\"  # @param {type:\"string\"}\n",
    "database_name = \"retail\"  # @param {type:\"string\"}\n",
    "database_user = \"retail-admin\"  # @param {type:\"string\"}\n",
    "\n",
    "\n",
    "# Quick input validations.\n",
    "assert project_id, \"⚠️ Please provide a Google Cloud project ID\"\n",
    "assert region, \"⚠️ Please provide a Google Cloud region\"\n",
    "assert instance_name, \"⚠️ Please provide the name of your instance\"\n",
    "assert database_name, \"⚠️ Please provide a database name\"\n",
    "assert database_user, \"⚠️ Please provide a database user\"\n",
    "assert database_password, \"⚠️ Please provide a database password\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "ni20S52G2HLi"
   },
   "outputs": [],
   "source": [
    "#@markdown ###Authenticate your Google Cloud Account and enable APIs.\n",
    "# Authenticate gcloud.\n",
    "from google.colab import auth\n",
    "auth.authenticate_user()\n",
    "\n",
    "# Configure gcloud.\n",
    "!gcloud config set project {project_id}\n",
    "\n",
    "# Grant Cloud SQL Client role to authenticated user\n",
    "current_user = !gcloud auth list --filter=status:ACTIVE --format=\"value(account)\"\n",
    "\n",
    "!gcloud projects add-iam-policy-binding {project_id} \\\n",
    "  --member=user:{current_user[0]} \\\n",
    "  --role=\"roles/cloudsql.client\"\n",
    "\n",
    "\n",
    "# Enable Cloud SQL Admin API\n",
    "!gcloud services enable sqladmin.googleapis.com\n",
    "!gcloud services enable aiplatform.googleapis.com"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "oYE38EHefzjj"
   },
   "source": [
    "### Setup Cloud SQL instance and PostgreSQL database"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "v3UzoWgEelyT"
   },
   "outputs": [],
   "source": [
    "#@markdown Create and setup a Cloud SQL PostgreSQL instance, if not done already.\n",
    "database_version = !gcloud sql instances describe {instance_name} --format=\"value(databaseVersion)\"\n",
     "if database_version and database_version[0].startswith(\"POSTGRES\"):\n",
    "  print(\"Found an existing Postgres Cloud SQL Instance!\")\n",
    "else:\n",
    "  print(\"Creating new Cloud SQL instance...\")\n",
    "  !gcloud sql instances create {instance_name} --database-version=POSTGRES_15 \\\n",
    "    --region={region} --cpu=1 --memory=4GB --root-password={database_password}\n",
    "\n",
    "# Create the database, if it does not exist.\n",
    "out = !gcloud sql databases list --instance={instance_name} --filter=\"NAME:{database_name}\" --format=\"value(NAME)\"\n",
    "if ''.join(out) == database_name:\n",
     "  print(f\"Database {database_name} already exists, skipping creation.\")\n",
    "else:\n",
    "  !gcloud sql databases create {database_name} --instance={instance_name}\n",
    "\n",
    "# Create the database user for accessing the database.\n",
    "!gcloud sql users create {database_user} \\\n",
    "  --instance={instance_name} \\\n",
    "  --password={database_password}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "h9f8iQAXfdai"
   },
   "outputs": [],
   "source": [
    "# @markdown Verify that you are able to connect to the database. Executing this block should print the current PostgreSQL server version.\n",
    "\n",
    "import asyncio\n",
    "import asyncpg\n",
    "from google.cloud.sql.connector import Connector\n",
    "\n",
    "\n",
    "async def main():\n",
    "    # get current running event loop to be used with Connector\n",
    "    loop = asyncio.get_running_loop()\n",
    "    # initialize Connector object as async context manager\n",
    "    async with Connector(loop=loop) as connector:\n",
    "        # create connection to Cloud SQL database\n",
    "        conn: asyncpg.Connection = await connector.connect_async(\n",
    "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
    "            \"asyncpg\",\n",
    "            user=f\"{database_user}\",\n",
    "            password=f\"{database_password}\",\n",
    "            db=f\"{database_name}\"\n",
    "            # ... additional database driver args\n",
    "        )\n",
    "\n",
    "        # query Cloud SQL database\n",
    "        results = await conn.fetch(\"SELECT version()\")\n",
    "        print(results[0][\"version\"])\n",
    "\n",
    "        # close asyncpg connection\n",
    "        await conn.close()\n",
    "\n",
    "\n",
    "# Test connection with `asyncio`\n",
    "await main()  # type: ignore"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Mj5N8V1CgLJ4"
   },
   "source": [
    "## Prepare data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "hCO82M0i6TiD"
   },
   "source": [
    "### Download and load the dataset in PostgreSQL"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "pYaxNic_DIL6"
   },
   "outputs": [],
   "source": [
    "# Load dataset from a web URL and store it in a pandas dataframe.\n",
    "\n",
    "import pandas as pd\n",
    "import os\n",
    "\n",
    "DATASET_URL = \"https://github.com/GoogleCloudPlatform/python-docs-samples/raw/main/cloud-sql/postgres/pgvector/data/retail_toy_dataset.csv\"\n",
    "df = pd.read_csv(DATASET_URL)\n",
    "df = df.loc[:, [\"product_id\", \"product_name\", \"description\", \"list_price\"]]\n",
    "df = df.dropna()\n",
    "df.head(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "amOgB-9yJ-jf"
   },
   "outputs": [],
   "source": [
    "# Save the Pandas dataframe in a PostgreSQL table.\n",
    "\n",
    "import asyncio\n",
    "import asyncpg\n",
    "from google.cloud.sql.connector import Connector\n",
    "\n",
    "\n",
    "async def main():\n",
    "    loop = asyncio.get_running_loop()\n",
    "    async with Connector(loop=loop) as connector:\n",
    "        # Create connection to Cloud SQL database\n",
    "        conn: asyncpg.Connection = await connector.connect_async(\n",
    "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
    "            \"asyncpg\",\n",
    "            user=f\"{database_user}\",\n",
    "            password=f\"{database_password}\",\n",
    "            db=f\"{database_name}\",\n",
    "        )\n",
    "\n",
    "        await conn.execute(\"DROP TABLE IF EXISTS products CASCADE\")\n",
    "        # Create the `products` table.\n",
    "        await conn.execute(\n",
    "            \"\"\"CREATE TABLE products(\n",
    "                                product_id VARCHAR(1024) PRIMARY KEY,\n",
    "                                product_name TEXT,\n",
    "                                description TEXT,\n",
    "                                list_price NUMERIC)\"\"\"\n",
    "        )\n",
    "\n",
    "        # Copy the dataframe to the `products` table.\n",
    "        tuples = list(df.itertuples(index=False))\n",
    "        await conn.copy_records_to_table(\n",
    "            \"products\", records=tuples, columns=list(df), timeout=10\n",
    "        )\n",
    "        await conn.close()\n",
    "\n",
    "\n",
    "# Run the SQL commands now.\n",
    "await main()  # type: ignore"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "sP9MDFiIgVoV"
   },
   "source": [
    "## Vector Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1zutD18TzudB"
   },
   "source": [
    "### Generate vector embeddings using a Text Embedding model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9EphlPxXnMTf"
   },
   "source": [
    "Step 1: Split long product description text into smaller chunks\n",
    "\n",
    "- The product descriptions can be much longer than what can fit into a single API request for generating the vector embedding.\n",
    "\n",
     "- For example, the Vertex AI text embedding model accepts a maximum of 3,072 input tokens for a single API request.\n",
     "\n",
     "- Use the `RecursiveCharacterTextSplitter` from the LangChain library to split\n",
     "the description into smaller chunks of 500 characters each."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "g_geQnFh0XML"
   },
   "outputs": [],
   "source": [
    "# Split long text descriptions into smaller chunks that can fit into\n",
    "# the API request size limit, as expected by the LLM providers.\n",
    "\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(\n",
    "    separators=[\".\", \"\\n\"],\n",
    "    chunk_size=500,\n",
    "    chunk_overlap=0,\n",
    "    length_function=len,\n",
    ")\n",
    "chunked = []\n",
    "for index, row in df.iterrows():\n",
    "    product_id = row[\"product_id\"]\n",
    "    desc = row[\"description\"]\n",
    "    splits = text_splitter.create_documents([desc])\n",
    "    for s in splits:\n",
    "        r = {\"product_id\": product_id, \"content\": s.page_content}\n",
    "        chunked.append(r)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PzrarngXnkOk"
   },
   "source": [
    "Step 2: Generate vector embedding for each chunk by calling an Embedding Generation service\n",
    "\n",
     "- In this demo, the Vertex AI text embedding model is used to generate vector embeddings; it outputs a 768-dimensional vector for each chunk of text.\n",
    "\n",
    ">⚠️ The following code snippet may run for a few minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jsK_9ARUHTIx"
   },
   "outputs": [],
   "source": [
    "# Generate the vector embeddings for each chunk of text.\n",
    "# This code snippet may run for a few minutes.\n",
    "\n",
    "from langchain.embeddings import VertexAIEmbeddings\n",
    "from google.cloud import aiplatform\n",
    "import time\n",
    "\n",
    "aiplatform.init(project=f\"{project_id}\", location=f\"{region}\")\n",
    "embeddings_service = VertexAIEmbeddings()\n",
    "\n",
    "\n",
    "# Helper function to retry failed API requests with exponential backoff.\n",
     "def retry_with_backoff(func, *args, retry_delay=5, backoff_factor=2, **kwargs):\n",
     "    max_attempts = 10\n",
     "    for attempt in range(max_attempts):\n",
     "        try:\n",
     "            return func(*args, **kwargs)\n",
     "        except Exception as e:\n",
     "            print(f\"error: {e}\")\n",
     "            if attempt == max_attempts - 1:\n",
     "                # Surface the error instead of silently returning None.\n",
     "                raise\n",
     "            wait = retry_delay * (backoff_factor ** (attempt + 1))\n",
     "            print(f\"Retry after waiting for {wait} seconds...\")\n",
     "            time.sleep(wait)\n",
    "\n",
    "\n",
    "batch_size = 5\n",
    "for i in range(0, len(chunked), batch_size):\n",
    "    request = [x[\"content\"] for x in chunked[i : i + batch_size]]\n",
    "    response = retry_with_backoff(embeddings_service.embed_documents, request)\n",
    "    # Store the retrieved vector embeddings for each chunk back.\n",
    "    for x, e in zip(chunked[i : i + batch_size], response):\n",
    "        x[\"embedding\"] = e\n",
    "\n",
    "# Store the generated embeddings in a pandas dataframe.\n",
    "product_embeddings = pd.DataFrame(chunked)\n",
    "product_embeddings.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9f68jtfnNqck"
   },
   "source": [
    "### Use pgvector to store the generated embeddings within PostgreSQL\n",
    "\n",
    "- The `pgvector` extension introduces a new `vector` data type.\n",
    "- **The new `vector` data type allows you to directly save a vector embedding (represented as a NumPy array) through a simple INSERT statement in PostgreSQL!**\n",
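     "\n",
     "For context: pgvector stores and displays vectors in a simple bracketed text format, and the registered `asyncpg` codec converts NumPy arrays to and from that form for you. A rough, illustrative sketch of that text representation (you never need to build it yourself; the codec handles it automatically):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "\n",
     "def to_pgvector_text(arr):\n",
     "    # pgvector's text form looks like '[x1,x2,...]'.\n",
     "    return '[' + ','.join(str(float(x)) for x in np.asarray(arr)) + ']'\n",
     "\n",
     "\n",
     "print(to_pgvector_text(np.array([0.1, 0.2, 0.3])))\n",
     "```\n",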
    "\n",
    ">⚠️ The following code snippet may run for a few minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "hZKe_9qeRdMH"
   },
   "outputs": [],
   "source": [
    "# Store the generated vector embeddings in a PostgreSQL table.\n",
    "# This code may run for a few minutes.\n",
    "\n",
    "import asyncio\n",
    "import asyncpg\n",
    "from google.cloud.sql.connector import Connector\n",
    "import numpy as np\n",
    "from pgvector.asyncpg import register_vector\n",
    "\n",
    "\n",
    "async def main():\n",
    "    loop = asyncio.get_running_loop()\n",
    "    async with Connector(loop=loop) as connector:\n",
    "        # Create connection to Cloud SQL database.\n",
    "        conn: asyncpg.Connection = await connector.connect_async(\n",
    "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
    "            \"asyncpg\",\n",
    "            user=f\"{database_user}\",\n",
    "            password=f\"{database_password}\",\n",
    "            db=f\"{database_name}\",\n",
    "        )\n",
    "\n",
    "        await conn.execute(\"CREATE EXTENSION IF NOT EXISTS vector\")\n",
    "        await register_vector(conn)\n",
    "\n",
    "        await conn.execute(\"DROP TABLE IF EXISTS product_embeddings\")\n",
    "        # Create the `product_embeddings` table to store vector embeddings.\n",
    "        await conn.execute(\n",
    "            \"\"\"CREATE TABLE product_embeddings(\n",
    "                                product_id VARCHAR(1024) NOT NULL REFERENCES products(product_id),\n",
    "                                content TEXT,\n",
    "                                embedding vector(768))\"\"\"\n",
    "        )\n",
    "\n",
    "        # Store all the generated embeddings back into the database.\n",
    "        for index, row in product_embeddings.iterrows():\n",
    "            await conn.execute(\n",
    "                \"INSERT INTO product_embeddings (product_id, content, embedding) VALUES ($1, $2, $3)\",\n",
    "                row[\"product_id\"],\n",
    "                row[\"content\"],\n",
    "                np.array(row[\"embedding\"]),\n",
    "            )\n",
    "\n",
    "        await conn.close()\n",
    "\n",
    "\n",
    "# Run the SQL commands now.\n",
    "await main()  # type: ignore"
   ]
  },
  {
    "cell_type": "markdown",
    "metadata": {
      "id": "Ley1ZGjnG5Fr"
    },
    "source": [
      "### Create indexes for faster similarity search in pgvector\n",
      "\n",
       "- Vector indexes can significantly speed up similarity search operations and avoid the brute-force exact nearest-neighbor search that is used by default.\n",
      "\n",
      "- pgvector comes with two types of indexes (as of v0.5.1): `hnsw` and `ivfflat`.\n",
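       "\n",
       "- At query time you can trade speed for recall per session (assuming pgvector v0.5+): `hnsw.ef_search` tunes HNSW searches and `ivfflat.probes` tunes IVFFlat searches. For example:\n",
       "\n",
       "```sql\n",
       "-- Higher values improve recall at the cost of query speed.\n",
       "SET hnsw.ef_search = 100;   -- for HNSW indexes\n",
       "SET ivfflat.probes = 10;    -- for IVFFlat indexes\n",
       "```\n",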
      "\n",
      "> 💡 Click [here](https://cloud.google.com/blog/products/databases/faster-similarity-search-performance-with-pgvector-indexes) to learn more about pgvector indexes.\n",
      "\n",
      "Enter or modify the values of index parameters for your index of choice and run the corresponding cell:"
    ]
  },
  {
    "cell_type": "code",
    "source": [
      "# @markdown Create an HNSW index on the `product_embeddings` table:\n",
       "m = 24  # @param {type:\"integer\"}\n",
       "ef_construction = 100  # @param {type:\"integer\"}\n",
       "operator = \"vector_cosine_ops\"  # @param [\"vector_cosine_ops\", \"vector_l2_ops\", \"vector_ip_ops\"]\n",
      "\n",
      "# Quick input validations.\n",
      "assert m, \"⚠️ Please input a valid value for m.\"\n",
      "assert ef_construction, \"⚠️ Please input a valid value for ef_construction.\"\n",
      "assert operator, \"⚠️ Please input a valid value for operator.\"\n",
      "\n",
      "from pgvector.asyncpg import register_vector\n",
      "import asyncio\n",
      "import asyncpg\n",
      "from google.cloud.sql.connector import Connector\n",
      "\n",
      "\n",
      "async def main():\n",
      "    loop = asyncio.get_running_loop()\n",
      "    async with Connector(loop=loop) as connector:\n",
      "        # Create connection to Cloud SQL database.\n",
      "        conn: asyncpg.Connection = await connector.connect_async(\n",
      "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
      "            \"asyncpg\",\n",
      "            user=f\"{database_user}\",\n",
      "            password=f\"{database_password}\",\n",
      "            db=f\"{database_name}\",\n",
      "        )\n",
      "\n",
      "        await register_vector(conn)\n",
      "\n",
      "        # Create an HNSW index on the `product_embeddings` table.\n",
      "        await conn.execute(\n",
      "            f\"\"\"CREATE INDEX ON product_embeddings\n",
      "              USING hnsw(embedding {operator})\n",
      "              WITH (m = {m}, ef_construction = {ef_construction})\n",
      "            \"\"\"\n",
      "        )\n",
      "\n",
      "        await conn.close()\n",
      "\n",
      "\n",
      "# Run the SQL commands now.\n",
      "await main()  # type: ignore"
    ],
    "metadata": {
      "id": "EJUDntZ1KTk7",
      "cellView": "form"
    },
    "execution_count": null,
    "outputs": []
  },
  {
    "cell_type": "code",
    "source": [
      "# @markdown Create an IVFFLAT index on the `product_embeddings` table:\n",
       "lists = 100  # @param {type:\"integer\"}\n",
       "operator = \"vector_cosine_ops\"  # @param [\"vector_cosine_ops\", \"vector_l2_ops\", \"vector_ip_ops\"]\n",
      "\n",
      "# Quick input validations.\n",
      "assert lists, \"⚠️ Please input a valid value for lists.\"\n",
      "\n",
      "from pgvector.asyncpg import register_vector\n",
      "import asyncio\n",
      "import asyncpg\n",
      "from google.cloud.sql.connector import Connector\n",
      "\n",
      "\n",
      "async def main():\n",
      "    loop = asyncio.get_running_loop()\n",
      "    async with Connector(loop=loop) as connector:\n",
      "        # Create connection to Cloud SQL database.\n",
      "        conn: asyncpg.Connection = await connector.connect_async(\n",
      "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
      "            \"asyncpg\",\n",
      "            user=f\"{database_user}\",\n",
      "            password=f\"{database_password}\",\n",
      "            db=f\"{database_name}\",\n",
      "        )\n",
      "\n",
      "        await register_vector(conn)\n",
      "\n",
      "        # Create an IVFFLAT index on the `product_embeddings` table.\n",
      "        await conn.execute(\n",
      "            f\"\"\"CREATE INDEX ON product_embeddings\n",
      "              USING ivfflat(embedding {operator})\n",
      "              WITH (lists = {lists})\n",
      "            \"\"\"\n",
      "        )\n",
      "\n",
      "        await conn.close()\n",
      "\n",
      "\n",
      "# Run the SQL commands now.\n",
      "await main()  # type: ignore"
    ],
    "metadata": {
      "id": "7kFKBuysMk2I",
      "cellView": "form"
    },
    "execution_count": null,
    "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Lm0dVJeInyfM"
   },
   "source": [
    "### Demo: Finding similar toy products using pgvector cosine search operator\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "_zRBR9YJoENp"
   },
   "outputs": [],
   "source": [
    "# @markdown Enter a short description of the toy to search for within a specified price range:\n",
    "toy = \"playing card games\"  # @param {type:\"string\"}\n",
    "min_price = 25  # @param {type:\"integer\"}\n",
    "max_price = 100  # @param {type:\"integer\"}\n",
    "\n",
    "# Quick input validations.\n",
    "assert toy, \"⚠️ Please input a valid input search text\"\n",
    "\n",
     "import asyncio\n",
     "import asyncpg\n",
     "from google.cloud import aiplatform\n",
     "from google.cloud.sql.connector import Connector\n",
     "from langchain.embeddings import VertexAIEmbeddings\n",
     "from pgvector.asyncpg import register_vector\n",
     "\n",
     "aiplatform.init(project=f\"{project_id}\", location=f\"{region}\")\n",
     "\n",
     "# Generate the vector embedding for the user's query text.\n",
     "# Note: `embed_query` takes a single string, not a list.\n",
     "embeddings_service = VertexAIEmbeddings()\n",
     "qe = embeddings_service.embed_query(toy)\n",
     "\n",
     "matches = []\n",
    "\n",
    "\n",
    "async def main():\n",
    "    loop = asyncio.get_running_loop()\n",
    "    async with Connector(loop=loop) as connector:\n",
    "        # Create connection to Cloud SQL database.\n",
    "        conn: asyncpg.Connection = await connector.connect_async(\n",
    "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
    "            \"asyncpg\",\n",
    "            user=f\"{database_user}\",\n",
    "            password=f\"{database_password}\",\n",
    "            db=f\"{database_name}\",\n",
    "        )\n",
    "\n",
    "        await register_vector(conn)\n",
    "        similarity_threshold = 0.1\n",
    "        num_matches = 50\n",
    "\n",
    "        # Find similar products to the query using cosine similarity search\n",
    "        # over all vector embeddings. This new feature is provided by `pgvector`.\n",
    "        results = await conn.fetch(\n",
    "            \"\"\"\n",
    "                            WITH vector_matches AS (\n",
    "                              SELECT product_id, 1 - (embedding <=> $1) AS similarity\n",
    "                              FROM product_embeddings\n",
    "                              WHERE 1 - (embedding <=> $1) > $2\n",
    "                              ORDER BY similarity DESC\n",
    "                              LIMIT $3\n",
    "                            )\n",
    "                            SELECT product_name, list_price, description FROM products\n",
    "                            WHERE product_id IN (SELECT product_id FROM vector_matches)\n",
    "                            AND list_price >= $4 AND list_price <= $5\n",
    "                            \"\"\",\n",
    "            qe,\n",
    "            similarity_threshold,\n",
    "            num_matches,\n",
    "            min_price,\n",
    "            max_price,\n",
    "        )\n",
    "\n",
    "        if len(results) == 0:\n",
    "            raise Exception(\"Did not find any results. Adjust the query parameters.\")\n",
    "\n",
    "        for r in results:\n",
    "            # Collect the description for all the matched similar toy products.\n",
    "            matches.append(\n",
    "                {\n",
    "                    \"product_name\": r[\"product_name\"],\n",
    "                    \"description\": r[\"description\"],\n",
    "                    \"list_price\": round(r[\"list_price\"], 2),\n",
    "                }\n",
    "            )\n",
    "\n",
    "        await conn.close()\n",
    "\n",
    "\n",
    "# Run the SQL commands now.\n",
    "await main()  # type: ignore\n",
    "\n",
    "# Show the results for similar products that matched the user query.\n",
    "matches = pd.DataFrame(matches)\n",
    "matches.head(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5z25qzj4HO69"
   },
   "source": [
    "Checkpoint:\n",
    "- We have extracted the semantic knowledge of the dataset and made it searchable through pgvector and PostgreSQL.\n",
    "- The demo will show next how you can use this semantic knowledge to answer complex natural language queries using LLMs."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WtDT5DbCNWoe"
   },
   "source": [
    "## LLMs and LangChain"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qSk0E3hlXSP0"
   },
   "source": [
    "### *Use case 1*: Building an AI-curated contextual hybrid search\n",
    "\n",
    "Combine natural language query text with regular relational filters to create a powerful hybrid search.\n",
    "\n",
    "Example: A grandparent wants to use the **AI-powered search interface** to find an educational toy for their grandkid that fits within their budget."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "0-_AYhlgXYSK"
   },
   "outputs": [],
   "source": [
    "# @markdown Enter the user search query in a simple English text. The price filters are shown separately here for demo purposes. These filters may represent additional input from your frontend application.\n",
    "# Please fill in these values.\n",
    "user_query = \"Do you have a beach toy set that teaches numbers and letters to kids?\"  # @param {type:\"string\"}\n",
    "min_price = 20  # @param {type:\"integer\"}\n",
    "max_price = 100  # @param {type:\"integer\"}\n",
    "\n",
    "# Quick input validations.\n",
    "assert user_query, \"⚠️ Please input a valid input search text\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "fsOl5A9ldRgh"
   },
   "source": [
    "Step 1: Generate the vector embedding for the user query"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Y83GRy7jdYRa"
   },
   "outputs": [],
   "source": [
    "qe = embeddings_service.embed_query([user_query])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ovmRE0updh16"
   },
   "source": [
    "Step 2: Use `pgvector` to find similar products\n",
    "\n",
    "- The new `pgvector` similarity search operators provide powerful semantics\n",
    "to combine the vector search operation with regular query filters in a single SQL query.\n",
    "- **Using pgvector, you can now seamlessly integrate the power of relational databases with your vector search operations!**\n"
   ]
  },
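  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: `<=>` is the pgvector cosine distance operator, so `1 - (embedding <=> $1)` is the cosine similarity between a stored embedding and the query embedding. As a minimal local sketch (no database needed, hypothetical vectors), the same score can be computed in pure Python:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Equivalent to 1 - (a <=> b) in pgvector: dot(a, b) / (|a| * |b|).\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "print(cosine_similarity([1.0, 2.0], [1.0, 2.0]))  # identical direction -> 1.0\n",
    "```"
   ]
  },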
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "HJyqPE9XYCya"
   },
   "outputs": [],
   "source": [
    "from pgvector.asyncpg import register_vector\n",
    "import asyncio\n",
    "import asyncpg\n",
    "from google.cloud.sql.connector import Connector\n",
    "\n",
    "matches = []\n",
    "\n",
    "\n",
    "async def main():\n",
    "    loop = asyncio.get_running_loop()\n",
    "    async with Connector(loop=loop) as connector:\n",
    "        # Create connection to Cloud SQL database.\n",
    "        conn: asyncpg.Connection = await connector.connect_async(\n",
    "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
    "            \"asyncpg\",\n",
    "            user=f\"{database_user}\",\n",
    "            password=f\"{database_password}\",\n",
    "            db=f\"{database_name}\",\n",
    "        )\n",
    "\n",
    "        await register_vector(conn)\n",
    "        similarity_threshold = 0.7\n",
    "        num_matches = 5\n",
    "\n",
    "        # Find similar products to the query using cosine similarity search\n",
    "        # over all vector embeddings. This new feature is provided by `pgvector`.\n",
    "        results = await conn.fetch(\n",
    "            \"\"\"\n",
    "                            WITH vector_matches AS (\n",
    "                              SELECT product_id, 1 - (embedding <=> $1) AS similarity\n",
    "                              FROM product_embeddings\n",
    "                              WHERE 1 - (embedding <=> $1) > $2\n",
    "                              ORDER BY similarity DESC\n",
    "                              LIMIT $3\n",
    "                            )\n",
    "                            SELECT product_name, list_price, description FROM products\n",
    "                            WHERE product_id IN (SELECT product_id FROM vector_matches)\n",
    "                            AND list_price >= $4 AND list_price <= $5\n",
    "                            \"\"\",\n",
    "            qe,\n",
    "            similarity_threshold,\n",
    "            num_matches,\n",
    "            min_price,\n",
    "            max_price,\n",
    "        )\n",
    "\n",
    "        if len(results) == 0:\n",
    "            raise Exception(\"Did not find any results. Adjust the query parameters.\")\n",
    "\n",
    "        for r in results:\n",
    "            # Collect the description for all the matched similar toy products.\n",
    "            matches.append(\n",
    "                f\"\"\"The name of the toy is {r[\"product_name\"]}.\n",
    "                          The price of the toy is ${round(r[\"list_price\"], 2)}.\n",
    "                          Its description is below:\n",
    "                          {r[\"description\"]}.\"\"\"\n",
    "            )\n",
    "        await conn.close()\n",
    "\n",
    "\n",
    "# Run the SQL commands now.\n",
    "await main()  # type: ignore\n",
    "\n",
    "# Show the results for similar products that matched the user query.\n",
    "matches"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "3oiqKZBKe4jq"
   },
   "source": [
    "Step 3: Use LangChain to summarize and generate a high-quality prompt to answer the user query\n",
    "\n",
    "- After finding the similar products and their descriptions using `pgvector`, the next step is to use them for generating a prompt input for the LLM model.\n",
    "- Since individual product descriptions can be very long, they may not fit within the specified input payload limit for an LLM model.\n",
    "- The `MapReduceChain` from LangChain framework is used to generate and combine short summaries of similarly matched products.\n",
    "- The combined summaries are then used to build a high-quality prompt for an input to the LLM model."
   ]
  },
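  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The map-reduce flow above can be sketched without LangChain: each matched description is summarized on its own (map), and the short summaries are then joined into one compact context (reduce). A minimal illustration, with a stub summarizer standing in for the per-document LLM call:\n",
    "\n",
    "```python\n",
    "def map_step(doc, max_len=80):\n",
    "    # Stand-in for the per-document LLM summarization call.\n",
    "    return doc[:max_len]\n",
    "\n",
    "def reduce_step(summaries):\n",
    "    # Combine the short summaries into a single prompt context.\n",
    "    return \"\\n\".join(summaries)\n",
    "\n",
    "docs = [\"A long toy description. \" * 50, \"Another long description. \" * 50]\n",
    "context = reduce_step([map_step(d) for d in docs])\n",
    "print(len(context))  # bounded by num_docs * max_len, plus separators\n",
    "```"
   ]
  },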
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jPPUSBKb1Soc"
   },
   "outputs": [],
   "source": [
    "# Using LangChain for summarization and efficient context building.\n",
    "\n",
    "from langchain.chains.summarize import load_summarize_chain\n",
    "from langchain.docstore.document import Document\n",
    "from langchain.llms import VertexAI\n",
    "from langchain import PromptTemplate, LLMChain\n",
    "from IPython.display import display, Markdown\n",
    "\n",
    "llm = VertexAI()\n",
    "\n",
    "map_prompt_template = \"\"\"\n",
    "              You will be given a detailed description of a toy product.\n",
    "              This description is enclosed in triple backticks (```).\n",
    "              Using this description only, extract the name of the toy,\n",
    "              the price of the toy and its features.\n",
    "\n",
    "              ```{text}```\n",
    "              SUMMARY:\n",
    "              \"\"\"\n",
    "map_prompt = PromptTemplate(template=map_prompt_template, input_variables=[\"text\"])\n",
    "\n",
    "combine_prompt_template = \"\"\"\n",
    "                You will be given a detailed description different toy products\n",
    "                enclosed in triple backticks (```) and a question enclosed in\n",
    "                double backticks(``).\n",
    "                Select one toy that is most relevant to answer the question.\n",
    "                Using that selected toy description, answer the following\n",
    "                question in as much detail as possible.\n",
    "                You should only use the information in the description.\n",
    "                Your answer should include the name of the toy, the price of the toy\n",
    "                and its features. Your answer should be less than 200 words.\n",
    "                Your answer should be in Markdown in a numbered list format.\n",
    "\n",
    "\n",
    "                Description:\n",
    "                ```{text}```\n",
    "\n",
    "\n",
    "                Question:\n",
    "                ``{user_query}``\n",
    "\n",
    "\n",
    "                Answer:\n",
    "                \"\"\"\n",
    "combine_prompt = PromptTemplate(\n",
    "    template=combine_prompt_template, input_variables=[\"text\", \"user_query\"]\n",
    ")\n",
    "\n",
    "docs = [Document(page_content=t) for t in matches]\n",
    "chain = load_summarize_chain(\n",
    "    llm, chain_type=\"map_reduce\", map_prompt=map_prompt, combine_prompt=combine_prompt\n",
    ")\n",
    "answer = chain.run(\n",
    "    {\n",
    "        \"input_documents\": docs,\n",
    "        \"user_query\": user_query,\n",
    "    }\n",
    ")\n",
    "\n",
    "\n",
    "display(Markdown(answer))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "aMLXHsMcme8D"
   },
   "source": [
    "### _Use case 2_: Adding AI-powered creative content generation\n",
    "\n",
    "Use knowledge from the existing dataset to generate new AI-powered content from an initial prompt.\n",
    "\n",
    "Example: A third-party seller on the retail platform wants to use the **AI-powered content generation** to create a detailed description of their new bicycle product."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "zTKKx8dRneGT"
   },
   "outputs": [],
   "source": [
    "# @markdown Describe your a new product in just a few words:\n",
    "# Please fill in these values.\n",
    "creative_prompt = \"A bicycle with brand name 'Roadstar bike' for kids that comes with training wheels and helmet.\"  # @param {type:\"string\"}\n",
    "\n",
    "# Quick input validations.\n",
    "assert creative_prompt, \"⚠️ Please input a valid input search text\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Saer6XxA5wag"
   },
   "source": [
    "Step 1: Find an existing product description matching the initial prompt\n",
    "\n",
    "- Leverage the `pgvector` similarity search operator to find an existing\n",
    "product description that closely matches the new product specified\n",
    "in the initial prompt."
   ]
  },
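  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The query in the next cell keeps only the single closest match above the similarity threshold (`LIMIT 1`). The same filter-then-top-1 logic, illustrated in pure Python with hypothetical similarity scores:\n",
    "\n",
    "```python\n",
    "similarity_threshold = 0.7\n",
    "scores = {\"toy_a\": 0.91, \"toy_b\": 0.65, \"toy_c\": 0.78}  # hypothetical similarities\n",
    "\n",
    "# Keep candidates above the threshold, then take the best one.\n",
    "candidates = {k: v for k, v in scores.items() if v > similarity_threshold}\n",
    "best = max(candidates, key=candidates.get)\n",
    "print(best)  # toy_a\n",
    "```"
   ]
  },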
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "GYWxaolhyakt"
   },
   "outputs": [],
   "source": [
    "from pgvector.asyncpg import register_vector\n",
    "import asyncio\n",
    "import asyncpg\n",
    "from google.cloud.sql.connector import Connector\n",
    "\n",
    "qe = embeddings_service.embed_query([creative_prompt])\n",
    "qe_str = \"[%s]\" % (\",\".join([str(x) for x in qe]))\n",
    "matches = []\n",
    "\n",
    "\n",
    "async def main():\n",
    "    loop = asyncio.get_running_loop()\n",
    "    async with Connector(loop=loop) as connector:\n",
    "        # Create connection to Cloud SQL database.\n",
    "        conn: asyncpg.Connection = await connector.connect_async(\n",
    "            f\"{project_id}:{region}:{instance_name}\",  # Cloud SQL instance connection name\n",
    "            \"asyncpg\",\n",
    "            user=f\"{database_user}\",\n",
    "            password=f\"{database_password}\",\n",
    "            db=f\"{database_name}\",\n",
    "        )\n",
    "\n",
    "        await register_vector(conn)\n",
    "        similarity_threshold = 0.7\n",
    "\n",
    "        # Find similar products to the query using cosine similarity search\n",
    "        # over all vector embeddings. This new feature is provided by `pgvector`.\n",
    "        results = await conn.fetch(\n",
    "            \"\"\"\n",
    "                            WITH vector_matches AS (\n",
    "                              SELECT product_id, 1 - (embedding <=> $1) AS similarity\n",
    "                              FROM product_embeddings\n",
    "                              WHERE 1 - (embedding <=> $2) > $3\n",
    "                              ORDER BY similarity DESC\n",
    "                              LIMIT 1\n",
    "                            )\n",
    "                            SELECT description FROM products\n",
    "                            WHERE product_id IN (SELECT product_id FROM vector_matches)\n",
    "                            \"\"\",\n",
    "            qe,\n",
    "            qe,\n",
    "            similarity_threshold,\n",
    "        )\n",
    "\n",
    "        for r in results:\n",
    "            matches.append(r[\"description\"])\n",
    "\n",
    "        await conn.close()\n",
    "\n",
    "\n",
    "# Run the SQL commands now.\n",
    "await main()  # type: ignore\n",
    "\n",
    "# Show the matched product description.\n",
    "matches"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Qm-y-J6kZHS_"
   },
   "source": [
    "Step 2: Use the existing matched product description as the prompt context to generate new creative output from the LLM.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "xelu9mHfocNV"
   },
   "outputs": [],
   "source": [
    "from langchain.llms import VertexAI\n",
    "from langchain import PromptTemplate, LLMChain\n",
    "from IPython.display import display, Markdown\n",
    "\n",
    "template = \"\"\"\n",
    "            You are given descriptions about some similar kind of toys in the context.\n",
    "            This context is enclosed in triple backticks (```).\n",
    "            Combine these descriptions and adapt them to match the specifications in\n",
    "            the initial prompt. All the information from the initial prompt must\n",
    "            be included. You are allowed to be as creative as possible,\n",
    "            and describe the new toy in as much detail. Your answer should be\n",
    "            less than 200 words.\n",
    "\n",
    "            Context:\n",
    "            ```{context}```\n",
    "\n",
    "            Initial Prompt:\n",
    "            {creative_prompt}\n",
    "\n",
    "            Answer:\n",
    "        \"\"\"\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=template, input_variables=[\"context\", \"creative_prompt\"]\n",
    ")\n",
    "\n",
    "# Increase the `temperature` to allow more creative writing freedom.\n",
    "llm = VertexAI(temperature=0.7)\n",
    "\n",
    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
    "answer = llm_chain.run(\n",
    "    {\n",
    "        \"context\": \"\\n\".join(matches),\n",
    "        \"creative_prompt\": creative_prompt,\n",
    "    }\n",
    ")\n",
    "\n",
    "\n",
    "display(Markdown(answer))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gVGWjeALHJN8"
   },
   "source": [
    "##  (Optional) Cleaning up\n",
    "\n",
    "That's it! We are at the end of this tutorial. If you want, you can now delete your Cloud SQL instance by running the following code snippet."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "jmJFu0C8IAmg"
   },
   "outputs": [],
   "source": [
    "# @markdown Clean-up and delete the Cloud SQL instance.\n",
    "!gcloud sql instances patch {instance_name} --no-deletion-protection\n",
    "!gcloud sql instances delete {instance_name} --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PtIl7N4v0Nuj"
   },
   "source": [
    "Generative AI is a powerful paradigm shift in application development that lets you create novel applications to serve users in new ways - [from answering patients' complex medical questions](https://cloud.google.com/blog/topics/healthcare-life-sciences/sharing-google-med-palm-2-medical-large-language-model) to [helping enterprises analyze cyberattacks](https://cloud.google.com/blog/products/identity-security/rsa-google-cloud-security-ai-workbench-generative-ai). In this demo, we showed you just two examples of powerful features that you can create by combining LLMs and databases.\n",
    "\n",
    "We can't wait to see what you build with it! 🚀"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "last_runtime": {
    "build_target": "",
    "kind": "local"
   },
   "private_outputs": true,
   "provenance": [
    {
     "file_id": "1eR07nr068aU1aJ9pOSuy7bw6tS_3wn_1",
     "timestamp": 1687467279543
    }
   ],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
