{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "E_RJy7C1bpCT"
   },
   "source": [
    "# Google Vertex AI Feature Store\n",
    "\n",
     "> [Google Cloud Vertex AI Feature Store](https://cloud.google.com/vertex-ai/docs/featurestore/latest/overview) streamlines your ML feature management and online serving processes by letting you serve your [Google Cloud BigQuery](https://cloud.google.com/bigquery?hl=en) data at low latency, including the capacity to perform approximate nearest neighbor retrieval for embeddings.\n",
    "\n",
    "\n",
     "This tutorial shows you how to perform low-latency vector search and approximate nearest neighbor retrieval directly on your BigQuery data, enabling powerful ML applications with minimal setup. We will do that using the `VertexFSVectorStore` class.\n",
     "\n",
     "This class is one of a pair of classes that provide unified data storage and flexible vector search in Google Cloud:\n",
     "- **BigQuery Vector Search**: the `BigQueryVectorStore` class is ideal for rapid prototyping with no infrastructure setup, and for batch retrieval.\n",
     "- **Feature Store Online Store**: the `VertexFSVectorStore` class enables low-latency retrieval with manual or scheduled data sync. Perfect for production-ready, user-facing GenAI applications.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "EmPJkpOCckyh"
   },
   "source": [
    "## Getting started\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "IR54BmgvdHT_"
   },
   "source": [
    "### Install the library"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "0ZITIDE160OD"
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet  langchain langchain-google-vertexai \"langchain-google-community[featurestore]\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "v40bB_GMcr9f"
   },
   "source": [
    "To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which restarts the current kernel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6o0iGVIdDD6K"
   },
   "outputs": [],
   "source": [
    "import IPython\n",
    "\n",
    "app = IPython.Application.instance()\n",
    "app.kernel.do_shutdown(True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "be453ee45565"
   },
   "source": [
    "## Before you begin"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "fb8ebc778a77"
   },
   "source": [
    "#### Set your project ID\n",
    "\n",
    "If you don't know your project ID, try the following:\n",
    "* Run `gcloud config list`.\n",
    "* Run `gcloud projects list`.\n",
    "* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3f00771c2519"
   },
   "outputs": [],
   "source": [
    "PROJECT_ID = \"\"  # @param {type:\"string\"}\n",
    "\n",
    "# Set the project id\n",
    "! gcloud config set project {PROJECT_ID}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0c8db9870db1"
   },
   "source": [
    "#### Set the region\n",
    "\n",
    "You can also change the `REGION` variable used by BigQuery. Learn more about [BigQuery regions](https://cloud.google.com/bigquery/docs/locations#supported_locations)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "afbc16ea31fc"
   },
   "outputs": [],
   "source": [
    "REGION = \"us-central1\"  # @param {type: \"string\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1af54eb03565"
   },
   "source": [
    "#### Set the dataset and table names\n",
    "\n",
     "These identify the BigQuery dataset and table that will back your vector store."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "259862ba68b1"
   },
   "outputs": [],
   "source": [
    "DATASET = \"my_langchain_dataset\"  # @param {type: \"string\"}\n",
    "TABLE = \"doc_and_vectors\"  # @param {type: \"string\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "445325c9b3bb"
   },
   "source": [
    "### Authenticating your notebook environment\n",
    "\n",
    "- If you are using **Colab** to run this notebook, uncomment the cell below and continue.\n",
    "- If you are using **Vertex AI Workbench**, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "d9ff48bb5b3c"
   },
   "outputs": [],
   "source": [
    "# from google.colab import auth as google_auth\n",
    "\n",
    "# google_auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AD3yG49BdLlr"
   },
   "source": [
    "## Demo: VertexFSVectorStore"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7b62754dfe6b"
   },
   "source": [
    "### Create an embedding class instance\n",
    "\n",
    "You may need to enable Vertex AI API in your project by running\n",
    "`gcloud services enable aiplatform.googleapis.com --project {PROJECT_ID}`\n",
    "(replace `{PROJECT_ID}` with the name of your project).\n",
    "\n",
    "You can use any [LangChain embeddings model](/docs/integrations/text_embedding/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Vb2RJocV9_LQ"
   },
   "outputs": [],
   "source": [
    "from langchain_google_vertexai import VertexAIEmbeddings\n",
    "\n",
    "embedding = VertexAIEmbeddings(\n",
    "    model_name=\"textembedding-gecko@latest\", project=PROJECT_ID\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ee602c76f7e6"
   },
   "source": [
    "### Initialize VertexFSVectorStore\n",
    "\n",
     "The BigQuery dataset and table will be created automatically if they do not exist. See the class definition [here](https://github.com/langchain-ai/langchain-google/blob/main/libs/community/langchain_google_community/bq_storage_vectorstores/featurestore.py#L33) for all optional parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9db48a734ad8"
   },
   "outputs": [],
   "source": [
    "from langchain_google_community import VertexFSVectorStore\n",
    "\n",
    "store = VertexFSVectorStore(\n",
    "    project_id=PROJECT_ID,\n",
    "    dataset_name=DATASET,\n",
    "    table_name=TABLE,\n",
    "    location=REGION,\n",
    "    embedding=embedding,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0017a81d6a85"
   },
   "source": [
    "### Add texts\n",
    "\n",
     "> Note: The first synchronization takes around 20 minutes because the Feature Online Store has to be created."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ebe4f493008b"
   },
   "outputs": [],
   "source": [
    "all_texts = [\"Apples and oranges\", \"Cars and airplanes\", \"Pineapple\", \"Train\", \"Banana\"]\n",
    "metadatas = [{\"len\": len(t)} for t in all_texts]\n",
    "\n",
    "store.add_texts(all_texts, metadatas=metadatas)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "524cf1357bf3"
   },
   "source": [
    "You can also start a sync on demand by executing the `sync_data` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "a8126ade72c1"
   },
   "outputs": [],
   "source": [
    "store.sync_data()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "3ddb1e1a5083"
   },
   "source": [
     "In a production environment, you can also use the `cron_schedule` class parameter to set up an automatic scheduled synchronization. The schedule uses standard cron syntax, optionally prefixed with a `TZ=` time zone.\n",
     "For example:\n",
    "```python\n",
    "store = VertexFSVectorStore(cron_schedule=\"TZ=America/Los_Angeles 00 13 11 8 *\", ...)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "42d2e7e6a71d"
   },
   "source": [
    "### Search for documents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "54785b0aa3e2"
   },
   "outputs": [],
   "source": [
    "query = \"I'd like a fruit.\"\n",
    "docs = store.similarity_search(query)\n",
    "print(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "bfcdbc4d3e01"
   },
   "source": [
    "### Search for documents by vector"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3d0749a5919f"
   },
   "outputs": [],
   "source": [
    "query_vector = embedding.embed_query(query)\n",
    "docs = store.similarity_search_by_vector(query_vector, k=2)\n",
    "print(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "40ae268fe9f0"
   },
   "source": [
    "### Search for documents with metadata filter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8f65b86ada37"
   },
   "outputs": [],
   "source": [
     "# This should only return the \"Banana\" document.\n",
    "docs = store.similarity_search_by_vector(query_vector, filter={\"len\": 6})\n",
    "print(docs)"
   ]
  },
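  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Use as a retriever\n",
    "\n",
    "As with any LangChain vector store, you can wrap the store in a retriever to plug it into chains. This is a minimal sketch using the standard `as_retriever` method from the base `VectorStore` interface; the `search_kwargs` value shown here is an optional illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Wrap the store in the standard LangChain retriever interface.\n",
    "retriever = store.as_retriever(search_kwargs={\"k\": 2})\n",
    "retriever.invoke(\"I'd like a fruit.\")"
   ]
  },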
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0fc8bfef534e"
   },
   "source": [
    "### Add text with embeddings\n",
    "\n",
     "You can also bring your own embeddings with the `add_texts_with_embeddings` method.\n",
    "This is particularly useful for multimodal data which might require custom preprocessing before the embedding generation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "fa2f749e91d2"
   },
   "outputs": [],
   "source": [
     "items = [\"some text\"]\n",
     "embs = embedding.embed_documents(items)\n",
     "\n",
     "ids = store.add_texts_with_embeddings(\n",
     "    texts=items, embs=embs, metadatas=[{\"len\": len(t)} for t in items]\n",
    ")"
   ]
  },
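  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Delete documents\n",
    "\n",
    "You can remove documents by id, using the ids returned when the texts were added. This sketch assumes the store supports the standard `delete` method of the base vector store class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Delete the documents that were just added, using their returned ids.\n",
    "store.delete(ids=ids)"
   ]
  },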
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "82a5d2842835"
   },
   "source": [
    "### Batch serving with BigQuery\n",
     "You can simply use the `.to_bq_vector_store()` method to get a `BigQueryVectorStore` object, which offers optimized performance for batch use cases. All mandatory parameters are automatically carried over from the existing class. See the [class definition](https://github.com/langchain-ai/langchain-google/blob/main/libs/community/langchain_google_community/bq_storage_vectorstores/bigquery.py#L26) for all the parameters you can use.\n",
     "\n",
     "Moving back to `VertexFSVectorStore` is just as easy with the `.to_vertex_fs_vector_store()` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "2d7997776d40"
   },
   "outputs": [],
   "source": [
     "bq_store = store.to_bq_vector_store()  # pass optional BigQueryVectorStore parameters as arguments"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "name": "google_vertex_ai_feature_store.ipynb",
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
