{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "266aaf59-9b66-4c0d-b797-5292628392ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "pip install transformers sentence_transformers openai"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b19cb54f-e63f-4d9b-b7ff-d18a30635cd2",
   "metadata": {},
   "source": [
    "# Overview\n",
    "\n",
    "In this tutorial, we'll use Feast to inject documents into the context of an LLM (Large Language Model) to power a RAG Application (Retrieval Augmented Generation) with Milvus as the online vector database.\n",
    "\n",
    "Feast solves several common issues in this flow:\n",
    "1. **Online retrieval:** At inference time, LLMs often need access to data that isn't readily \n",
    "   available and needs to be precomputed from other data sources.\n",
    "2. **Vector Search:** Feast has built support for vector similarity search that is easily configured declaritively so users can focus on their application. Milvus provides powerful and efficient vector similarity search capabilities.\n",
    "3. **Richer structured data:** Along with vector search, users can query standard structured fields to inject into the LLM context for better user experiences.\n",
    "4. **Feature/Context and versioning:** Different teams within an organization are often unable to reuse \n",
    "   data across projects and services, resulting in duplicate application logic. Models have data dependencies that need \n",
    "   to be versioned, for example when running A/B tests on model/prompt versions.\n",
    "   * Feast enables discovery of and collaboration on previously used documents, features, and enables versioning of sets of \n",
    "     data.\n",
    "\n",
    "We will:\n",
    "1. Deploy a local feature store with a **Parquet file offline store** and **Sqlite online store**.\n",
    "2. Write/materialize the data (i.e., feature values) from the offline store (a parquet file) into the online store (Sqlite).\n",
    "3. Serve the features using the Feast SDK with Milvus's vector search capabilitie\n",
    "4. Inject the document into the LLM's context to answer questions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "425cf2f7-70b5-423c-a4f2-f470d8638135",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "! pip install feast[nlp] -U -q\n",
    "! echo \"Please restart your runtime now (Runtime -> Restart runtime). This ensures that the correct dependencies are loaded.\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db162bb9-e262-4958-990d-fd8f3f1f1249",
   "metadata": {},
   "source": [
    "**Reminder**: Please restart your runtime after installing Feast (Runtime -> Restart runtime). This ensures that the correct dependencies are loaded."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a25cf84f-c255-4bb3-a3d7-e5512c1ba10d",
   "metadata": {},
   "source": [
    "## Step 2: Create a feature repository\n",
    "\n",
    "A feature repository is a directory that contains the configuration of the feature store and individual features. This configuration is written as code (Python/YAML) and it's highly recommended that teams track it centrally using git. See [Feature Repository](https://docs.feast.dev/reference/feature-repository) for a detailed explanation of feature repositories.\n",
    "\n",
    "The easiest way to create a new feature repository to use the `feast init` command in your terminal. For this RAG demo, you **do not** need to initialize a feast repo. We have already provided a complete feature repository for you in the current directory (check `feature_repo`) with all the necessary Milvus configurations set up and ready to use.\n",
    "\n",
    "\n",
    "### Demo data scenario \n",
    "- We take data from the popular library [Docling](https://github.com/docling-project/docling) to parse PDFs into sentences which are used for RAG.\n",
    "- Our goal is to show how simple it is to transform PDFs into text data that can be used for RAG applications with Milvus and Feast."
   ]
  },
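  {
   "cell_type": "markdown",
   "id": "3f9d2b71-5c44-4e1a-9a7e-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "To make the parsing step concrete, here is a rough sketch (an illustration, not code from this repo; it assumes the `docling` package is installed and uses the bundled `data/small.pdf` as an example input) of converting a PDF to markdown with Docling:\n",
    "\n",
    "```python\n",
    "from docling.document_converter import DocumentConverter\n",
    "\n",
    "# Convert a local PDF into a structured DoclingDocument\n",
    "converter = DocumentConverter()\n",
    "result = converter.convert(\"data/small.pdf\")\n",
    "\n",
    "# Export the parsed document to markdown, ready for chunking and embedding\n",
    "markdown_text = result.document.export_to_markdown()\n",
    "```\n",
    "\n",
    "The resulting markdown can then be split into chunks and embedded, which is what the precomputed Parquet files in `data/` contain."
   ]
  },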
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "c07166a0-ff77-4bc7-b159-feb8f43aa3f0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import feast\n",
    "import warnings\n",
    "\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c969b62f-4f58-49ed-ae23-ace1916de0c0",
   "metadata": {},
   "source": [
    "### Step 2a: Inspecting the feature repository\n",
    "\n",
    "Let's take a look at the demo repo itself. It breaks down into\n",
    "\n",
    "\n",
    "* `data/` contains raw demo parquet data\n",
    "* `example_repo.py` contains demo feature definitions\n",
    "* `feature_store.yaml` contains a demo setup configuring where data sources are\n",
    "* `test_workflow.py` showcases how to run all key Feast commands, including defining, retrieving, and pushing features.\n",
    "   * You can run this with `python test_workflow.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "5d531836-5981-4a34-9367-51b09af18a8a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/Users/farceo/dev/feast/examples/rag-docling/feature_repo\n",
      "\u001b[1m\u001b[36m__pycache__\u001b[m\u001b[m          example_repo.py      transformed_rows.pkl\n",
      "\u001b[1m\u001b[36mdata\u001b[m\u001b[m                 feature_store.yaml\n",
      "\n",
      "./__pycache__:\n",
      "example_repo.cpython-310.pyc example_repo.cpython-311.pyc\n",
      "\n",
      "./data:\n",
      "docling_samples.parquet       small.pdf\n",
      "metadata_samples.parquet      smallest-possible-pdf-2.0.pdf\n",
      "online_store.db\n"
     ]
    }
   ],
   "source": [
    "%cd feature_repo/\n",
    "!ls -R"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d14a8073-5030-4d35-9c96-f5360aeaf39f",
   "metadata": {},
   "source": [
    "### Step 2b: Inspecting the project configuration\n",
    "Let's inspect the setup of the project in `feature_store.yaml`. \n",
    "\n",
    "The key line defining the overall architecture of the feature store is the **provider**. \n",
    "\n",
    "The provider value sets default offline and online stores. \n",
    "* The offline store provides the compute layer to process historical data (for generating training data & feature \n",
    "  values for serving). \n",
    "* The online store is a low latency store of the latest feature values (for powering real-time inference).\n",
    "\n",
    "Valid values for `provider` in `feature_store.yaml` are:\n",
    "\n",
    "* local: use file source with Milvus Lite\n",
    "* gcp: use BigQuery/Snowflake with Google Cloud Datastore/Redis\n",
    "* aws: use Redshift/Snowflake with DynamoDB/Redis\n",
    "\n",
    "Note that there are many other offline / online stores Feast works with, including Azure, Hive, Trino, and PostgreSQL via community plugins. See https://docs.feast.dev/roadmap for all supported connectors.\n",
    "\n",
    "A custom setup can also be made by following [Customizing Feast](https://docs.feast.dev/v/master/how-to-guides/customizing-feast)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "14c830ef-f5a4-4867-ad5c-87e709df7057",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[94mproject\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mrag\u001b[37m\u001b[39;49;00m\n",
      "\u001b[94mprovider\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mlocal\u001b[37m\u001b[39;49;00m\n",
      "\u001b[94mregistry\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mdata/registry.db\u001b[37m\u001b[39;49;00m\n",
      "\u001b[94monline_store\u001b[39;49;00m:\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m  \u001b[39;49;00m\u001b[94mtype\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mmilvus\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m  \u001b[39;49;00m\u001b[94mpath\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mdata/online_store.db\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m  \u001b[39;49;00m\u001b[94membedding_dim\u001b[39;49;00m:\u001b[37m \u001b[39;49;00m384\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m  \u001b[39;49;00m\u001b[94mindex_type\u001b[39;49;00m:\u001b[37m \u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\u001b[33mFLAT\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m\u001b[39;49;00m\n",
      "\u001b[94moffline_store\u001b[39;49;00m:\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m  \u001b[39;49;00m\u001b[94mtype\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mfile\u001b[37m\u001b[39;49;00m\n",
      "\u001b[94mentity_key_serialization_version\u001b[39;49;00m:\u001b[37m \u001b[39;49;00m3\u001b[37m\u001b[39;49;00m\n",
      "\u001b[94mauth\u001b[39;49;00m:\u001b[37m\u001b[39;49;00m\n",
      "\u001b[37m    \u001b[39;49;00m\u001b[94mtype\u001b[39;49;00m:\u001b[37m \u001b[39;49;00mno_auth\u001b[37m\u001b[39;49;00m\n"
     ]
    }
   ],
   "source": [
    "!pygmentize feature_store.yaml"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ce80d1a-05d3-434d-bd1e-1ade8abd1f9f",
   "metadata": {},
   "source": [
    "### Inspecting the raw data\n",
    "\n",
    "The raw feature data we have in this demo is stored in a local parquet file. The dataset Wikipedia summaries of diferent cities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "788a27ff-16a4-4b23-8c1c-ba27fd918aa5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "embedding length = 384\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd \n",
    "\n",
    "df = pd.read_parquet(\"./data/docling_samples.parquet\")\n",
    "mdf = pd.read_parquet(\"./data/metadata_samples.parquet\")\n",
    "df['chunk_embedding'] = df['vector'].apply(lambda x: x.tolist())\n",
    "embedding_length = len(df['vector'][0])\n",
    "print(f'embedding length = {embedding_length}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ada1bf0a-8b1a-4821-becd-488abeb4d2ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "df['created'] = pd.Timestamp.now()\n",
    "mdf['created'] = pd.Timestamp.now()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "058d5634-0ac2-4e9a-a677-f0869de61f43",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>document_id</th>\n",
       "      <th>chunk_id</th>\n",
       "      <th>file_name</th>\n",
       "      <th>raw_chunk_markdown</th>\n",
       "      <th>vector</th>\n",
       "      <th>chunk_embedding</th>\n",
       "      <th>created</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-1</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...</td>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-2</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\nTables organize valuab...</td>\n",
       "      <td>[0.050771258771419525, -0.0055733839981257915,...</td>\n",
       "      <td>[0.050771258771419525, -0.0055733839981257915,...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-3</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\ncomplex column/row-hea...</td>\n",
       "      <td>[-0.05088765174150467, 0.05101901665329933, -0...</td>\n",
       "      <td>[-0.05088765174150467, 0.05101901665329933, -0...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-4</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\nmodel. The latter impr...</td>\n",
       "      <td>[0.011835305020213127, -0.09409898519515991, 0...</td>\n",
       "      <td>[0.011835305020213127, -0.09409898519515991, 0...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-5</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\nwe can obtain the cont...</td>\n",
       "      <td>[-0.0068757119588553905, 0.006624480709433556,...</td>\n",
       "      <td>[-0.0068757119588553905, 0.006624480709433556,...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  document_id chunk_id     file_name  \\\n",
       "0       doc-1  chunk-1  2203.01017v2   \n",
       "1       doc-1  chunk-2  2203.01017v2   \n",
       "2       doc-1  chunk-3  2203.01017v2   \n",
       "3       doc-1  chunk-4  2203.01017v2   \n",
       "4       doc-1  chunk-5  2203.01017v2   \n",
       "\n",
       "                                  raw_chunk_markdown  \\\n",
       "0  Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...   \n",
       "1  a. Picture of a table:\\nTables organize valuab...   \n",
       "2  a. Picture of a table:\\ncomplex column/row-hea...   \n",
       "3  a. Picture of a table:\\nmodel. The latter impr...   \n",
       "4  a. Picture of a table:\\nwe can obtain the cont...   \n",
       "\n",
       "                                              vector  \\\n",
       "0  [-0.056879762560129166, 0.01667858101427555, -...   \n",
       "1  [0.050771258771419525, -0.0055733839981257915,...   \n",
       "2  [-0.05088765174150467, 0.05101901665329933, -0...   \n",
       "3  [0.011835305020213127, -0.09409898519515991, 0...   \n",
       "4  [-0.0068757119588553905, 0.006624480709433556,...   \n",
       "\n",
       "                                     chunk_embedding  \\\n",
       "0  [-0.056879762560129166, 0.01667858101427555, -...   \n",
       "1  [0.050771258771419525, -0.0055733839981257915,...   \n",
       "2  [-0.05088765174150467, 0.05101901665329933, -0...   \n",
       "3  [0.011835305020213127, -0.09409898519515991, 0...   \n",
       "4  [-0.0068757119588553905, 0.006624480709433556,...   \n",
       "\n",
       "                     created  \n",
       "0 2025-04-20 23:19:48.930517  \n",
       "1 2025-04-20 23:19:48.930517  \n",
       "2 2025-04-20 23:19:48.930517  \n",
       "3 2025-04-20 23:19:48.930517  \n",
       "4 2025-04-20 23:19:48.930517  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import display\n",
    "\n",
    "display(df.head())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "e36b538d-21d2-4770-b5d0-667aaa9fe1db",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>document_id</th>\n",
       "      <th>file_name</th>\n",
       "      <th>full_document_markdown</th>\n",
       "      <th>pdf_bytes</th>\n",
       "      <th>created</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>## TableFormer: Table Structure Understanding ...</td>\n",
       "      <td>b'%PDF-1.5\\n%\\x8f\\n5 0 obj\\n&lt;&lt; /Type /XObject ...</td>\n",
       "      <td>2025-04-20 23:19:48.931844</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>doc-3</td>\n",
       "      <td>2305.03393v1-pg9</td>\n",
       "      <td>order to compute the TED score. Inference timi...</td>\n",
       "      <td>b'%PDF-1.3\\n%\\xc4\\xe5\\xf2\\xe5\\xeb\\xa7\\xf3\\xa0\\...</td>\n",
       "      <td>2025-04-20 23:19:48.931844</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>doc-2</td>\n",
       "      <td>2305.03393v1</td>\n",
       "      <td>## Optimized Table Tokenization for Table Stru...</td>\n",
       "      <td>b'%PDF-1.5\\n%\\x8f\\n74 0 obj\\n&lt;&lt; /Filter /Flate...</td>\n",
       "      <td>2025-04-20 23:19:48.931844</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>doc-4</td>\n",
       "      <td>amt_handbook_sample</td>\n",
       "      <td>pulleys, provided the inner race of the bearin...</td>\n",
       "      <td>b'%PDF-1.6\\r%\\xe2\\xe3\\xcf\\xd3\\r\\n875 0 obj\\r&lt;&lt;...</td>\n",
       "      <td>2025-04-20 23:19:48.931844</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>doc-5</td>\n",
       "      <td>code_and_formula</td>\n",
       "      <td>## JavaScript Code Example\\n\\nLorem ipsum dolo...</td>\n",
       "      <td>b'%PDF-1.5\\n%\\xbf\\xf7\\xa2\\xfe\\n3 0 obj\\n&lt;&lt; /Li...</td>\n",
       "      <td>2025-04-20 23:19:48.931844</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  document_id            file_name  \\\n",
       "0       doc-1         2203.01017v2   \n",
       "1       doc-3     2305.03393v1-pg9   \n",
       "2       doc-2         2305.03393v1   \n",
       "3       doc-4  amt_handbook_sample   \n",
       "4       doc-5     code_and_formula   \n",
       "\n",
       "                              full_document_markdown  \\\n",
       "0  ## TableFormer: Table Structure Understanding ...   \n",
       "1  order to compute the TED score. Inference timi...   \n",
       "2  ## Optimized Table Tokenization for Table Stru...   \n",
       "3  pulleys, provided the inner race of the bearin...   \n",
       "4  ## JavaScript Code Example\\n\\nLorem ipsum dolo...   \n",
       "\n",
       "                                           pdf_bytes  \\\n",
       "0  b'%PDF-1.5\\n%\\x8f\\n5 0 obj\\n<< /Type /XObject ...   \n",
       "1  b'%PDF-1.3\\n%\\xc4\\xe5\\xf2\\xe5\\xeb\\xa7\\xf3\\xa0\\...   \n",
       "2  b'%PDF-1.5\\n%\\x8f\\n74 0 obj\\n<< /Filter /Flate...   \n",
       "3  b'%PDF-1.6\\r%\\xe2\\xe3\\xcf\\xd3\\r\\n875 0 obj\\r<<...   \n",
       "4  b'%PDF-1.5\\n%\\xbf\\xf7\\xa2\\xfe\\n3 0 obj\\n<< /Li...   \n",
       "\n",
       "                     created  \n",
       "0 2025-04-20 23:19:48.931844  \n",
       "1 2025-04-20 23:19:48.931844  \n",
       "2 2025-04-20 23:19:48.931844  \n",
       "3 2025-04-20 23:19:48.931844  \n",
       "4 2025-04-20 23:19:48.931844  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "display(mdf.head())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ec07d38d-d0ff-4dc3-b041-3bf24de9e7e3",
   "metadata": {},
   "source": [
    "## Step 3: Register feature definitions and deploy your feature store\n",
    "\n",
    "`feast apply` scans python files in the current directory for feature/entity definitions and deploys infrastructure according to `feature_store.yaml`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79409ca9-7552-4aa5-b95b-29f836a0d3a5",
   "metadata": {},
   "source": [
    "### Step 3a: Inspecting feature definitions\n",
    "Let's inspect what `example_repo.py` looks like:\n",
    "\n",
    "```python\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76634929-c84a-4301-93d3-88292335bde0",
   "metadata": {},
   "source": [
    "### Step 3b: Applying feature definitions\n",
    "Now we run `feast apply` to register the feature views and entities defined in `example_repo.py`, and sets up SQLite online store tables. Note that we had previously specified SQLite as the online store in `feature_store.yaml` by specifying a `local` provider."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "63454dea-9d55-4188-b048-8b943fe80e3a",
   "metadata": {},
   "outputs": [],
   "source": [
    "%rm -rf .ipynb_checkpoints/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "837e1530-e863-4e5f-b206-b6b4b3ca2aa2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/pymilvus/client/__init__.py:6: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\n",
      "  from pkg_resources import DistributionNotFound, get_distribution\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/pkg_resources/__init__.py:3147: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('sphinxcontrib')`.\n",
      "Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\n",
      "  declare_namespace(pkg)\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/environs/__init__.py:58: DeprecationWarning: The '__version_info__' attribute is deprecated and will be removed in in a future version. Use feature detection or 'packaging.Version(importlib.metadata.version(\"marshmallow\")).release' instead.\n",
      "  _SUPPORTS_LOAD_DEFAULT = ma.__version_info__ >= (3, 13)\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:192: UserWarning: Field name \"vector_enabled\" in \"MilvusOnlineStoreConfig\" shadows an attribute in parent \"VectorStoreConfig\"\n",
      "  warnings.warn(\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/docling_core/types/doc/document.py:3847: DeprecationWarning: deprecated\n",
      "  if not d.validate_tree(d.body) or not d.validate_tree(d.furniture):\n",
      "No project found in the repository. Using project name rag defined in feature_store.yaml\n",
      "Applying changes for project rag\n",
      "/Users/farceo/dev/feast/sdk/python/feast/feature_store.py:581: RuntimeWarning: On demand feature view is an experimental feature. This API is stable, but the functionality does not scale well for offline retrieval\n",
      "  warnings.warn(\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/docling/pipeline/standard_pdf_pipeline.py:61: DeprecationWarning: Field `generate_table_images` is deprecated. To obtain table images, set `PdfPipelineOptions.generate_page_images = True` before conversion and then use the `TableItem.get_image` function.\n",
      "  or self.pipeline_options.generate_table_images\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/docling_core/types/doc/document.py:3847: DeprecationWarning: deprecated\n",
      "  if not d.validate_tree(d.body) or not d.validate_tree(d.furniture):\n",
      "/Users/farceo/dev/feast/.venv/lib/python3.11/site-packages/docling/pipeline/standard_pdf_pipeline.py:215: DeprecationWarning: Field `generate_table_images` is deprecated. To obtain table images, set `PdfPipelineOptions.generate_page_images = True` before conversion and then use the `TableItem.get_image` function.\n",
      "  or self.pipeline_options.generate_table_images\n",
      "Connecting to Milvus in local mode using /Users/farceo/dev/feast/examples/rag-docling/feature_repo/data/online_store.db\n",
      "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
      "To disable this warning, you can either:\n",
      "\t- Avoid using `tokenizers` before the fork if possible\n",
      "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n",
      "04/20/2025 11:20:09 PM pymilvus.milvus_client.milvus_client DEBUG: Created new connection using: 5253ac0d157c4625b39bd6bc6c543be1\n",
      "Deploying infrastructure for \u001b[1m\u001b[32mdocling_feature_view\u001b[0m\n",
      "E20250420 23:20:09.423681 278564 server.cpp:47] [SERVER][BlockLock][] Process exit\n"
     ]
    }
   ],
   "source": [
    "! feast apply "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ad7654cc-865c-4bb4-8c0f-d3086c5d9f7e",
   "metadata": {},
   "source": [
    "## Step 5: Load features into your online store"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "34ded931-3de0-4951-aead-1e8ca1679cbe",
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "from feast import FeatureStore\n",
    "\n",
    "store = FeatureStore(repo_path=\".\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c784d77-e96c-455c-9f1f-9183bab58d72",
   "metadata": {},
   "source": [
    "### Step 5a: Using `write_to_online_store`\n",
    "\n",
    "We now serialize the latest values of features since the beginning of time to prepare for serving. Note, `write_to_online_store` serializes all new features since the last `write_to_online_store` call, or since the time provided minus the `ttl` timedelta. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "3ebfcb4a-c275-421d-bfb4-347b821ea15d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>document_id</th>\n",
       "      <th>chunk_id</th>\n",
       "      <th>file_name</th>\n",
       "      <th>raw_chunk_markdown</th>\n",
       "      <th>vector</th>\n",
       "      <th>chunk_embedding</th>\n",
       "      <th>created</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-1</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...</td>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-2</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\nTables organize valuab...</td>\n",
       "      <td>[0.050771258771419525, -0.0055733839981257915,...</td>\n",
       "      <td>[0.050771258771419525, -0.0055733839981257915,...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-3</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\ncomplex column/row-hea...</td>\n",
       "      <td>[-0.05088765174150467, 0.05101901665329933, -0...</td>\n",
       "      <td>[-0.05088765174150467, 0.05101901665329933, -0...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-4</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\nmodel. The latter impr...</td>\n",
       "      <td>[0.011835305020213127, -0.09409898519515991, 0...</td>\n",
       "      <td>[0.011835305020213127, -0.09409898519515991, 0...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>doc-1</td>\n",
       "      <td>chunk-5</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>a. Picture of a table:\\nwe can obtain the cont...</td>\n",
       "      <td>[-0.0068757119588553905, 0.006624480709433556,...</td>\n",
       "      <td>[-0.0068757119588553905, 0.006624480709433556,...</td>\n",
       "      <td>2025-04-20 23:19:48.930517</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  document_id chunk_id     file_name  \\\n",
       "0       doc-1  chunk-1  2203.01017v2   \n",
       "1       doc-1  chunk-2  2203.01017v2   \n",
       "2       doc-1  chunk-3  2203.01017v2   \n",
       "3       doc-1  chunk-4  2203.01017v2   \n",
       "4       doc-1  chunk-5  2203.01017v2   \n",
       "\n",
       "                                  raw_chunk_markdown  \\\n",
       "0  Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...   \n",
       "1  a. Picture of a table:\\nTables organize valuab...   \n",
       "2  a. Picture of a table:\\ncomplex column/row-hea...   \n",
       "3  a. Picture of a table:\\nmodel. The latter impr...   \n",
       "4  a. Picture of a table:\\nwe can obtain the cont...   \n",
       "\n",
       "                                              vector  \\\n",
       "0  [-0.056879762560129166, 0.01667858101427555, -...   \n",
       "1  [0.050771258771419525, -0.0055733839981257915,...   \n",
       "2  [-0.05088765174150467, 0.05101901665329933, -0...   \n",
       "3  [0.011835305020213127, -0.09409898519515991, 0...   \n",
       "4  [-0.0068757119588553905, 0.006624480709433556,...   \n",
       "\n",
       "                                     chunk_embedding  \\\n",
       "0  [-0.056879762560129166, 0.01667858101427555, -...   \n",
       "1  [0.050771258771419525, -0.0055733839981257915,...   \n",
       "2  [-0.05088765174150467, 0.05101901665329933, -0...   \n",
       "3  [0.011835305020213127, -0.09409898519515991, 0...   \n",
       "4  [-0.0068757119588553905, 0.006624480709433556,...   \n",
       "\n",
       "                     created  \n",
       "0 2025-04-20 23:19:48.930517  \n",
       "1 2025-04-20 23:19:48.930517  \n",
       "2 2025-04-20 23:19:48.930517  \n",
       "3 2025-04-20 23:19:48.930517  \n",
       "4 2025-04-20 23:19:48.930517  "
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ad81775-7d9d-4765-9058-495aa907bc1a",
   "metadata": {},
   "source": [
    "## Ingesting transformed data to the feature view that has no associated transformation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "af7fdad8-6cf9-4da6-923a-cbdc229f1f4a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Connecting to Milvus in local mode using data/online_store.db\n"
     ]
    }
   ],
   "source": [
    "store.write_to_online_store(feature_view_name='docling_feature_view', df=df)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88d7c3a6-521b-4df4-9847-0b577c013f4b",
   "metadata": {},
   "source": [
    "## Ingesting Pre-transformed data that was created in our Docling Demo notebook"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "a2655725-5cc4-4f07-ade4-dc5e705eed05",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Turning off transformation on writes is as simple as changing the default behavior\n",
    "store.write_to_online_store(\n",
    "    feature_view_name='docling_transform_docs', \n",
    "    df=df[df['document_id']!='doc-1'], \n",
    "    transform_on_write=False,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b365386b-ed45-48c0-85a3-301899dd7758",
   "metadata": {},
   "source": [
    "## Ingesting the raw data data and transforming before insertion to Milvus with Docling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "b43b62fc-961d-4803-b0f1-58cd23e66a7f",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Token indices sequence length is longer than the specified maximum sequence length for this model (933 > 512). Running this sequence through the model will result in indexing errors\n"
     ]
    }
   ],
   "source": [
    "# Now we can transform a raw PDF on the fly\n",
    "store.write_to_online_store(\n",
    "    feature_view_name='docling_transform_docs', \n",
    "    df=mdf[mdf['document_id']=='doc-1'], \n",
    "    transform_on_write=True, # this is the default\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b836e5b1-1fe2-4e9d-8c9a-bdc91da8254e",
   "metadata": {},
   "source": [
    "### Step 5b: Inspect materialized features\n",
    "\n",
    "Note that now there are `online_store.db` and `registry.db`, which store the materialized features and schema information, respectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "1307b1aa-fecf-4adf-aafc-f65d89ca735c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>document_id_chunk_id_pk</th>\n",
       "      <th>chunk_id</th>\n",
       "      <th>chunk_text</th>\n",
       "      <th>created_ts</th>\n",
       "      <th>document_id</th>\n",
       "      <th>event_ts</th>\n",
       "      <th>vector</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0200000002000000080000006368756e6b5f6964020000...</td>\n",
       "      <td>chunk-0</td>\n",
       "      <td>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...</td>\n",
       "      <td>1745220099533292</td>\n",
       "      <td>doc-1</td>\n",
       "      <td>1745220099533292</td>\n",
       "      <td>[-0.056879763, 0.016678581, -0.019722786, -0.0...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0200000002000000080000006368756e6b5f6964020000...</td>\n",
       "      <td>chunk-1</td>\n",
       "      <td>a. Picture of a table:\\nTables organize valuab...</td>\n",
       "      <td>1745220099533294</td>\n",
       "      <td>doc-1</td>\n",
       "      <td>1745220099533294</td>\n",
       "      <td>[0.05077126, -0.005573384, -0.05867869, 0.0341...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0200000002000000080000006368756e6b5f6964020000...</td>\n",
       "      <td>chunk-2</td>\n",
       "      <td>a. Picture of a table:\\ncomplex column/row-hea...</td>\n",
       "      <td>1745220099533295</td>\n",
       "      <td>doc-1</td>\n",
       "      <td>1745220099533295</td>\n",
       "      <td>[-0.05088765, 0.051019017, -0.06598652, -0.045...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0200000002000000080000006368756e6b5f6964020000...</td>\n",
       "      <td>chunk-3</td>\n",
       "      <td>a. Picture of a table:\\nmodel. The latter impr...</td>\n",
       "      <td>1745220099533295</td>\n",
       "      <td>doc-1</td>\n",
       "      <td>1745220099533295</td>\n",
       "      <td>[0.011835305, -0.094098985, 0.00086131715, -0....</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>0200000002000000080000006368756e6b5f6964020000...</td>\n",
       "      <td>chunk-4</td>\n",
       "      <td>a. Picture of a table:\\nwe can obtain the cont...</td>\n",
       "      <td>1745220099533295</td>\n",
       "      <td>doc-1</td>\n",
       "      <td>1745220099533295</td>\n",
       "      <td>[-0.006875712, 0.0066244807, -0.10691858, -0.0...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                             document_id_chunk_id_pk chunk_id  \\\n",
       "0  0200000002000000080000006368756e6b5f6964020000...  chunk-0   \n",
       "1  0200000002000000080000006368756e6b5f6964020000...  chunk-1   \n",
       "2  0200000002000000080000006368756e6b5f6964020000...  chunk-2   \n",
       "3  0200000002000000080000006368756e6b5f6964020000...  chunk-3   \n",
       "4  0200000002000000080000006368756e6b5f6964020000...  chunk-4   \n",
       "\n",
       "                                          chunk_text        created_ts  \\\n",
       "0  Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...  1745220099533292   \n",
       "1  a. Picture of a table:\\nTables organize valuab...  1745220099533294   \n",
       "2  a. Picture of a table:\\ncomplex column/row-hea...  1745220099533295   \n",
       "3  a. Picture of a table:\\nmodel. The latter impr...  1745220099533295   \n",
       "4  a. Picture of a table:\\nwe can obtain the cont...  1745220099533295   \n",
       "\n",
       "  document_id          event_ts  \\\n",
       "0       doc-1  1745220099533292   \n",
       "1       doc-1  1745220099533294   \n",
       "2       doc-1  1745220099533295   \n",
       "3       doc-1  1745220099533295   \n",
       "4       doc-1  1745220099533295   \n",
       "\n",
       "                                              vector  \n",
       "0  [-0.056879763, 0.016678581, -0.019722786, -0.0...  \n",
       "1  [0.05077126, -0.005573384, -0.05867869, 0.0341...  \n",
       "2  [-0.05088765, 0.051019017, -0.06598652, -0.045...  \n",
       "3  [0.011835305, -0.094098985, 0.00086131715, -0....  \n",
       "4  [-0.006875712, 0.0066244807, -0.10691858, -0.0...  "
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pymilvus_client = store._provider._online_store._connect(store.config)\n",
    "COLLECTION_NAME = [c for c in pymilvus_client.list_collections() if 'docling_transform_docs' in c][0]\n",
    "\n",
    "milvus_query_result = pymilvus_client.query(\n",
    "    collection_name=COLLECTION_NAME,\n",
    "    filter=\"document_id == 'doc-1'\",\n",
    "    limit=1000,\n",
    ")\n",
    "pd.DataFrame(milvus_query_result).head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fbf3921-e775-46b7-9915-d18c6592586f",
   "metadata": {},
   "source": [
    "### Quick note on entity keys\n",
    "Note from the above command that the online store indexes by `entity_key`. \n",
    "\n",
    "[Entity keys](https://docs.feast.dev/getting-started/concepts/entity#entity-key) include a list of all entities needed (e.g. all relevant primary keys) to generate the feature vector. In this case, this is a serialized version of the `document_id`. We use this later to fetch all features for a given driver at inference time."
   ]
  },
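  {
   "cell_type": "markdown",
   "id": "2f1e0d9c-8b7a-4c6d-9e5f-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "The serialized key is not opaque: it is a length-prefixed encoding of the join key names and values. As a quick illustration using only the Python standard library (this decodes one fragment of the hex shown above, not the full Feast serialization format):\n",
    "\n",
    "```python\n",
    "# The hex run 6368756e6b5f6964 inside the entity key above is simply the\n",
    "# ASCII bytes of the join key name chunk_id.\n",
    "raw = bytes.fromhex('6368756e6b5f6964')\n",
    "print(raw)  # b'chunk_id'\n",
    "```"
   ]
  },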
  {
   "cell_type": "markdown",
   "id": "516f6e4a-2d37-4428-8dba-81620a65c2ad",
   "metadata": {},
   "source": [
    "## Step 6: Embedding a query using PyTorch and Sentence Transformers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66b4e67d-6f94-4532-b107-abc4c0f002f1",
   "metadata": {},
   "source": [
    "During inference (e.g., during when a user submits a chat message) we need to embed the input text. This can be thought of as a feature transformation of the input data. In this example, we'll do this with a small Sentence Transformer from Hugging Face."
   ]
  },
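  {
   "cell_type": "markdown",
   "id": "7c2d4e8f-1a3b-4c5d-8e9f-6a7b8c9d0e1f",
   "metadata": {},
   "source": [
    "Conceptually, a Sentence Transformer mean-pools its per-token embeddings and L2-normalizes the result so that cosine similarity reduces to a dot product. A minimal sketch of that pooling step with toy numbers (an illustration only, not the real model that `embed_text` wraps):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# Toy per-token embeddings (3 tokens, 4 dimensions) standing in for\n",
    "# transformer output.\n",
    "token_embeddings = [\n",
    "    [0.1, 0.2, 0.3, 0.4],\n",
    "    [0.5, 0.1, 0.0, 0.2],\n",
    "    [0.0, 0.3, 0.3, 0.0],\n",
    "]\n",
    "\n",
    "# Mean-pool across tokens, then L2-normalize to unit length.\n",
    "pooled = [sum(dim) / len(token_embeddings) for dim in zip(*token_embeddings)]\n",
    "norm = math.sqrt(sum(x * x for x in pooled))\n",
    "sentence_embedding = [x / norm for x in pooled]\n",
    "print([round(x, 6) for x in sentence_embedding])  # [0.5, 0.5, 0.5, 0.5]\n",
    "```"
   ]
  },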
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "0cb57d77-4b22-4702-8c7a-f549a1b233ca",
   "metadata": {},
   "outputs": [],
   "source": [
    "from example_repo import embed_text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "5a69d55c-6f70-4f63-a167-0a38e840e3b6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[0.06765689700841904,\n",
       " 0.06349590420722961,\n",
       " 0.0487130805850029,\n",
       " 0.07930495589971542,\n",
       " 0.03744804859161377,\n",
       " 0.0026527801528573036,\n",
       " 0.039374902844429016,\n",
       " -0.007098457310348749,\n",
       " 0.05936148017644882,\n",
       " 0.031537000089883804]"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embed_text(\"this is an example sentence\")[0:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "67868cdf-04e9-4086-bed8-050e4902ed71",
   "metadata": {},
   "source": [
    "## Step 7: Fetching real-time vectors and data for online inference"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29b9ae94-7daa-4d56-8bca-9339d09cd1ed",
   "metadata": {},
   "source": [
    "At inference time, we need to use vector similarity search through the document embeddings from the online feature store using `retrieve_online_documents_v2()` while passing the embedded query. These feature vectors can then be fed into the context of the LLM."
   ]
  },
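  {
   "cell_type": "markdown",
   "id": "9e8d7c6b-5a4f-4e3d-8c2b-1a0f9e8d7c6b",
   "metadata": {},
   "source": [
    "The `COSINE` metric used below scores each stored vector against the query by the cosine of the angle between them, which is why the chunk matching the query comes back with a score of 1.0. A self-contained sketch of the scoring (an illustration, not Milvus's actual implementation):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # dot(a, b) / (|a| * |b|)\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(y * y for y in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "query = [0.1, 0.2, 0.3]\n",
    "print(round(cosine_similarity(query, query), 6))           # 1.0\n",
    "print(round(cosine_similarity(query, [0.3, 0.2, 0.1]), 6)) # 0.714286\n",
    "```"
   ]
  },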
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "583ffffc-d1c8-450c-8997-376bd3960f0e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research\n",
      "{ ahn,nli,mly,taa @zurich.ibm.com }\n"
     ]
    }
   ],
   "source": [
    "sample_query = df['raw_chunk_markdown'].values[0] \n",
    "print(sample_query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "0c76a526-35dc-4af5-bd46-d181e3a8c23a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Note we can enhance this special case to embed within the feature server, optionally.\n",
    "query_embedding = embed_text(sample_query)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "511c5d5e-aee3-4375-9112-50a930f8b52e",
   "metadata": {},
   "source": [
    "### Let's fetch the data from the \"batch\" version of the documents stored in the `docling_feature_view` FeatureView"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "d6821005-8849-464a-9860-2833a38b7157",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>vector</th>\n",
       "      <th>file_name</th>\n",
       "      <th>raw_chunk_markdown</th>\n",
       "      <th>chunk_id</th>\n",
       "      <th>distance</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...</td>\n",
       "      <td>chunk-1</td>\n",
       "      <td>1.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>References\\n[1] Nicolas Carion, Francisco Mass...</td>\n",
       "      <td>chunk-188</td>\n",
       "      <td>0.370859</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>2203.01017v2</td>\n",
       "      <td>2. Previous work and State of the Art\\nhand. H...</td>\n",
       "      <td>chunk-31</td>\n",
       "      <td>0.352598</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                              vector     file_name  \\\n",
       "0  [-0.056879762560129166, 0.01667858101427555, -...  2203.01017v2   \n",
       "1  [-0.056879762560129166, 0.01667858101427555, -...  2203.01017v2   \n",
       "2  [-0.056879762560129166, 0.01667858101427555, -...  2203.01017v2   \n",
       "\n",
       "                                  raw_chunk_markdown   chunk_id  distance  \n",
       "0  Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...    chunk-1  1.000000  \n",
       "1  References\\n[1] Nicolas Carion, Francisco Mass...  chunk-188  0.370859  \n",
       "2  2. Previous work and State of the Art\\nhand. H...   chunk-31  0.352598  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Retrieve top k documents\n",
    "context_data = store.retrieve_online_documents_v2(\n",
    "    features=[\n",
    "        \"docling_feature_view:vector\",\n",
    "        \"docling_feature_view:file_name\",\n",
    "        \"docling_feature_view:raw_chunk_markdown\",\n",
    "        \"docling_feature_view:chunk_id\",\n",
    "    ],\n",
    "    query=query_embedding,\n",
    "    top_k=3,\n",
    "    distance_metric='COSINE',\n",
    ").to_df()\n",
    "\n",
    "display(context_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06f81ffd-3b13-43fc-bd85-2665fedf6a96",
   "metadata": {},
   "source": [
    "### Now let's fetch the data from the \"on demand\" version of the documents stored in the `docling_transform_docs` FeatureView"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "77c3f0e1-eec9-4fd9-8659-9a5668cbb8e7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>vector</th>\n",
       "      <th>document_id</th>\n",
       "      <th>chunk_text</th>\n",
       "      <th>chunk_id</th>\n",
       "      <th>distance</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>doc-1</td>\n",
       "      <td>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...</td>\n",
       "      <td>chunk-0</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>doc-7</td>\n",
       "      <td></td>\n",
       "      <td>chunk-25</td>\n",
       "      <td>0.978799</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>[-0.056879762560129166, 0.01667858101427555, -...</td>\n",
       "      <td>doc-7</td>\n",
       "      <td></td>\n",
       "      <td>chunk-72</td>\n",
       "      <td>0.968456</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                              vector document_id  \\\n",
       "0  [-0.056879762560129166, 0.01667858101427555, -...       doc-1   \n",
       "1  [-0.056879762560129166, 0.01667858101427555, -...       doc-7   \n",
       "2  [-0.056879762560129166, 0.01667858101427555, -...       doc-7   \n",
       "\n",
       "                                          chunk_text  chunk_id  distance  \n",
       "0  Ahmed Nassar, Nikolaos Livathinos, Maksym Lysa...   chunk-0       NaN  \n",
       "1                                                     chunk-25  0.978799  \n",
       "2                                                     chunk-72  0.968456  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Retrieve top k documents from the transformed data\n",
    "context_data = store.retrieve_online_documents_v2(\n",
    "    features=[\n",
    "        \"docling_transform_docs:vector\",\n",
    "        \"docling_transform_docs:document_id\",\n",
    "        \"docling_transform_docs:chunk_text\",\n",
    "        \"docling_transform_docs:chunk_id\",\n",
    "    ],\n",
    "    query=query_embedding,\n",
    "    top_k=3,\n",
    "    distance_metric='COSINE',\n",
    ").to_df()\n",
    "\n",
    "display(context_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69f477eb-e121-4d7c-9a8b-416cad9cfa8a",
   "metadata": {},
   "source": [
    "### `FeatureView` vs. `OnDemandFeatureView` for Vector Search\n",
    "\n",
    "If you look in `example_repo.py` you'll notice that `docling_example_feature_view` and `docling_transform_docs` are very similar\n",
    "with the exception of `docling_transform_docs` having the schema defined in the `@on_demand_feature_view` decorator and a function \n",
    "(i.e., a feature transformation) implemented after the name declaration.\n",
    "\n",
    "On the backend, Feast orchestrates the execution of this transformation within the Feature Server so that Feast can transform your \n",
    "documents with Docling via API and make your docs available for vector similarity search after transformation and insertion to the online store."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "f6aa7d24-4a80-48ea-9732-0818f333dac7",
   "metadata": {},
   "outputs": [],
   "source": [
    " def format_documents(context_df):\n",
    "    output_context = \"\"\n",
    "    \n",
    "    # Remove duplicates based on 'chunk_id' (ensuring unique document chunks)\n",
    "    unique_documents = context_df.drop_duplicates(subset=[\"chunk_id\"])[\"chunk_text\"]\n",
    "    \n",
    "    # Format each document\n",
    "    for i, document_text in enumerate(unique_documents):\n",
    "        output_context += f\"****START DOCUMENT {i}****\\n\"\n",
    "        output_context += f\"document = {{ {document_text.strip()} }}\\n\"\n",
    "        output_context += f\"****END DOCUMENT {i}****\\n\\n\"\n",
    "    \n",
    "    return output_context.strip()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "5dd668d6-da81-48a2-a841-a9df1804bfa7",
   "metadata": {},
   "outputs": [],
   "source": [
    "RAG_CONTEXT = format_documents(context_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "3978561a-79a0-48bb-86ca-d81293a0e618",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "****START DOCUMENT 0****\n",
      "document = { Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research\n",
      "{ ahn,nli,mly,taa @zurich.ibm.com } }\n",
      "****END DOCUMENT 0****\n",
      "\n",
      "****START DOCUMENT 1****\n",
      "document = {  }\n",
      "****END DOCUMENT 1****\n",
      "\n",
      "****START DOCUMENT 2****\n",
      "document = {  }\n",
      "****END DOCUMENT 2****\n"
     ]
    }
   ],
   "source": [
    "print(RAG_CONTEXT)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "09cad16f-4078-42de-80ee-2672dae5608a",
   "metadata": {},
   "outputs": [],
   "source": [
    "FULL_PROMPT = f\"\"\"\n",
    "You are an assistant for answering questions about a series of documents. You will be provided documentation from different documents. Provide a conversational answer.\n",
    "If you don't know the answer, just say \"I do not know.\" Don't make up an answer.\n",
    "\n",
    "Here are document(s) you should use when answer the users question:\n",
    "{RAG_CONTEXT}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "5d4b1739-e686-4d77-9f25-1cdec66f3773",
   "metadata": {},
   "outputs": [],
   "source": [
    "question = 'Who are the authors of the paper?'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "7bb4a000-8ef3-4006-9c61-7d76fa865d28",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(\n",
    "    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "da814147-9c78-4906-a84a-78fc88c2fc49",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": FULL_PROMPT},\n",
    "        {\"role\": \"user\", \"content\": question}\n",
    "    ],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "68cbd8df-af73-4dbe-97a9-f3cd89f36f3d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The authors of the paper are Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, and Peter Staar from IBM Research.\n"
     ]
    }
   ],
   "source": [
    "print('\\n'.join([c.message.content for c in response.choices]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4f01627-533b-49b0-9814-292360d064c6",
   "metadata": {},
   "source": [
    "# End"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
