{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "8c3403a6",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Copyright 2021 NVIDIA Corporation. All Rights Reserved.\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     http://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "# ================================\n",
    "\n",
    "# Each user is responsible for checking the content of datasets and the\n",
    "# applicable licenses and determining if suitable for the intended use."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ad9b5cc0-2110-464e-9773-003ffe7d216c",
   "metadata": {},
   "source": [
    "<img src=\"https://developer.download.nvidia.com/notebooks/dlsw-notebooks/merlin_merlin_01-building-recommender-systems-with-merlin/nvidia_logo.png\" style=\"width: 90px; float: right;\"> \n",
    "\n",
    "## Building Intelligent Recommender Systems with Merlin integrated with Milvus\n",
    "\n",
    "This notebook is created using the latest stable [merlin-tensorflow](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-tensorflow/tags) container. "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9657308-2e08-49b4-8924-eace75a4634c",
   "metadata": {},
   "source": [
    "### Overview\n",
    "\n",
    "Recommender Systems (RecSys) are the engine of the modern internet and a catalyst for human decisions. Building a recommender system is challenging because it requires multiple stages (data preprocessing, offline training, item retrieval, filtering, ranking, ordering, etc.) to work together seamlessly and efficiently. The biggest challenges for new practitioners are the lack of understanding of what recommender systems look like in the real world, and the gap between examples of simple models and a production-ready, end-to-end recommender system."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "405280b0-3d48-43b6-ab95-d29be7a43e9e",
   "metadata": {},
   "source": [
    "The figure below represents a four-stage recommender system. This is a more complex process than training a single model and deploying it, but it is much closer to what happens in real-world production recommender systems."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "27220153",
   "metadata": {},
   "source": [
    "![fourstage](../images/fourstages.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b27ffed1-4b4b-4b6f-b933-31e9f6c1b4e1",
   "metadata": {},
   "source": [
    "In this notebook and the next, we showcase how to develop and train a four-stage recommender system integrated with the Milvus vector database indexing and querying framework (for approximate nearest neighbor (ANN) search), and deploy it easily on [Triton Inference Server](https://github.com/triton-inference-server/server) using the Merlin Systems library. Let's briefly go over the concepts in the figure.\n",
    "- **Retrieval:** This step narrows down millions of items to thousands of candidates. We are going to train a Two-Tower item retrieval model to retrieve the relevant top-K candidate items.\n",
    "- **Filtering:** This step excludes items the user has already interacted with, or undesirable items, from the candidate set, or applies business logic rules. Although this is an important step, we skip it in this example.\n",
    "- **Scoring:** This is also known as ranking. Here the retrieved and filtered candidate items are scored. We are going to train a ranking model to use at the scoring step.\n",
    "- **Ordering:** At this stage, we order the final set of items that we want to recommend to the user. Here, we are able to align the output of the model with business needs, constraints, or criteria.\n",
    "\n",
    "To learn more about the four-stage recommender systems, you can listen to Even Oldridge's [Moving Beyond Recommender Models talk](https://www.youtube.com/watch?v=5qjiY-kLwFY&list=PL65MqKWg6XcrdN4TJV0K1PdLhF_Uq-b43&index=7) at KDD'21 and read more [in this blog post](https://eugeneyan.com/writing/system-design-for-discovery/)."
   ]
  },
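  {
   "cell_type": "markdown",
   "id": "3f2a9c1e-7b4d-4e5a-9c0d-1a2b3c4d5e6f",
   "metadata": {},
   "source": [
    "As a rough mental model (not part of the Merlin pipeline itself), the four stages can be sketched as a toy pure-Python pipeline. All of the data and the scoring function below are made up for illustration; the real retrieval and ranking models are trained later in this notebook.\n",
    "\n",
    "```python\n",
    "# Toy four-stage pipeline sketch; all values here are made up.\n",
    "catalog = {i: (i % 7, i % 5) for i in range(1000)}  # item_id -> toy embedding\n",
    "user_emb = (3, 2)                                   # toy user embedding\n",
    "already_seen = {3, 10, 17}\n",
    "\n",
    "def dot(u, v):\n",
    "    return sum(a * b for a, b in zip(u, v))\n",
    "\n",
    "# 1. Retrieval: narrow the full catalog down to a small candidate set\n",
    "candidates = sorted(catalog, key=lambda i: dot(user_emb, catalog[i]), reverse=True)[:100]\n",
    "\n",
    "# 2. Filtering: drop items the user has already interacted with\n",
    "candidates = [i for i in candidates if i not in already_seen]\n",
    "\n",
    "# 3. Scoring (ranking): score the remaining candidates with a (toy) ranking model\n",
    "scored = {i: dot(user_emb, catalog[i]) + 0.1 * (i % 3) for i in candidates}\n",
    "\n",
    "# 4. Ordering: order by score and keep the final recommendation list\n",
    "recommendations = sorted(scored, key=scored.get, reverse=True)[:10]\n",
    "```"
   ]
  },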
  {
   "cell_type": "markdown",
   "id": "e26f3194-9f17-4fa7-8baa-14333f2a122a",
   "metadata": {},
   "source": [
    "### Learning objectives\n",
    "- Understanding the four stages of recommender systems (this notebook)\n",
    "- Training retrieval and ranking models with Merlin Models (this notebook)\n",
    "- Setting up a feature store library (this notebook)\n",
    "- Exporting user and item embeddings to be used in retrieving recommendation candidates (this notebook)\n",
    "- Setting up Milvus as an approximate nearest neighbor (ANN) search library (second notebook)\n",
    "- Deploying trained models to Triton Inference Server with Merlin Systems (second notebook)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58d8bd1f-fa29-4d4b-a320-c76538f2302f",
   "metadata": {},
   "source": [
    "In addition to NVIDIA Merlin libraries and the Triton Inference Server client library, we use two external libraries in this series of examples:\n",
    "\n",
    "- [Feast](https://docs.feast.dev/): an end-to-end open source feature store library for machine learning\n",
    "- [Milvus](https://github.com/matrixji/python-milvus-server): an open-source vector database built for efficient similarity search over dense vectors\n",
    "\n",
    "You can find more information about the `Feast` feature store and `Milvus` libraries in the next notebook."
   ]
  },
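  {
   "cell_type": "markdown",
   "id": "b7c1d2e3-4f5a-4b6c-8d7e-9f0a1b2c3d4e",
   "metadata": {},
   "source": [
    "To make the role of Milvus concrete: given a query (user) embedding, nearest-neighbor search returns the items whose embeddings are closest to it, and an ANN index trades a little exactness for much better speed at scale. The exact brute-force search that an ANN index approximates can be sketched in plain Python; the embeddings below are toy values, not outputs of any real model.\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# Toy item embeddings (item_id -> vector); real ones come from the item tower.\n",
    "item_embs = {1: [1.0, 0.0], 2: [0.9, 0.1], 3: [0.0, 1.0], 4: [-1.0, 0.0]}\n",
    "\n",
    "def cosine(u, v):\n",
    "    num = sum(a * b for a, b in zip(u, v))\n",
    "    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))\n",
    "    return num / den\n",
    "\n",
    "def exact_topk(query, k=2):\n",
    "    # Brute-force nearest-neighbor search: score every item, keep the top-k.\n",
    "    # An ANN index (as in Milvus) avoids scoring every item at query time.\n",
    "    ranked = sorted(item_embs, key=lambda i: cosine(query, item_embs[i]), reverse=True)\n",
    "    return ranked[:k]\n",
    "\n",
    "print(exact_topk([1.0, 0.05]))  # -> [1, 2]\n",
    "```"
   ]
  },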
  {
   "cell_type": "markdown",
   "id": "46b7f3bd",
   "metadata": {},
   "source": [
    "### Import required libraries and functions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c1586d8-e5a6-40c3-b6bb-61a3e62fa34c",
   "metadata": {},
   "source": [
    "**Compatibility:**\n",
    "\n",
    "This notebook is developed and tested using the latest `merlin-tensorflow` container from the NVIDIA NGC catalog. To find the tag for the most recently-released container, refer to the [Merlin TensorFlow](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-tensorflow) page."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a57aaf7-2a23-4740-98bb-253450ddc8c4",
   "metadata": {},
   "source": [
    "Also install the Feast and Milvus libraries as shown below. Feast is used in this notebook, and Milvus is used in the next notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2cd8cc8d-5cc7-4a9f-91e5-3deec6f1fe74",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# for running this example, install the following version of the Feast library\n",
    "%pip install \"feast==0.31\"\n",
    "\n",
    "# The second notebook will use Milvus server and pymilvus, which can be installed as follows:\n",
    "%pip install milvus\n",
    "%pip install pymilvus"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bcea3bb-6b69-469e-bef5-9ebb63572e10",
   "metadata": {},
   "source": [
    "Next, import other required libraries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "08cdbfcc",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-01 20:02:24.661276: I tensorflow/core/platform/cpu_feature_guard.cc:183] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
      "To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
      "/usr/local/lib/python3.8/dist-packages/merlin/dtypes/mappings/torch.py:43: UserWarning: PyTorch dtype mappings did not load successfully due to an error: No module named 'torch'\n",
      "  warn(f\"PyTorch dtype mappings did not load successfully due to an error: {exc.msg}\")\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.data_structures has been moved to tensorflow.python.trackable.data_structures. The old module will be deleted in version 2.11.\n",
      "[INFO]: sparse_operation_kit is imported\n",
      "WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11.\n",
      "[SOK INFO] Import /usr/local/lib/python3.8/dist-packages/merlin_sok-1.2.0-py3.8-linux-x86_64.egg/sparse_operation_kit/lib/libsok_experiment.so\n",
      "[SOK INFO] Import /usr/local/lib/python3.8/dist-packages/merlin_sok-1.2.0-py3.8-linux-x86_64.egg/sparse_operation_kit/lib/libsok_experiment.so\n",
      "[SOK INFO] Initialize finished, communication tool: horovod\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-01 20:02:34.071549: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
      "2023-07-01 20:02:34.071623: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:226] Using CUDA malloc Async allocator for GPU: 0\n",
      "2023-07-01 20:02:34.071869: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1638] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 16255 MB memory:  -> device: 0, name: Tesla V100-SXM2-32GB-LS, pci bus id: 0000:85:00.0, compute capability: 7.0\n",
      "/usr/local/lib/python3.8/dist-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "# for running this example on CPU, comment out the line below\n",
    "os.environ[\"TF_GPU_ALLOCATOR\"] = \"cuda_malloc_async\"\n",
    "\n",
    "import nvtabular as nvt\n",
    "from nvtabular.ops import Rename, Filter, Dropna, LambdaOp, Categorify, \\\n",
    "    TagAsUserFeatures, TagAsUserID, TagAsItemFeatures, TagAsItemID, AddMetadata\n",
    "\n",
    "from merlin.schema.tags import Tags\n",
    "\n",
    "import merlin.models.tf as mm\n",
    "from merlin.io.dataset import Dataset\n",
    "from merlin.datasets.ecommerce import transform_aliccp\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "028a1398-76a8-4998-97d8-34a806e130d3",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# disable log messages at WARNING severity and below (INFO, DEBUG)\n",
    "import logging\n",
    "\n",
    "logging.disable(logging.WARNING)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "baad8ae3",
   "metadata": {},
   "source": [
    "In this example notebook, we use the YooChoose dataset that is publicly available [here](https://www.kaggle.com/datasets/chadgostopp/recsys-challenge-2015). Due to licensing rules, you must download the file `yoochoose-clicks.dat` yourself and save it in a local folder, then set `DATA_FOLDER` in the next cell to point to that folder. Once the original dataset has been processed (in the next few cells), it is exported to the same folder as `yoochoose-clicks-milvus.dat`, so that future runs of this notebook can load the modified dataset instead of rebuilding it.\n",
    "\n",
    "Also define the `BASE_DIR` path as your feature store repo path."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "81ddb370",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# set up the data folder that contains the yoochoose data\n",
    "DATA_FOLDER = os.environ.get(\"DATA_FOLDER\", \"/workspace/data/\")\n",
    "# set up the base dir for feature store\n",
    "BASE_DIR = os.environ.get(\"BASE_DIR\", \"/workspace/data/fstore_milvus/\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a746a3f-1845-4af3-8a37-1b34aa1bb81b",
   "metadata": {},
   "source": [
    "Next, we read the YooChoose data from its previously downloaded location and do the following:\n",
    "- rename `session_id` to `user_id` and `category` to `item_category`\n",
    "- add two new columns, `user_age` and `click`, initializing the first with random values and the second with the value 1\n",
    "\n",
    "If the cell below was previously executed and the modified YooChoose dataset was already exported to a Parquet file (`yoochoose-clicks-milvus.dat`), the cell simply loads the dataset from that file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "b747120c-bc24-4ded-860d-1657e3cf7642",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "   user_id           timestamp    item_id  item_category  user_age  click\n",
      "0     5671 2014-04-01 09:57:29  214820413              0        50      1\n",
      "1     5671 2014-04-01 10:12:34  214820383              0        50      1\n",
      "2     5669 2014-04-05 12:25:01  214832760              0        37      1\n",
      "3     5669 2014-04-05 12:25:27  214832760              0        37      1\n",
      "4     5669 2014-04-05 12:32:25  214697825              0        37      1\n",
      "Number of unique users:  9249729\n",
      "Number of unique items:  52739\n"
     ]
    }
   ],
   "source": [
    "import cudf\n",
    "import random\n",
    "import pandas as pd\n",
    "\n",
    "data_file = os.path.join(DATA_FOLDER, \"yoochoose-clicks-milvus.dat\")\n",
    "if os.path.exists(data_file):\n",
    "    gdf = cudf.read_parquet(data_file)\n",
    "else:\n",
    "    DATA_PATH = os.path.join(DATA_FOLDER, 'yoochoose-clicks.dat')\n",
    "    gdf = cudf.read_csv(DATA_PATH, sep=',', names=['session_id','timestamp', 'item_id', 'category'], dtype=['int', 'datetime64[s]', 'int', 'int'])\n",
    "\n",
    "    # rename two existing columns\n",
    "    gdf.rename(columns={\"session_id\": \"user_id\", \"category\": \"item_category\"}, inplace=True)\n",
    "\n",
    "    # add two new columns and initialize with random values\n",
    "    random.seed(5)\n",
    "\n",
    "    # get unique user_id's to assign a random age to each user\n",
    "    gdf2 = gdf.drop_duplicates(subset=['user_id'])\n",
    "    gdf2.drop(labels=[\"timestamp\",\"item_id\",\"item_category\"], axis=1, inplace=True)\n",
    "    rr = [random.randint(18,75) for _ in range(gdf2.shape[0])]\n",
    "    gdf2[\"user_age\"] = rr\n",
    "    gdf = gdf.merge(gdf2, on=['user_id'], how='left')\n",
    "    del(gdf2)\n",
    "    del(rr)\n",
    "\n",
    "    # add \"click\" as a target column and initialize it with the value 1\n",
    "    # all yoochoose rows are positive samples, but a target column is needed in the workflow below\n",
    "    gdf[\"click\"] = 1\n",
    "    \n",
    "    # write to parquet file\n",
    "    gdf.to_parquet(data_file)\n",
    "    \n",
    "print(gdf.head())\n",
    "print(\"Number of unique users: \", gdf.user_id.nunique())\n",
    "print(\"Number of unique items: \", gdf.item_id.nunique())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49453d1c-f7bf-4c25-a222-f7655bd897e8",
   "metadata": {},
   "source": [
    "Next, sort the user interactions by timestamp, and split the resulting dataset by time into 80-20 train and validation sets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "739afb6f-99f8-4b1a-b3dd-36553caaaa54",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "gdf = gdf.sort_values(\"timestamp\")\n",
    "nsize = int(gdf.shape[0]*0.8)        # 80-20 split (top 80% is train, bottom 20% is validation)\n",
    "train_raw = Dataset(gdf[:nsize][:])\n",
    "valid_raw = Dataset(gdf[nsize:][:])\n",
    "del(gdf)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "1075644f-1b23-4e7e-a251-40fc2583df87",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "7305761 49008\n"
     ]
    }
   ],
   "source": [
    "df = train_raw.compute()\n",
    "print(df.user_id.nunique(), df.item_id.nunique())\n",
    "del(df)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "13c38b82-a7a1-483f-9e47-12489ef553bd",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "619"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import gc\n",
    "gc.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e428d01-f2f0-42d4-85d0-0986bb83a847",
   "metadata": {},
   "source": [
    "### Feature Engineering with NVTabular"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "d4bf870c-30cf-4074-88d3-b75981b3a873",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "output_path = os.path.join(DATA_FOLDER, \"processed_nvt\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e7bfb5c-88ed-4cf9-8a17-98c0284adb36",
   "metadata": {},
   "source": [
    "In the following NVTabular workflow, notice that we apply the `Dropna()` operator at the end to remove rows with missing values from the final DataFrame after the preceding transformations. Although the dataset we use in this notebook has no null entries, your own custom dataset might contain nulls in the `user_id` and `item_id` columns. By applying `Dropna()`, we avoid registering null `user_id_raw` and `item_id_raw` values in the feature store, and we sidestep potential issues that null entries can cause."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "f91ada78-4e4d-4415-ab94-e351aa454e9e",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "user_id_raw = [\"user_id\"] >> Rename(postfix='_raw') >> LambdaOp(lambda col: col.astype(\"int32\")) >> TagAsUserFeatures()\n",
    "item_id_raw = [\"item_id\"] >> Rename(postfix='_raw') >> LambdaOp(lambda col: col.astype(\"int32\")) >> TagAsItemFeatures()\n",
    "\n",
    "user_id = [\"user_id\"] >> Categorify(dtype=\"int32\") >> TagAsUserID()\n",
    "item_id = [\"item_id\"] >> Categorify(dtype=\"int32\") >> TagAsItemID()\n",
    "\n",
    "item_features = (\n",
    "    [\"item_category\"] >> Categorify(dtype=\"int32\") >> TagAsItemFeatures()\n",
    ")\n",
    "\n",
    "user_features = (\n",
    "    [\"user_age\"] >> Categorify(dtype=\"int32\") >> TagAsUserFeatures()\n",
    ")\n",
    "\n",
    "targets = [\"click\"] >> AddMetadata(tags=[Tags.BINARY_CLASSIFICATION, \"target\"])\n",
    "\n",
    "outputs = user_id + item_id + item_features + user_features + user_id_raw + item_id_raw + targets\n",
    "\n",
    "# add dropna op to filter rows with nulls\n",
    "outputs = outputs >> Dropna()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "71aae006-a161-4127-889a-8f433a9f7362",
   "metadata": {},
   "source": [
    "Next, we perform the `fit` and `transform` steps on the raw dataset, applying the operators defined in the NVTabular workflow pipeline below. After fit and transform, the processed Parquet files are saved to `output_path`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "ca39eae4-693f-4ed0-9692-4818c32406d5",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Generate statistics for the features and export parquet files\n",
    "# this step will generate the schema file\n",
    "workflow = nvt.Workflow(outputs)\n",
    "workflow.fit_transform(train_raw).to_parquet(os.path.join(output_path, \"train\"))\n",
    "workflow.transform(valid_raw).to_parquet(os.path.join(output_path, \"valid\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09c87748-af61-42b8-8574-1afe3d71118f",
   "metadata": {},
   "source": [
    "### Training a Retrieval Model with Two-Tower Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e644fcba-7b0b-44c0-97fd-80f4fcb01191",
   "metadata": {},
   "source": [
    "We start with the offline candidate retrieval stage. We are going to train a Two-Tower model for item retrieval. To learn more about the Two-Tower model, you can visit [05-Retrieval-Model.ipynb](https://github.com/NVIDIA-Merlin/models/blob/main/examples/05-Retrieval-Model.ipynb)."
   ]
  },
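  {
   "cell_type": "markdown",
   "id": "c9d8e7f6-5a4b-4c3d-9e2f-1a0b9c8d7e6f",
   "metadata": {},
   "source": [
    "The core idea, before we build the real model: a Two-Tower model maps users and items into a shared embedding space with two separate MLP towers, and scores a user-item pair by the dot product of the two embeddings (during training, the other items in the batch serve as negatives, which is what `InBatchSampler` provides). A minimal sketch with made-up tower outputs:\n",
    "\n",
    "```python\n",
    "# Made-up tower outputs; in the real model these come from the\n",
    "# query (user) tower and the item tower.\n",
    "user_vec = [0.2, -0.1, 0.7]\n",
    "item_vecs = {'item_a': [0.1, 0.0, 0.9], 'item_b': [-0.5, 0.4, 0.1]}\n",
    "\n",
    "def score(user, item):\n",
    "    # Two-Tower scoring: dot product of user and item embeddings\n",
    "    return sum(u * i for u, i in zip(user, item))\n",
    "\n",
    "scores = {name: score(user_vec, vec) for name, vec in item_vecs.items()}\n",
    "best = max(scores, key=scores.get)\n",
    "print(best)  # -> item_a\n",
    "```"
   ]
  },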
  {
   "cell_type": "markdown",
   "id": "cf9bca46-a6b6-4a73-afd8-fe2869c60748",
   "metadata": {},
   "source": [
    "#### Feature Engineering with NVTabular"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "da2b09cc-09fb-4814-a1cb-7e6168d9eb4b",
   "metadata": {},
   "source": [
    "We are going to process our raw categorical features by encoding them with the `Categorify()` operator and tagging the features with `user` or `item` tags in the schema file. To learn more about [NVTabular](https://github.com/NVIDIA-Merlin/NVTabular) and the schema object, visit this example [notebook](https://github.com/NVIDIA-Merlin/models/blob/main/examples/02-Merlin-Models-and-NVTabular-integration.ipynb) in the Merlin Models repo."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3bc7abd-8d97-452b-a4af-5227821a99c9",
   "metadata": {},
   "source": [
    "Define a new output path to store the filtered datasets and schema files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "df72a793-194b-44f4-80c3-aaa368a9a01e",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "output_path2 = os.path.join(DATA_FOLDER, \"processed/retrieval\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "251d4697-8f9c-4c93-8de4-c3480a8378de",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "train_tt = Dataset(os.path.join(output_path, \"train\", \"*.parquet\"))\n",
    "valid_tt = Dataset(os.path.join(output_path, \"valid\", \"*.parquet\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffd7e2ac-a251-49d0-943b-e9272c852ba6",
   "metadata": {},
   "source": [
    "We select only the positive interaction rows, where `click==1`, using the `Filter()` operator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "7e085a6d-74ad-4c24-8e7c-4e449c15f471",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "inputs = train_tt.schema.column_names\n",
    "outputs = inputs >> Filter(f=lambda df: df[\"click\"] == 1)\n",
    "\n",
    "workflow2 = nvt.Workflow(outputs)\n",
    "\n",
    "workflow2.fit(train_tt)\n",
    "\n",
    "workflow2.transform(train_tt).to_parquet(\n",
    "    output_path=os.path.join(output_path2, \"train\")\n",
    ")\n",
    "\n",
    "workflow2.transform(valid_tt).to_parquet(\n",
    "    output_path=os.path.join(output_path2, \"valid\")\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc4721ae-7228-4d3f-9586-dcdfefecc19f",
   "metadata": {},
   "source": [
    "NVTabular exported the schema of our processed dataset to `schema.pbtxt`, a protobuf text file. To learn more about the schema object and the schema file, you can explore the [02-Merlin-Models-and-NVTabular-integration.ipynb](https://github.com/NVIDIA-Merlin/models/blob/main/examples/02-Merlin-Models-and-NVTabular-integration.ipynb) notebook."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aa025b80-0f18-437c-a85f-4edcb89f4222",
   "metadata": {},
   "source": [
    "**Read filtered parquet files as Dataset objects.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "252a8e60-b447-46b5-ade6-3557cbafa797",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "train_tt = Dataset(os.path.join(output_path2, \"train\", \"*.parquet\"), part_size=\"500MB\")\n",
    "valid_tt = Dataset(os.path.join(output_path2, \"valid\", \"*.parquet\"), part_size=\"500MB\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "71063653-2f39-4b54-8399-145d6f281d4d",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "schema = train_tt.schema.select_by_tag([Tags.ITEM_ID, Tags.USER_ID, Tags.ITEM, Tags.USER]).without(['user_id_raw', 'item_id_raw', 'click'])\n",
    "train_tt.schema = schema\n",
    "valid_tt.schema = schema"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "9312511a-f368-42f2-93d2-eb95aebbf46c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "model_tt = mm.TwoTowerModel(\n",
    "    schema,\n",
    "    query_tower=mm.MLPBlock([128, 64], no_activation_last_layer=True),\n",
    "    samplers=[mm.InBatchSampler()],\n",
    "    embedding_options=mm.EmbeddingOptions(infer_embedding_sizes=True),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "4d47cb8b-e06a-4932-9a19-fb244ef43152",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.8/dist-packages/keras/initializers/initializers.py:120: UserWarning: The initializer TruncatedNormal is unseeded and being called multiple times, which will return identical values each time (even if the initializer is unseeded). Please update your code to provide a seed to the initializer, or avoid using the same initalizer instance more than once.\n",
      "  warnings.warn(\n",
      "2023-07-01 20:05:10.621876: I tensorflow/core/common_runtime/executor.cc:1209] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype int32\n",
      "\t [[{{node Placeholder/_0}}]]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "25785/25785 [==============================] - ETA: 0s - loss: 5.5750 - recall_at_10: 0.1765 - ndcg_at_10: 0.1104 - regularization_loss: 0.0000e+00 - loss_batch: 5.5750"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-01 21:47:47.495805: I tensorflow/core/common_runtime/executor.cc:1209] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype int32\n",
      "\t [[{{node Placeholder/_0}}]]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "25785/25785 [==============================] - 6203s 240ms/step - loss: 5.5750 - recall_at_10: 0.1765 - ndcg_at_10: 0.1104 - regularization_loss: 0.0000e+00 - loss_batch: 5.5749 - val_loss: 7.0045 - val_recall_at_10: 0.0101 - val_ndcg_at_10: 0.0049 - val_regularization_loss: 0.0000e+00 - val_loss_batch: 4.1984\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7ef9e8223ca0>"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model_tt.compile(\n",
    "    optimizer=\"adam\",\n",
    "    run_eagerly=False,\n",
    "    loss=\"categorical_crossentropy\",\n",
    "    metrics=[mm.RecallAt(10), mm.NDCGAt(10)],\n",
    ")\n",
    "model_tt.fit(train_tt, validation_data=valid_tt, batch_size=1024, epochs=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80d83007-f9e8-408f-9f65-a0e9e19cb586",
   "metadata": {},
   "source": [
    "### Exporting query (user) model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "22af58a9-5525-454a-bf25-a9df0462aa53",
   "metadata": {},
   "source": [
    "We export the query tower to use it later during the model deployment stage with Merlin Systems."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "d2370f13-ff9a-4ee0-ba1e-451c7bec0f8a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "query_tower = model_tt.retrieval_block.query_block()\n",
    "query_tower.save(os.path.join(BASE_DIR, \"query_tower\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e16401d4",
   "metadata": {
    "tags": []
   },
   "source": [
    "### Training a Ranking Model with DLRM"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b72e8a2a-fc4a-43ab-934c-6d941c56aad2",
   "metadata": {},
   "source": [
    "Now we move on to training an offline ranking model. This ranking model will be used to score our retrieved items."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c4f2b234",
   "metadata": {},
   "source": [
    "Read processed parquet files. We use the `schema` object to define our model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "cb870461-6ac2-49b2-ba6a-2da6ecb57f1d",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# define train and valid dataset objects\n",
    "train = Dataset(os.path.join(output_path, \"train\", \"*.parquet\"), part_size=\"500MB\")\n",
    "valid = Dataset(os.path.join(output_path, \"valid\", \"*.parquet\"), part_size=\"500MB\")\n",
    "\n",
    "# define schema object\n",
    "schema = train.schema.without(['user_id_raw', 'item_id_raw'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "30e4ebc2",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'click'"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "target_column = schema.select_by_tag(Tags.TARGET).column_names[0]\n",
    "target_column"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8f68e26b",
   "metadata": {},
   "source": [
    "The Deep Learning Recommendation Model [(DLRM)](https://arxiv.org/abs/1906.00091) is a popular neural network architecture originally proposed by Facebook in 2019. It was introduced as a personalization deep learning model that uses embeddings to process sparse features representing categorical data and a multilayer perceptron (MLP) to process dense features, then interacts these features explicitly using the statistical techniques proposed [here](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5694074). To learn more about the DLRM architecture, please visit this [notebook](https://github.com/NVIDIA-Merlin/models/blob/main/examples/04-Exporting-ranking-models.ipynb) in the Merlin Models GitHub repo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "e4325080",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "model = mm.DLRMModel(\n",
    "    schema,\n",
    "    embedding_dim=64,\n",
    "    bottom_block=mm.MLPBlock([128, 64]),\n",
    "    top_block=mm.MLPBlock([128, 64, 32]),\n",
    "    prediction_tasks=mm.BinaryClassificationTask(target_column),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "bfe2aa9e",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-01 21:48:46.622138: I tensorflow/core/common_runtime/executor.cc:1209] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype int32\n",
      "\t [[{{node Placeholder/_0}}]]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "25785/25785 [==============================] - ETA: 0s - loss: 6.4838e-04 - auc: 0.0000e+00 - regularization_loss: 0.0000e+00 - loss_batch: 6.4838e-04"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-01 23:47:00.253619: I tensorflow/core/common_runtime/executor.cc:1209] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype int32\n",
      "\t [[{{node Placeholder/_0}}]]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "25785/25785 [==============================] - 7123s 276ms/step - loss: 6.4838e-04 - auc: 0.0000e+00 - regularization_loss: 0.0000e+00 - loss_batch: 6.4836e-04 - val_loss: 8.3344e-14 - val_auc: 0.0000e+00 - val_regularization_loss: 0.0000e+00 - val_loss_batch: 4.3610e-14\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7ef9e2fa6a90>"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.compile(optimizer=\"adam\", run_eagerly=False, metrics=[tf.keras.metrics.AUC()])\n",
    "model.fit(train, validation_data=valid, batch_size=1024)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "498c4d49-7a59-4260-87b9-b86b66f2c67f",
   "metadata": {},
   "source": [
    "Let's save our DLRM model so that we can load it back at the deployment stage."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "00447c12-ea80-4d98-ab47-cc1a982a6958",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "model.save(os.path.join(BASE_DIR, \"dlrm\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d64a3f3f-81d8-489c-835f-c62f76df22d5",
   "metadata": {},
   "source": [
    "In the following cells we are going to export the required user and item feature files, and save the query (user) tower model and item embeddings to disk. If you want to read more about exporting retrieval models, please visit the [05-Retrieval-Model.ipynb](https://github.com/NVIDIA-Merlin/models/blob/main/examples/05-Retrieval-Model.ipynb) notebook in the Merlin Models library repo."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5da1f434-f5a1-4478-b588-7e7ec17e6a88",
   "metadata": {},
   "source": [
    "### Set up a feature store with Feast"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "99a4e939-d3cf-44f0-9012-d2af3264ee25",
   "metadata": {},
   "source": [
    "Before we move on to the next step, we need to create a Feast feature repository. [Feast](https://feast.dev/) is an end-to-end open source feature store for machine learning: a customizable operational data system that re-uses existing infrastructure to manage and serve machine learning features to real-time models.\n",
    "\n",
    "We will create the feature repo in the current working directory, which is `BASE_DIR` for us."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "2e7e96d2-9cd2-40d1-b356-8cd76b57bb4a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Feast is an open source project that collects anonymized error reporting and usage statistics. To opt out or learn more see https://docs.feast.dev/reference/usage\n",
      "\n",
      "Creating a new Feast repository in \u001b[1m\u001b[32m/workspace/data/fstore_milvus/feast_repo\u001b[0m.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "!rm -rf $BASE_DIR/feast_repo\n",
    "!cd $BASE_DIR && feast init feast_repo"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5e630e53-8336-487a-9ceb-133b1538acfb",
   "metadata": {},
   "source": [
    "You should see a message like <i>Creating a new Feast repository in ... </i> printed out above. Now, navigate to the `feature_repo` folder and remove the demo parquet file and the `example_repo.py` file created by default."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "26ba2521-ed1b-4c2b-afdd-26b4a5a9c008",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "feature_repo_path = os.path.join(BASE_DIR, \"feast_repo/feature_repo\")\n",
    "if os.path.exists(f\"{feature_repo_path}/example_repo.py\"):\n",
    "    os.remove(f\"{feature_repo_path}/example_repo.py\")\n",
    "if os.path.exists(f\"{feature_repo_path}/data/driver_stats.parquet\"):\n",
    "    os.remove(f\"{feature_repo_path}/data/driver_stats.parquet\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "78315676-eb6c-405a-b1fd-3174ea328406",
   "metadata": {},
   "source": [
    "### Exporting user and item features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "ea0b369c-2f01-42e3-9f3c-74c3ff4a6d64",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from merlin.models.utils.dataset import unique_rows_by_features\n",
    "\n",
    "user_features = (\n",
    "    unique_rows_by_features(train, Tags.USER, Tags.USER_ID)\n",
    "    .compute()\n",
    "    .reset_index(drop=True)\n",
    ")"
   ]
  },
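  {
   "cell_type": "markdown",
   "id": "b2c4d6e8",
   "metadata": {},
   "source": [
    "Conceptually, `unique_rows_by_features` keeps one row per user containing only the user-level features. A minimal pandas sketch of the same idea, using a made-up toy frame rather than the real dataset, would be:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy interaction log; column names mirror the notebook, values are invented\n",
    "interactions = pd.DataFrame({\n",
    "    \"user_id\": [3, 3, 4, 4, 5],\n",
    "    \"user_age\": [42, 42, 7, 7, 23],\n",
    "    \"item_id\": [10, 11, 10, 12, 13],\n",
    "})\n",
    "\n",
    "# One row per user with only the user-level columns\n",
    "user_feats = (\n",
    "    interactions[[\"user_id\", \"user_age\"]]\n",
    "    .drop_duplicates(subset=\"user_id\")\n",
    "    .reset_index(drop=True)\n",
    ")\n",
    "```"
   ]
  },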
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "6b0949f9-e67a-414f-9d74-65f138e820a8",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user_id</th>\n",
       "      <th>user_age</th>\n",
       "      <th>user_id_raw</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>3</td>\n",
       "      <td>42</td>\n",
       "      <td>189448</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4</td>\n",
       "      <td>7</td>\n",
       "      <td>515537</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>5</td>\n",
       "      <td>23</td>\n",
       "      <td>825463</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>6</td>\n",
       "      <td>8</td>\n",
       "      <td>881789</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>1026667</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   user_id  user_age  user_id_raw\n",
       "0        3        42       189448\n",
       "1        4         7       515537\n",
       "2        5        23       825463\n",
       "3        6         8       881789\n",
       "4        7        11      1026667"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "user_features.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4a46bd8c-1337-4c74-a85b-25348a897d90",
   "metadata": {},
   "source": [
    "We will artificially add `datetime` and `created` timestamp columns to our `user_features` dataframe. These are required by Feast to track the user-item features and their creation times, and to determine which feature version to use when we query Feast."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "d30bd2f8-8a78-4df7-9bc4-42bd741c5b99",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "\n",
    "user_features[\"datetime\"] = datetime.now()\n",
    "user_features[\"datetime\"] = user_features[\"datetime\"].astype(\"datetime64[ns]\")\n",
    "user_features[\"created\"] = datetime.now()\n",
    "user_features[\"created\"] = user_features[\"created\"].astype(\"datetime64[ns]\")"
   ]
  },
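  {
   "cell_type": "markdown",
   "id": "c3d5e7f9",
   "metadata": {},
   "source": [
    "Feast uses these timestamps for point-in-time correctness: when several versions of a feature row exist for the same entity, the most recently created one is served. A rough pandas illustration of that selection rule, with hypothetical values rather than Feast's internal code:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Two versions of the same user's features with different creation times\n",
    "rows = pd.DataFrame({\n",
    "    \"user_id\": [3, 3],\n",
    "    \"user_age\": [41, 42],\n",
    "    \"created\": pd.to_datetime([\"2023-07-01\", \"2023-07-02\"]),\n",
    "})\n",
    "\n",
    "# Keep the most recently created row per user, mimicking how a feature\n",
    "# store resolves which version of a feature to serve\n",
    "latest = rows.sort_values(\"created\").groupby(\"user_id\").tail(1)\n",
    "```"
   ]
  },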
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "d4998cd1-9dcd-4911-8f23-372e197b41e9",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user_id</th>\n",
       "      <th>user_age</th>\n",
       "      <th>user_id_raw</th>\n",
       "      <th>datetime</th>\n",
       "      <th>created</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>3</td>\n",
       "      <td>42</td>\n",
       "      <td>189448</td>\n",
       "      <td>2023-07-02 01:11:27.753645</td>\n",
       "      <td>2023-07-02 01:11:27.762124</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4</td>\n",
       "      <td>7</td>\n",
       "      <td>515537</td>\n",
       "      <td>2023-07-02 01:11:27.753645</td>\n",
       "      <td>2023-07-02 01:11:27.762124</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>5</td>\n",
       "      <td>23</td>\n",
       "      <td>825463</td>\n",
       "      <td>2023-07-02 01:11:27.753645</td>\n",
       "      <td>2023-07-02 01:11:27.762124</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>6</td>\n",
       "      <td>8</td>\n",
       "      <td>881789</td>\n",
       "      <td>2023-07-02 01:11:27.753645</td>\n",
       "      <td>2023-07-02 01:11:27.762124</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>1026667</td>\n",
       "      <td>2023-07-02 01:11:27.753645</td>\n",
       "      <td>2023-07-02 01:11:27.762124</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   user_id  user_age  user_id_raw                   datetime  \\\n",
       "0        3        42       189448 2023-07-02 01:11:27.753645   \n",
       "1        4         7       515537 2023-07-02 01:11:27.753645   \n",
       "2        5        23       825463 2023-07-02 01:11:27.753645   \n",
       "3        6         8       881789 2023-07-02 01:11:27.753645   \n",
       "4        7        11      1026667 2023-07-02 01:11:27.753645   \n",
       "\n",
       "                     created  \n",
       "0 2023-07-02 01:11:27.762124  \n",
       "1 2023-07-02 01:11:27.762124  \n",
       "2 2023-07-02 01:11:27.762124  \n",
       "3 2023-07-02 01:11:27.762124  \n",
       "4 2023-07-02 01:11:27.762124  "
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "user_features.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "2981b3ed-6156-49f0-aa14-326a3853a58a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "user_features.to_parquet(os.path.join(feature_repo_path, \"data\", \"user_features.parquet\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "0a33a668-8e2a-4546-8f54-0060d405ba91",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "item_features = (\n",
    "    unique_rows_by_features(train, Tags.ITEM, Tags.ITEM_ID)\n",
    "    .compute()\n",
    "    .reset_index(drop=True)\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "68a694d6-926f-4b0f-8edc-8cc7ac85ade7",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "item_features[\"datetime\"] = datetime.now()\n",
    "item_features[\"datetime\"] = item_features[\"datetime\"].astype(\"datetime64[ns]\")\n",
    "item_features[\"created\"] = datetime.now()\n",
    "item_features[\"created\"] = item_features[\"created\"].astype(\"datetime64[ns]\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "6c03fa22-b112-4243-bbe1-1cd7260cb85b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>item_id</th>\n",
       "      <th>item_category</th>\n",
       "      <th>item_id_raw</th>\n",
       "      <th>datetime</th>\n",
       "      <th>created</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>3</td>\n",
       "      <td>3</td>\n",
       "      <td>643078800</td>\n",
       "      <td>2023-07-02 01:11:42.755217</td>\n",
       "      <td>2023-07-02 01:11:42.757653</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4</td>\n",
       "      <td>3</td>\n",
       "      <td>214829878</td>\n",
       "      <td>2023-07-02 01:11:42.755217</td>\n",
       "      <td>2023-07-02 01:11:42.757653</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>5</td>\n",
       "      <td>3</td>\n",
       "      <td>214826610</td>\n",
       "      <td>2023-07-02 01:11:42.755217</td>\n",
       "      <td>2023-07-02 01:11:42.757653</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>6</td>\n",
       "      <td>3</td>\n",
       "      <td>214834880</td>\n",
       "      <td>2023-07-02 01:11:42.755217</td>\n",
       "      <td>2023-07-02 01:11:42.757653</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>214839973</td>\n",
       "      <td>2023-07-02 01:11:42.755217</td>\n",
       "      <td>2023-07-02 01:11:42.757653</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   item_id  item_category  item_id_raw                   datetime  \\\n",
       "0        3              3    643078800 2023-07-02 01:11:42.755217   \n",
       "1        4              3    214829878 2023-07-02 01:11:42.755217   \n",
       "2        5              3    214826610 2023-07-02 01:11:42.755217   \n",
       "3        6              3    214834880 2023-07-02 01:11:42.755217   \n",
       "4        7              3    214839973 2023-07-02 01:11:42.755217   \n",
       "\n",
       "                     created  \n",
       "0 2023-07-02 01:11:42.757653  \n",
       "1 2023-07-02 01:11:42.757653  \n",
       "2 2023-07-02 01:11:42.757653  \n",
       "3 2023-07-02 01:11:42.757653  \n",
       "4 2023-07-02 01:11:42.757653  "
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "item_features.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "c312884b-a1f8-4e08-8068-696e06a9bf46",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# save to disk\n",
    "item_features.to_parquet(os.path.join(feature_repo_path, \"data\", \"item_features.parquet\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff30ceab-b264-4509-9c5b-5a10425e143b",
   "metadata": {},
   "source": [
    "### Extract and save Item embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ea8485a3-eed7-4d3a-90d7-188bfa9301c8",
   "metadata": {},
   "source": [
    "We are now ready to export item and user embeddings for the ANN (approximate nearest neighbor) search stage with the Milvus library."
   ]
  },
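  {
   "cell_type": "markdown",
   "id": "d4e6f8a0",
   "metadata": {},
   "source": [
    "Milvus approximates the nearest-neighbor search over these vectors. The exact (brute-force) computation that an ANN index speeds up can be sketched in NumPy with toy vectors, assuming inner-product similarity:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "item_vectors = rng.normal(size=(1000, 64)).astype(\"float32\")  # toy item embeddings\n",
    "query = rng.normal(size=(64,)).astype(\"float32\")              # toy query-tower output\n",
    "\n",
    "# Exact top-k retrieval by inner product; ANN indexes (IVF, HNSW, ...)\n",
    "# trade a little recall for much lower latency at scale\n",
    "scores = item_vectors @ query\n",
    "top_k = np.argsort(-scores)[:10]\n",
    "```"
   ]
  },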
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "00f1fe65-882e-4962-bb16-19a130fda215",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "item_embs = model_tt.item_embeddings(\n",
    "    Dataset(item_features, schema=schema), batch_size=1024\n",
    ")\n",
    "item_embs_df = item_embs.compute(scheduler=\"synchronous\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "76807727-dd29-42b5-ac65-6a36477fceb8",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>item_id</th>\n",
       "      <th>item_category</th>\n",
       "      <th>0</th>\n",
       "      <th>1</th>\n",
       "      <th>2</th>\n",
       "      <th>3</th>\n",
       "      <th>4</th>\n",
       "      <th>5</th>\n",
       "      <th>6</th>\n",
       "      <th>7</th>\n",
       "      <th>...</th>\n",
       "      <th>54</th>\n",
       "      <th>55</th>\n",
       "      <th>56</th>\n",
       "      <th>57</th>\n",
       "      <th>58</th>\n",
       "      <th>59</th>\n",
       "      <th>60</th>\n",
       "      <th>61</th>\n",
       "      <th>62</th>\n",
       "      <th>63</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>3</td>\n",
       "      <td>3</td>\n",
       "      <td>-0.551053</td>\n",
       "      <td>-2.396144</td>\n",
       "      <td>-0.208827</td>\n",
       "      <td>0.561655</td>\n",
       "      <td>-1.077037</td>\n",
       "      <td>0.873872</td>\n",
       "      <td>-0.243167</td>\n",
       "      <td>-0.403004</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.230633</td>\n",
       "      <td>1.729995</td>\n",
       "      <td>0.300505</td>\n",
       "      <td>2.170915</td>\n",
       "      <td>-0.709685</td>\n",
       "      <td>1.368153</td>\n",
       "      <td>1.259812</td>\n",
       "      <td>-2.248280</td>\n",
       "      <td>3.220146</td>\n",
       "      <td>-0.625717</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4</td>\n",
       "      <td>3</td>\n",
       "      <td>-0.069944</td>\n",
       "      <td>-2.396738</td>\n",
       "      <td>-1.096040</td>\n",
       "      <td>0.835876</td>\n",
       "      <td>0.045193</td>\n",
       "      <td>0.399019</td>\n",
       "      <td>1.231206</td>\n",
       "      <td>0.123315</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.794665</td>\n",
       "      <td>0.190620</td>\n",
       "      <td>-0.472352</td>\n",
       "      <td>0.228886</td>\n",
       "      <td>-1.146539</td>\n",
       "      <td>1.005825</td>\n",
       "      <td>-0.929019</td>\n",
       "      <td>-1.248043</td>\n",
       "      <td>0.153283</td>\n",
       "      <td>-1.011497</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>5</td>\n",
       "      <td>3</td>\n",
       "      <td>-0.286878</td>\n",
       "      <td>0.229818</td>\n",
       "      <td>-0.322053</td>\n",
       "      <td>-0.111830</td>\n",
       "      <td>-0.074048</td>\n",
       "      <td>1.775329</td>\n",
       "      <td>-1.103029</td>\n",
       "      <td>-0.317330</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.913579</td>\n",
       "      <td>0.487137</td>\n",
       "      <td>0.823664</td>\n",
       "      <td>0.722070</td>\n",
       "      <td>0.028665</td>\n",
       "      <td>0.540165</td>\n",
       "      <td>-1.361806</td>\n",
       "      <td>-0.494059</td>\n",
       "      <td>0.643328</td>\n",
       "      <td>-0.832613</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>6</td>\n",
       "      <td>3</td>\n",
       "      <td>-0.096670</td>\n",
       "      <td>0.359706</td>\n",
       "      <td>-0.618411</td>\n",
       "      <td>-0.176869</td>\n",
       "      <td>-0.126849</td>\n",
       "      <td>2.039704</td>\n",
       "      <td>-1.367793</td>\n",
       "      <td>-0.154768</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.855388</td>\n",
       "      <td>0.790234</td>\n",
       "      <td>0.807903</td>\n",
       "      <td>0.798341</td>\n",
       "      <td>0.038054</td>\n",
       "      <td>0.460077</td>\n",
       "      <td>-1.297610</td>\n",
       "      <td>-0.390472</td>\n",
       "      <td>0.959509</td>\n",
       "      <td>-0.782228</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>-0.461784</td>\n",
       "      <td>-0.761904</td>\n",
       "      <td>-1.013378</td>\n",
       "      <td>0.482173</td>\n",
       "      <td>0.015751</td>\n",
       "      <td>0.918302</td>\n",
       "      <td>-0.247184</td>\n",
       "      <td>0.011383</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.045502</td>\n",
       "      <td>0.711631</td>\n",
       "      <td>0.325466</td>\n",
       "      <td>0.951030</td>\n",
       "      <td>-0.393002</td>\n",
       "      <td>1.096497</td>\n",
       "      <td>-0.710226</td>\n",
       "      <td>-0.835890</td>\n",
       "      <td>0.384451</td>\n",
       "      <td>-1.314889</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 66 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   item_id  item_category         0         1         2         3         4  \\\n",
       "0        3              3 -0.551053 -2.396144 -0.208827  0.561655 -1.077037   \n",
       "1        4              3 -0.069944 -2.396738 -1.096040  0.835876  0.045193   \n",
       "2        5              3 -0.286878  0.229818 -0.322053 -0.111830 -0.074048   \n",
       "3        6              3 -0.096670  0.359706 -0.618411 -0.176869 -0.126849   \n",
       "4        7              3 -0.461784 -0.761904 -1.013378  0.482173  0.015751   \n",
       "\n",
       "          5         6         7  ...        54        55        56        57  \\\n",
       "0  0.873872 -0.243167 -0.403004  ... -0.230633  1.729995  0.300505  2.170915   \n",
       "1  0.399019  1.231206  0.123315  ... -0.794665  0.190620 -0.472352  0.228886   \n",
       "2  1.775329 -1.103029 -0.317330  ... -0.913579  0.487137  0.823664  0.722070   \n",
       "3  2.039704 -1.367793 -0.154768  ... -0.855388  0.790234  0.807903  0.798341   \n",
       "4  0.918302 -0.247184  0.011383  ... -0.045502  0.711631  0.325466  0.951030   \n",
       "\n",
       "         58        59        60        61        62        63  \n",
       "0 -0.709685  1.368153  1.259812 -2.248280  3.220146 -0.625717  \n",
       "1 -1.146539  1.005825 -0.929019 -1.248043  0.153283 -1.011497  \n",
       "2  0.028665  0.540165 -1.361806 -0.494059  0.643328 -0.832613  \n",
       "3  0.038054  0.460077 -1.297610 -0.390472  0.959509 -0.782228  \n",
       "4 -0.393002  1.096497 -0.710226 -0.835890  0.384451 -1.314889  \n",
       "\n",
       "[5 rows x 66 columns]"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "item_embs_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "cf8b82ea-6cce-4dab-ad17-114b5e7eabd4",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# select only item_id together with embedding columns\n",
    "item_embeddings = item_embs_df.drop(\n",
    "    columns=[\"item_category\"]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "e02f0957-6665-400a-80c0-60b307466caf",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>item_id</th>\n",
       "      <th>0</th>\n",
       "      <th>1</th>\n",
       "      <th>2</th>\n",
       "      <th>3</th>\n",
       "      <th>4</th>\n",
       "      <th>5</th>\n",
       "      <th>6</th>\n",
       "      <th>7</th>\n",
       "      <th>8</th>\n",
       "      <th>...</th>\n",
       "      <th>54</th>\n",
       "      <th>55</th>\n",
       "      <th>56</th>\n",
       "      <th>57</th>\n",
       "      <th>58</th>\n",
       "      <th>59</th>\n",
       "      <th>60</th>\n",
       "      <th>61</th>\n",
       "      <th>62</th>\n",
       "      <th>63</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>3</td>\n",
       "      <td>-0.551053</td>\n",
       "      <td>-2.396144</td>\n",
       "      <td>-0.208827</td>\n",
       "      <td>0.561655</td>\n",
       "      <td>-1.077037</td>\n",
       "      <td>0.873872</td>\n",
       "      <td>-0.243167</td>\n",
       "      <td>-0.403004</td>\n",
       "      <td>-3.167561</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.230633</td>\n",
       "      <td>1.729995</td>\n",
       "      <td>0.300505</td>\n",
       "      <td>2.170915</td>\n",
       "      <td>-0.709685</td>\n",
       "      <td>1.368153</td>\n",
       "      <td>1.259812</td>\n",
       "      <td>-2.248280</td>\n",
       "      <td>3.220146</td>\n",
       "      <td>-0.625717</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4</td>\n",
       "      <td>-0.069944</td>\n",
       "      <td>-2.396738</td>\n",
       "      <td>-1.096040</td>\n",
       "      <td>0.835876</td>\n",
       "      <td>0.045193</td>\n",
       "      <td>0.399019</td>\n",
       "      <td>1.231206</td>\n",
       "      <td>0.123315</td>\n",
       "      <td>-2.067063</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.794665</td>\n",
       "      <td>0.190620</td>\n",
       "      <td>-0.472352</td>\n",
       "      <td>0.228886</td>\n",
       "      <td>-1.146539</td>\n",
       "      <td>1.005825</td>\n",
       "      <td>-0.929019</td>\n",
       "      <td>-1.248043</td>\n",
       "      <td>0.153283</td>\n",
       "      <td>-1.011497</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>5</td>\n",
       "      <td>-0.286878</td>\n",
       "      <td>0.229818</td>\n",
       "      <td>-0.322053</td>\n",
       "      <td>-0.111830</td>\n",
       "      <td>-0.074048</td>\n",
       "      <td>1.775329</td>\n",
       "      <td>-1.103029</td>\n",
       "      <td>-0.317330</td>\n",
       "      <td>-1.872080</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.913579</td>\n",
       "      <td>0.487137</td>\n",
       "      <td>0.823664</td>\n",
       "      <td>0.722070</td>\n",
       "      <td>0.028665</td>\n",
       "      <td>0.540165</td>\n",
       "      <td>-1.361806</td>\n",
       "      <td>-0.494059</td>\n",
       "      <td>0.643328</td>\n",
       "      <td>-0.832613</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>6</td>\n",
       "      <td>-0.096670</td>\n",
       "      <td>0.359706</td>\n",
       "      <td>-0.618411</td>\n",
       "      <td>-0.176869</td>\n",
       "      <td>-0.126849</td>\n",
       "      <td>2.039704</td>\n",
       "      <td>-1.367793</td>\n",
       "      <td>-0.154768</td>\n",
       "      <td>-1.671755</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.855388</td>\n",
       "      <td>0.790234</td>\n",
       "      <td>0.807903</td>\n",
       "      <td>0.798341</td>\n",
       "      <td>0.038054</td>\n",
       "      <td>0.460077</td>\n",
       "      <td>-1.297610</td>\n",
       "      <td>-0.390472</td>\n",
       "      <td>0.959509</td>\n",
       "      <td>-0.782228</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>7</td>\n",
       "      <td>-0.461784</td>\n",
       "      <td>-0.761904</td>\n",
       "      <td>-1.013378</td>\n",
       "      <td>0.482173</td>\n",
       "      <td>0.015751</td>\n",
       "      <td>0.918302</td>\n",
       "      <td>-0.247184</td>\n",
       "      <td>0.011383</td>\n",
       "      <td>-2.286717</td>\n",
       "      <td>...</td>\n",
       "      <td>-0.045502</td>\n",
       "      <td>0.711631</td>\n",
       "      <td>0.325466</td>\n",
       "      <td>0.951030</td>\n",
       "      <td>-0.393002</td>\n",
       "      <td>1.096497</td>\n",
       "      <td>-0.710226</td>\n",
       "      <td>-0.835890</td>\n",
       "      <td>0.384451</td>\n",
       "      <td>-1.314889</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 65 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   item_id         0         1         2         3         4         5  \\\n",
       "0        3 -0.551053 -2.396144 -0.208827  0.561655 -1.077037  0.873872   \n",
       "1        4 -0.069944 -2.396738 -1.096040  0.835876  0.045193  0.399019   \n",
       "2        5 -0.286878  0.229818 -0.322053 -0.111830 -0.074048  1.775329   \n",
       "3        6 -0.096670  0.359706 -0.618411 -0.176869 -0.126849  2.039704   \n",
       "4        7 -0.461784 -0.761904 -1.013378  0.482173  0.015751  0.918302   \n",
       "\n",
       "          6         7         8  ...        54        55        56        57  \\\n",
       "0 -0.243167 -0.403004 -3.167561  ... -0.230633  1.729995  0.300505  2.170915   \n",
       "1  1.231206  0.123315 -2.067063  ... -0.794665  0.190620 -0.472352  0.228886   \n",
       "2 -1.103029 -0.317330 -1.872080  ... -0.913579  0.487137  0.823664  0.722070   \n",
       "3 -1.367793 -0.154768 -1.671755  ... -0.855388  0.790234  0.807903  0.798341   \n",
       "4 -0.247184  0.011383 -2.286717  ... -0.045502  0.711631  0.325466  0.951030   \n",
       "\n",
       "         58        59        60        61        62        63  \n",
       "0 -0.709685  1.368153  1.259812 -2.248280  3.220146 -0.625717  \n",
       "1 -1.146539  1.005825 -0.929019 -1.248043  0.153283 -1.011497  \n",
       "2  0.028665  0.540165 -1.361806 -0.494059  0.643328 -0.832613  \n",
       "3  0.038054  0.460077 -1.297610 -0.390472  0.959509 -0.782228  \n",
       "4 -0.393002  1.096497 -0.710226 -0.835890  0.384451 -1.314889  \n",
       "\n",
       "[5 rows x 65 columns]"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "item_embeddings.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "66d7271e-0ea6-4568-ac5a-04089735f542",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# save to disk\n",
    "item_embeddings.to_parquet(os.path.join(BASE_DIR, \"item_embeddings.parquet\"))\n",
    "del item_embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7cd383d-8397-45be-af66-5971618e7117",
   "metadata": {},
   "source": [
    "The next cell creates a second copy of the item embeddings that does not include `item_id`. This is optional, but useful if you do not need the `item_id` values when creating a Milvus vector index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "8fb157a5-e2da-4579-b948-d89361daa7eb",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(49008, 64)\n"
     ]
    }
   ],
   "source": [
    "# select only embedding columns\n",
    "item_embeddings = item_embs_df.drop(columns=[\"item_category\", \"item_id\"])\n",
    "# save to disk\n",
    "item_embeddings.to_parquet(os.path.join(BASE_DIR, \"item_embeddings2.parquet\"))\n",
    "print(item_embeddings.shape)\n",
    "del item_embeddings"
   ]
  },
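  {
   "cell_type": "markdown",
   "id": "3c9e7f21-1a2b-4c3d-8e4f-5a6b7c8d9e0f",
   "metadata": {},
   "source": [
    "The embedding-only copy is convenient because Milvus ingests each entity as a flat list of floats. As a minimal sketch (not the deployment code used in the next notebook), the per-dimension columns can be collapsed back into row vectors as shown below; the tiny random `emb` frame is a hypothetical stand-in for `item_embeddings2.parquet`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d0f8a32-2b3c-4d5e-9f60-6b7c8d9e0f1a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# hypothetical stand-in for item_embeddings2.parquet: one float column per dimension\n",
    "rng = np.random.default_rng(42)\n",
    "emb = pd.DataFrame(rng.standard_normal((5, 4)), columns=[str(i) for i in range(4)])\n",
    "\n",
    "# Milvus inserts expect each entity as a list of floats, so collapse the\n",
    "# per-dimension columns into one row vector per item\n",
    "vectors = emb.to_numpy(dtype=\"float32\").tolist()\n",
    "print(len(vectors), len(vectors[0]))"
   ]
  },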
  {
   "cell_type": "markdown",
   "id": "99797fdd-a1b5-4896-b4b7-4a2ea41c4c42",
   "metadata": {},
   "source": [
    "Now, do a similar export for user embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "84df6568-cec7-4024-9d21-f1bf5b9b419b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "user_embs = model_tt.query_embeddings(\n",
    "    Dataset(user_features, schema=schema), batch_size=1024\n",
    ")\n",
    "user_embs_df = user_embs.compute(scheduler=\"synchronous\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "7e105216-7962-4112-b0d5-4ee8c7732254",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Index(['user_id', 'user_age', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n",
       "       '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21',\n",
       "       '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33',\n",
       "       '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45',\n",
       "       '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57',\n",
       "       '58', '59', '60', '61', '62', '63'],\n",
       "      dtype='object')"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "user_embs_df.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "445e17c1-b7a5-4e52-b8f9-5365d3978091",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(7305761, 66)"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "user_embs_df.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "2dc4d6ee-47e1-4ad8-b953-f25c46b8b09f",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "16414"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gc.collect()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "de72a36f-5afa-4e12-9b48-70bd1720af10",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# select user_id together with the embedding columns\n",
    "user_embeddings = user_embs_df.drop(columns=['user_age'])\n",
    "# save to disk\n",
    "user_embeddings.to_parquet(os.path.join(BASE_DIR, \"user_embeddings.parquet\"))\n",
    "del user_embeddings\n",
    "\n",
    "# select and export only the embedding columns (without the user_id column)\n",
    "user_embeddings = user_embs_df.drop(columns=['user_age','user_id'])\n",
    "# save to disk\n",
    "user_embeddings.to_parquet(os.path.join(BASE_DIR, \"user_embeddings2.parquet\"))\n",
    "del user_embeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "1671ddd9-179b-40b8-af8f-7ee86614a381",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# if the parquet export above raises an out-of-memory (OOM) error on the GPU (because the embedding table exceeds GPU memory), run the code below to export via CPU memory instead\n",
    "import pandas as pd\n",
    "df = user_embs_df.to_pandas()\n",
    "df.drop(columns=['user_age'], inplace=True)\n",
    "# save to disk\n",
    "df.to_parquet(os.path.join(BASE_DIR, \"user_embeddings.parquet\"))\n",
    "# select only embedding columns\n",
    "df2 = df.drop(columns=['user_id'])\n",
    "# save to disk\n",
    "df2.to_parquet(os.path.join(BASE_DIR, \"user_embeddings2.parquet\"))\n",
    "del df\n",
    "del df2"
   ]
  },
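  {
   "cell_type": "markdown",
   "id": "5e1a9b43-3c4d-4e6f-a071-7c8d9e0f1a2b",
   "metadata": {},
   "source": [
    "Whichever export path you used, the parquet files can be sanity-checked by reading them back. A self-contained sketch of that round trip (the miniature `df` is a hypothetical stand-in for `user_embeddings.parquet`, and pyarrow is assumed as pandas' parquet engine):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f2bac54-4d5e-4f70-b182-8d9e0f1a2b3c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import tempfile\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "# hypothetical miniature frame standing in for user_embeddings.parquet\n",
    "df = pd.DataFrame({\"user_id\": [1, 2, 3], \"0\": [0.1, 0.2, 0.3], \"1\": [0.4, 0.5, 0.6]})\n",
    "\n",
    "with tempfile.TemporaryDirectory() as tmp:\n",
    "    path = os.path.join(tmp, \"user_embeddings.parquet\")\n",
    "    df.to_parquet(path)\n",
    "    back = pd.read_parquet(path)\n",
    "\n",
    "print(back.shape)"
   ]
  },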
  {
   "cell_type": "markdown",
   "id": "dadae279-913c-487b-ad55-4b4d6c110dc1",
   "metadata": {},
   "source": [
    "### Create feature definitions "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f70939f-8063-4422-b29b-6668acb1cfb7",
   "metadata": {},
   "source": [
    "Now we will create our user and item feature definitions in the `user_features.py` and `item_features.py` files and save them in the feature repository."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "4ee27d67-e35a-42c5-8025-ed73f35c8e13",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "with open(os.path.join(feature_repo_path, \"user_features.py\"), \"w\") as f:\n",
    "    f.write(\n",
    "        \"\"\"\n",
    "from datetime import timedelta\n",
    "from feast import Entity, Field, FeatureView, ValueType\n",
    "from feast.types import Int32\n",
    "from feast.infra.offline_stores.file_source import FileSource\n",
    "\n",
    "user_features = FileSource(\n",
    "    path=\"{}\",\n",
    "    timestamp_field=\"datetime\",\n",
    "    created_timestamp_column=\"created\",\n",
    ")\n",
    "\n",
    "user_raw = Entity(name=\"user_id_raw\", value_type=ValueType.INT32, join_keys=[\"user_id_raw\"],)\n",
    "\n",
    "user_features_view = FeatureView(\n",
    "    name=\"user_features\",\n",
    "    entities=[user_raw],\n",
    "    ttl=timedelta(0),\n",
    "    schema=[\n",
    "        Field(name=\"user_age\", dtype=Int32),\n",
    "        Field(name=\"user_id\", dtype=Int32),\n",
    "    ],\n",
    "    online=True,\n",
    "    source=user_features,\n",
    "    tags=dict(),\n",
    ")\n",
    "\"\"\".format(\n",
    "            os.path.join(BASE_DIR, \"feast_repo/feature_repo/data/\", \"user_features.parquet\")\n",
    "        )\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "48a5927c-840d-410c-8f5b-bebce4f79640",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "with open(os.path.join(feature_repo_path, \"item_features.py\"), \"w\") as f:\n",
    "    f.write(\n",
    "        \"\"\"\n",
    "from datetime import timedelta\n",
    "from feast import Entity, Field, FeatureView, ValueType\n",
    "from feast.types import Int32\n",
    "from feast.infra.offline_stores.file_source import FileSource\n",
    "\n",
    "item_features = FileSource(\n",
    "    path=\"{}\",\n",
    "    timestamp_field=\"datetime\",\n",
    "    created_timestamp_column=\"created\",\n",
    ")\n",
    "\n",
    "item = Entity(name=\"item_id\", value_type=ValueType.INT32, join_keys=[\"item_id\"],)\n",
    "\n",
    "item_features_view = FeatureView(\n",
    "    name=\"item_features\",\n",
    "    entities=[item],\n",
    "    ttl=timedelta(0),\n",
    "    schema=[\n",
    "        Field(name=\"item_category\", dtype=Int32),\n",
    "        Field(name=\"item_id_raw\", dtype=Int32),\n",
    "    ],\n",
    "    online=True,\n",
    "    source=item_features,\n",
    "    tags=dict(),\n",
    ")\n",
    "\"\"\".format(\n",
    "            os.path.join(BASE_DIR, \"feast_repo/feature_repo/data/\", \"item_features.parquet\")\n",
    "        )\n",
    "    )"
   ]
  },
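  {
   "cell_type": "markdown",
   "id": "7a3cbd65-5e6f-4081-9c29-9e0f1a2b3c4d",
   "metadata": {},
   "source": [
    "Both cells above follow the same code-generation pattern: a feature-definition module is rendered from a triple-quoted template, with the parquet path substituted via `str.format`. A stripped-down, stdlib-only sketch of that pattern (the `template` contents and the temporary repo path are hypothetical):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b4dce76-6f70-4192-8d3a-0f1a2b3c4d5e",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import tempfile\n",
    "\n",
    "# hypothetical template mirroring the feature-definition files written above;\n",
    "# the single {} placeholder receives the data source path\n",
    "template = '''\n",
    "from datetime import timedelta\n",
    "\n",
    "SOURCE_PATH = \"{}\"\n",
    "TTL = timedelta(0)\n",
    "'''\n",
    "\n",
    "with tempfile.TemporaryDirectory() as repo:\n",
    "    target = os.path.join(repo, \"features.py\")\n",
    "    with open(target, \"w\") as f:\n",
    "        f.write(template.format(os.path.join(repo, \"data\", \"features.parquet\")))\n",
    "    with open(target) as f:\n",
    "        generated = f.read()\n",
    "\n",
    "print(\"features.parquet\" in generated)"
   ]
  },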
  {
   "cell_type": "markdown",
   "id": "660333b2-4f99-49c7-8cd3-f0aad5dbd66f",
   "metadata": {},
   "source": [
    "Let's check out our Feast feature repository structure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "57133c1e-18d9-4ccb-9704-cdebd271985e",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting seedir\n",
      "  Downloading seedir-0.4.2-py3-none-any.whl (111 kB)\n",
      "\u001b[K     |████████████████████████████████| 111 kB 13.9 MB/s eta 0:00:01\n",
      "\u001b[?25hCollecting natsort\n",
      "  Downloading natsort-8.4.0-py3-none-any.whl (38 kB)\n",
      "Installing collected packages: natsort, seedir\n",
      "Successfully installed natsort-8.4.0 seedir-0.4.2\n"
     ]
    }
   ],
   "source": [
    "# install seedir\n",
    "!pip install seedir"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "986d53ea-c946-4046-a390-6d3b8801d280",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "feast_repo/\n",
      "├─README.md\n",
      "├─__init__.py\n",
      "└─feature_repo/\n",
      "  ├─__init__.py\n",
      "  ├─__pycache__/\n",
      "  │ ├─__init__.cpython-38.pyc\n",
      "  │ ├─example_repo.cpython-38.pyc\n",
      "  │ └─test_workflow.cpython-38.pyc\n",
      "  ├─data/\n",
      "  │ ├─item_features.parquet\n",
      "  │ └─user_features.parquet\n",
      "  ├─feature_store.yaml\n",
      "  ├─item_features.py\n",
      "  ├─test_workflow.py\n",
      "  └─user_features.py\n"
     ]
    }
   ],
   "source": [
    "import seedir as sd\n",
    "import os\n",
    "\n",
    "sd.seedir(\n",
    "    os.path.join(BASE_DIR, \"feast_repo\"),\n",
    "    style=\"lines\",\n",
    "    itemlimit=10,\n",
    "    depthlimit=3,\n",
    "    exclude_folders=\".ipynb_checkpoints\",\n",
    "    sort=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80678ea1-a7fb-4016-9e6f-c905497f4142",
   "metadata": {},
   "source": [
    "### Next Steps\n",
    "We trained and exported our ranking and retrieval models and NVTabular workflows. In the next step, we will learn how to deploy our trained models to [Triton Inference Server (TIS)](https://github.com/triton-inference-server/server) with the Merlin Systems library.\n",
    "\n",
    "For the next step, move on to the `02-Deploy-Multi-Stage-Recsys-with-Merlin-Systems-Milvus.ipynb` notebook to deploy our saved models as an ensemble to TIS and obtain prediction results for a given request."
   ]
  }
 ],
 "metadata": {
  "interpreter": {
   "hash": "2758ff992bb32b90e83258e2e763c5fcee80c4002721441c6c0d17c649a641dd"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  },
  "merlin": {
   "containers": [
    "nvcr.io/nvidia/merlin/merlin-tensorflow-inference:latest"
   ]
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
