{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2961ee55",
   "metadata": {},
   "source": [
    "<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
    "\n",
    "# Hierarchical Parameter Server Demo"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25c1747e",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
     "In HugeCTR version 3.5, we provide Python APIs for embedding table lookup with the [HugeCTR Hierarchical Parameter Server (HPS)](https://nvidia-merlin.github.io/HugeCTR/master/hugectr_core_features.html#hierarchical-parameter-server).\n",
     "HPS supports different database backends and GPU embedding caches.\n",
    "\n",
     "This notebook demonstrates how to use HPS with the HugeCTR Python APIs. Without loss of generality, the HPS APIs are used together with the ONNX Runtime APIs to create an ensemble inference model, where HPS is responsible for embedding table lookup while the ONNX model handles the forward pass of the dense neural network."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c420aed2",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "### Get HugeCTR from NGC\n",
    "\n",
    "The HugeCTR Python module is preinstalled in the 22.05 and later [Merlin Training Container](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-training): `nvcr.io/nvidia/merlin/merlin-training:22.05`.\n",
    "\n",
     "You can verify that the required libraries are available by running the following Python code after launching this container.\n",
    "\n",
    "```bash\n",
    "$ python3 -c \"import hugectr\"\n",
    "```\n",
    "\n",
    "**Note**: This Python module contains both training APIs and offline inference APIs. For online inference with Triton, please refer to [HugeCTR Backend](https://github.com/triton-inference-server/hugectr_backend).\n",
    "\n",
    "> If you prefer to build HugeCTR from the source code instead of using the NGC container, please refer to the\n",
    "> [How to Start Your Development](https://nvidia-merlin.github.io/HugeCTR/master/hugectr_contributor_guide.html#how-to-start-your-development)\n",
    "> documentation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6ca5759",
   "metadata": {},
   "source": [
    "## Data Generation\n",
    "\n",
     "HugeCTR provides a tool for generating synthetic datasets. The [Data Generator](https://nvidia-merlin.github.io/HugeCTR/master/api/python_interface.html#data-generator-api) can produce datasets in different file formats and with different distributions. For this notebook, we generate one-hot Parquet datasets with a power-law distribution:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ba5c7207",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[HCTR][11:15:15][INFO][RK0][main]: Generate Parquet dataset\n",
      "[HCTR][11:15:15][INFO][RK0][main]: train data folder: ./data_parquet, eval data folder: ./data_parquet, slot_size_array: 10000, 10000, 10000, 10000, nnz array: 1, 1, 1, 1, #files for train: 16, #files for eval: 4, #samples per file: 40960, Use power law distribution: 1, alpha of power law: 1.3\n",
      "[HCTR][11:15:15][INFO][RK0][main]: ./data_parquet exist\n",
      "[HCTR][11:15:15][INFO][RK0][main]: ./data_parquet exist\n",
      "[HCTR][11:15:15][INFO][RK0][main]: ./data_parquet/train exist\n",
      "[HCTR][11:15:15][INFO][RK0][main]: ./data_parquet/train/gen_0.parquet\n",
      "[HCTR][11:15:17][INFO][RK0][main]: ./data_parquet/train/gen_1.parquet\n",
      "[HCTR][11:15:17][INFO][RK0][main]: ./data_parquet/train/gen_2.parquet\n",
      "[HCTR][11:15:17][INFO][RK0][main]: ./data_parquet/train/gen_3.parquet\n",
      "[HCTR][11:15:17][INFO][RK0][main]: ./data_parquet/train/gen_4.parquet\n",
      "[HCTR][11:15:18][INFO][RK0][main]: ./data_parquet/train/gen_5.parquet\n",
      "[HCTR][11:15:18][INFO][RK0][main]: ./data_parquet/train/gen_6.parquet\n",
      "[HCTR][11:15:18][INFO][RK0][main]: ./data_parquet/train/gen_7.parquet\n",
      "[HCTR][11:15:18][INFO][RK0][main]: ./data_parquet/train/gen_8.parquet\n",
      "[HCTR][11:15:18][INFO][RK0][main]: ./data_parquet/train/gen_9.parquet\n",
      "[HCTR][11:15:19][INFO][RK0][main]: ./data_parquet/train/gen_10.parquet\n",
      "[HCTR][11:15:19][INFO][RK0][main]: ./data_parquet/train/gen_11.parquet\n",
      "[HCTR][11:15:19][INFO][RK0][main]: ./data_parquet/train/gen_12.parquet\n",
      "[HCTR][11:15:19][INFO][RK0][main]: ./data_parquet/train/gen_13.parquet\n",
      "[HCTR][11:15:19][INFO][RK0][main]: ./data_parquet/train/gen_14.parquet\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/train/gen_15.parquet\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/file_list.txt done!\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/val exist\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/val/gen_0.parquet\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/val/gen_1.parquet\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/val/gen_2.parquet\n",
      "[HCTR][11:15:20][INFO][RK0][main]: ./data_parquet/val/gen_3.parquet\n",
      "[HCTR][11:15:21][INFO][RK0][main]: ./data_parquet/file_list_test.txt done!\n"
     ]
    }
   ],
   "source": [
    "import hugectr\n",
    "from hugectr.tools import DataGeneratorParams, DataGenerator\n",
    "\n",
    "data_generator_params = DataGeneratorParams(\n",
    "  format = hugectr.DataReaderType_t.Parquet,\n",
    "  label_dim = 1,\n",
    "  dense_dim = 10,\n",
    "  num_slot = 4,\n",
    "  i64_input_key = True,\n",
    "  nnz_array = [1, 1, 1, 1],\n",
    "  source = \"./data_parquet/file_list.txt\",\n",
    "  eval_source = \"./data_parquet/file_list_test.txt\",\n",
    "  slot_size_array = [10000, 10000, 10000, 10000],\n",
    "  check_type = hugectr.Check_t.Non,\n",
    "  dist_type = hugectr.Distribution_t.PowerLaw,\n",
    "  power_law_type = hugectr.PowerLaw_t.Short,\n",
    "  num_files = 16,\n",
    "  eval_num_files = 4,\n",
    "  num_samples_per_file = 40960)\n",
    "data_generator = DataGenerator(data_generator_params)\n",
    "data_generator.generate()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc51dd73",
   "metadata": {},
   "source": [
    "## Train from Scratch\n",
    "\n",
     "We can train from scratch by performing the following steps with the Python APIs:\n",
    "\n",
    "1. Create the solver, reader and optimizer, then initialize the model.\n",
    "2. Construct the model graph by adding input, sparse embedding and dense layers in order.\n",
    "3. Compile the model and have an overview of the model graph.\n",
     "4. Dump the model graph to a JSON file.\n",
     "5. Fit the model; the model weights and optimizer states are saved implicitly.\n",
    "6. Dump one batch of evaluation results to files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "de4fd9aa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting train.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile train.py\n",
    "import hugectr\n",
    "from mpi4py import MPI\n",
    "solver = hugectr.CreateSolver(model_name = \"hps_demo\",\n",
    "                              max_eval_batches = 1,\n",
    "                              batchsize_eval = 1024,\n",
    "                              batchsize = 1024,\n",
    "                              lr = 0.001,\n",
    "                              vvgpu = [[0]],\n",
    "                              i64_input_key = True,\n",
    "                              repeat_dataset = True,\n",
    "                              use_cuda_graph = True)\n",
    "reader = hugectr.DataReaderParams(data_reader_type = hugectr.DataReaderType_t.Parquet,\n",
    "                                  source = [\"./data_parquet/file_list.txt\"],\n",
    "                                  eval_source = \"./data_parquet/file_list_test.txt\",\n",
    "                                  check_type = hugectr.Check_t.Non,\n",
    "                                  slot_size_array = [10000, 10000, 10000, 10000])\n",
    "optimizer = hugectr.CreateOptimizer(optimizer_type = hugectr.Optimizer_t.Adam)\n",
    "model = hugectr.Model(solver, reader, optimizer)\n",
    "model.add(hugectr.Input(label_dim = 1, label_name = \"label\",\n",
    "                        dense_dim = 10, dense_name = \"dense\",\n",
    "                        data_reader_sparse_param_array = \n",
    "                        [hugectr.DataReaderSparseParam(\"data1\", [1, 1], True, 2),\n",
    "                        hugectr.DataReaderSparseParam(\"data2\", [1, 1], True, 2)]))\n",
    "model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash, \n",
    "                            workspace_size_per_gpu_in_mb = 4,\n",
    "                            embedding_vec_size = 16,\n",
    "                            combiner = \"sum\",\n",
    "                            sparse_embedding_name = \"sparse_embedding1\",\n",
    "                            bottom_name = \"data1\",\n",
    "                            optimizer = optimizer))\n",
    "model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash, \n",
    "                            workspace_size_per_gpu_in_mb = 8,\n",
    "                            embedding_vec_size = 32,\n",
    "                            combiner = \"sum\",\n",
    "                            sparse_embedding_name = \"sparse_embedding2\",\n",
    "                            bottom_name = \"data2\",\n",
    "                            optimizer = optimizer))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,\n",
    "                            bottom_names = [\"sparse_embedding1\"],\n",
    "                            top_names = [\"reshape1\"],\n",
    "                            leading_dim=32))                            \n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,\n",
    "                            bottom_names = [\"sparse_embedding2\"],\n",
    "                            top_names = [\"reshape2\"],\n",
    "                            leading_dim=64))                            \n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Concat,\n",
    "                            bottom_names = [\"reshape1\", \"reshape2\", \"dense\"], top_names = [\"concat1\"]))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n",
    "                            bottom_names = [\"concat1\"],\n",
    "                            top_names = [\"fc1\"],\n",
    "                            num_output=1024))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n",
    "                            bottom_names = [\"fc1\"],\n",
    "                            top_names = [\"relu1\"]))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n",
    "                            bottom_names = [\"relu1\"],\n",
    "                            top_names = [\"fc2\"],\n",
    "                            num_output=1))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.BinaryCrossEntropyLoss,\n",
    "                            bottom_names = [\"fc2\", \"label\"],\n",
    "                            top_names = [\"loss\"]))\n",
    "model.compile()\n",
    "model.summary()\n",
    "model.graph_to_json(\"hps_demo.json\")\n",
    "model.fit(max_iter = 1100, display = 200, eval_interval = 1000, snapshot = 1000, snapshot_prefix = \"hps_demo\")\n",
    "model.export_predictions(\"hps_demo_pred_\" + str(1000), \"hps_demo_label_\" + str(1000))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "cd15bdae",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "HugeCTR Version: 3.4\n",
      "====================================================Model Init=====================================================\n",
      "[HCTR][11:15:26][INFO][RK0][main]: Initialize model: hps_demo\n",
      "[HCTR][11:15:26][INFO][RK0][main]: Global seed is 156170895\n",
      "[HCTR][11:15:26][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 0 ->  node 0\n",
      "[HCTR][11:15:27][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][11:15:27][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][11:15:27][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][11:15:27][INFO][RK0][main]: Using All-reduce algorithm: NCCL\n",
      "[HCTR][11:15:27][INFO][RK0][main]: Device 0: Tesla V100-SXM2-32GB\n",
      "[HCTR][11:15:27][INFO][RK0][main]: num of DataReader workers: 1\n",
      "[HCTR][11:15:27][INFO][RK0][main]: Vocabulary size: 40000\n",
      "[HCTR][11:15:27][INFO][RK0][main]: max_vocabulary_size_per_gpu_=21845\n",
      "[HCTR][11:15:27][INFO][RK0][main]: max_vocabulary_size_per_gpu_=21845\n",
      "[HCTR][11:15:27][INFO][RK0][main]: Graph analysis to resolve tensor dependency\n",
      "===================================================Model Compile===================================================\n",
      "[HCTR][11:15:29][INFO][RK0][main]: gpu0 start to init embedding\n",
      "[HCTR][11:15:29][INFO][RK0][main]: gpu0 init embedding done\n",
      "[HCTR][11:15:29][INFO][RK0][main]: gpu0 start to init embedding\n",
      "[HCTR][11:15:29][INFO][RK0][main]: gpu0 init embedding done\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Starting AUC NCCL warm-up\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Warm-up done\n",
      "===================================================Model Summary===================================================\n",
      "[HCTR][11:15:29][INFO][RK0][main]: label                                   Dense                         Sparse                        \n",
      "label                                   dense                          data1,data2                   \n",
      "(None, 1)                               (None, 10)                              \n",
      "——————————————————————————————————————————————————————————————————————————————————————————————————————————————————\n",
      "Layer Type                              Input Name                    Output Name                   Output Shape                  \n",
      "——————————————————————————————————————————————————————————————————————————————————————————————————————————————————\n",
      "DistributedSlotSparseEmbeddingHash      data1                         sparse_embedding1             (None, 2, 16)                 \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "DistributedSlotSparseEmbeddingHash      data2                         sparse_embedding2             (None, 2, 32)                 \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "Reshape                                 sparse_embedding1             reshape1                      (None, 32)                    \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "Reshape                                 sparse_embedding2             reshape2                      (None, 64)                    \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "Concat                                  reshape1                      concat1                       (None, 106)                   \n",
      "                                        reshape2                                                                                  \n",
      "                                        dense                                                                                     \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "InnerProduct                            concat1                       fc1                           (None, 1024)                  \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "ReLU                                    fc1                           relu1                         (None, 1024)                  \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "InnerProduct                            relu1                         fc2                           (None, 1)                     \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "BinaryCrossEntropyLoss                  fc2                           loss                                                        \n",
      "                                        label                                                                                     \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Save the model graph to hps_demo.json successfully\n",
      "=====================================================Model Fit=====================================================\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Use non-epoch mode with number of iterations: 1100\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Training batchsize: 1024, evaluation batchsize: 1024\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Evaluation interval: 1000, snapshot interval: 1000\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Dense network trainable: True\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Sparse embedding sparse_embedding1 trainable: True\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Sparse embedding sparse_embedding2 trainable: True\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Use mixed precision: False, scaler: 1.000000, use cuda graph: True\n",
      "[HCTR][11:15:29][INFO][RK0][main]: lr: 0.001000, warmup_steps: 1, end_lr: 0.000000\n",
      "[HCTR][11:15:29][INFO][RK0][main]: decay_start: 0, decay_steps: 1, decay_power: 2.000000\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Training source file: ./data_parquet/file_list.txt\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Evaluation source file: ./data_parquet/file_list_test.txt\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Iter: 200 Time(200 iters): 0.211451s Loss: 0.694128 lr:0.001\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Iter: 400 Time(200 iters): 0.267199s Loss: 0.689953 lr:0.001\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Iter: 600 Time(200 iters): 0.216242s Loss: 0.689657 lr:0.001\n",
      "[HCTR][11:15:29][INFO][RK0][main]: Iter: 800 Time(200 iters): 0.215779s Loss: 0.677149 lr:0.001\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Iter: 1000 Time(200 iters): 0.219875s Loss: 0.681208 lr:0.001\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Evaluation, AUC: 0.49589\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Eval Time for 1 iters: 0.000359s\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Rank0: Write hash table to file\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Rank0: Write hash table to file\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Dumping sparse weights to files, successful\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Done\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Done\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Done\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Done\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Dumping sparse optimzer states to files, successful\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Dumping dense weights to file, successful\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Dumping dense optimizer states to file, successful\n",
      "[HCTR][11:15:30][INFO][RK0][main]: Finish 1100 iterations with batchsize: 1024 in 1.53s.\n"
     ]
    }
   ],
   "source": [
    "!python3 train.py"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07ab2648",
   "metadata": {},
   "source": [
    "## Convert HugeCTR to ONNX\n",
    "\n",
    "We will convert the saved HugeCTR models to ONNX using the HugeCTR to ONNX Converter. For more information about the converter, refer to the README in the [onnx_converter](https://github.com/NVIDIA-Merlin/HugeCTR/tree/master/onnx_converter) directory of the repository.\n",
    "\n",
     "To double-check correctness, we will perform the conversion both with and without the sparse embedding models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "859c99fa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The model is checked!\n",
      "The model is saved at hps_demo_with_embedding.onnx\n",
      "Skip sparse embedding layers in converted ONNX model\n",
      "Skip sparse embedding layers in converted ONNX model\n",
      "The model is checked!\n",
      "The model is saved at hps_demo_without_embedding.onnx\n"
     ]
    }
   ],
   "source": [
    "import hugectr2onnx\n",
    "hugectr2onnx.converter.convert(onnx_model_path = \"hps_demo_with_embedding.onnx\",\n",
    "                            graph_config = \"hps_demo.json\",\n",
    "                            dense_model = \"hps_demo_dense_1000.model\",\n",
    "                            convert_embedding = True,\n",
    "                            sparse_models = [\"hps_demo0_sparse_1000.model\", \"hps_demo1_sparse_1000.model\"])\n",
    "\n",
    "hugectr2onnx.converter.convert(onnx_model_path = \"hps_demo_without_embedding.onnx\",\n",
    "                            graph_config = \"hps_demo.json\",\n",
    "                            dense_model = \"hps_demo_dense_1000.model\",\n",
    "                            convert_embedding = False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "604b8923",
   "metadata": {},
   "source": [
    "## Inference with HPS & ONNX\n",
    "\n",
     "We will perform inference using the following steps with the Python APIs:\n",
    "\n",
    "1. Configure the HPS hyperparameters.\n",
    "2. Initialize the HPS object, which is responsible for embedding table lookup.\n",
     "3. Load the Parquet data.\n",
    "4. Make inference with the HPS object and the ONNX inference session of `hps_demo_without_embedding.onnx`.\n",
    "5. Check the correctness by comparing with dumped evaluation results.\n",
    "6. Make inference with the ONNX inference session of `hps_demo_with_embedding.onnx` (double check)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "f1650d32",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[HCTR][11:17:13][WARNING][RK0][main]: default_value_for_each_table.size() is not equal to the number of embedding tables\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Creating ParallelHashMap CPU database backend...\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Created parallel (16 partitions) blank database backend in local memory!\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Volatile DB: initial cache rate = 1\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Volatile DB: cache missed embeddings = 0\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Table: hps_et.hps_demo.sparse_embedding1; cached 15749 / 15749 embeddings in volatile database (ParallelHashMap); load: 15749 / 18446744073709551615 (0.00%).\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Table: hps_et.hps_demo.sparse_embedding2; cached 15781 / 15781 embeddings in volatile database (ParallelHashMap); load: 15781 / 18446744073709551615 (0.00%).\n",
      "[HCTR][11:17:13][DEBUG][RK0][main]: Real-time subscribers created!\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Create embedding cache in device 0.\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Use GPU embedding cache: True, cache size percentage: 0.500000\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Configured cache hit rate threshold: 1.000000\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Create inference session on device: 0\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Model name: hps_demo\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Number of embedding tables: 2\n",
      "[HCTR][11:17:13][INFO][RK0][main]: Use I64 input key: True\n",
      "ground_truth:  [0.456111 0.417843 0.428037 ... 0.336745 0.53599  0.508711]\n",
      "pred:  [[0.45611122]\n",
      " [0.4178428 ]\n",
      " [0.42803708]\n",
      " ...\n",
      " [0.3367453 ]\n",
      " [0.53599   ]\n",
      " [0.5087108 ]]\n",
      "mse between pred and ground_truth:  8.241691052249094e-14\n",
      "pred_ref:  [[0.45611122]\n",
      " [0.4178428 ]\n",
      " [0.42803708]\n",
      " ...\n",
      " [0.3367453 ]\n",
      " [0.53599   ]\n",
      " [0.5087108 ]]\n",
      "mse between pred_ref and ground_truth:  7.573986338301264e-05\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2022-03-31 11:17:13.779336470 [W:onnxruntime:, graph.cc:3559 CleanUnusedInitializersAndNodeArgs] Removing initializer 'key_to_indice_hash_all_tables'. It is not used by any node and should be removed from the model.\n"
     ]
    }
   ],
   "source": [
    "from hugectr.inference import HPS, ParameterServerConfig, InferenceParams\n",
    "\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "import onnxruntime as ort\n",
    "\n",
     "slot_size_array = [10000, 10000, 10000, 10000]\n",
     "# Offset each slot's keys into the global key space: [0, 10000, 20000, 30000]\n",
     "key_offset = np.insert(np.cumsum(slot_size_array), 0, 0)[:-1]\n",
    "batch_size = 1024\n",
    "\n",
    "# 1. Configure the HPS hyperparameters\n",
    "ps_config = ParameterServerConfig(\n",
    "           emb_table_name = {\"hps_demo\": [\"sparse_embedding1\", \"sparse_embedding2\"]},\n",
    "           embedding_vec_size = {\"hps_demo\": [16, 32]},\n",
    "           max_feature_num_per_sample_per_emb_table = {\"hps_demo\": [2, 2]},\n",
    "           inference_params_array = [\n",
    "              InferenceParams(\n",
    "                model_name = \"hps_demo\",\n",
    "                max_batchsize = batch_size,\n",
    "                hit_rate_threshold = 1.0,\n",
    "                dense_model_file = \"\",\n",
    "                sparse_model_files = [\"hps_demo0_sparse_1000.model\", \"hps_demo1_sparse_1000.model\"],\n",
    "                deployed_devices = [0],\n",
    "                use_gpu_embedding_cache = True,\n",
    "                cache_size_percentage = 0.5,\n",
    "                i64_input_key = True)\n",
    "           ])\n",
    "\n",
    "# 2. Initialize the HPS object\n",
    "hps = HPS(ps_config)\n",
    "\n",
     "# 3. Load the Parquet data.\n",
    "df = pd.read_parquet(\"data_parquet/val/gen_0.parquet\")\n",
    "dense_input_columns = df.columns[1:11]\n",
    "cat_input1_columns = df.columns[11:13]\n",
    "cat_input2_columns = df.columns[13:15]\n",
    "dense_input = df[dense_input_columns].loc[0:batch_size-1].to_numpy(dtype=np.float32)\n",
    "cat_input1 = (df[cat_input1_columns].loc[0:batch_size-1].to_numpy(dtype=np.int64) + key_offset[0:2]).reshape((batch_size, 2, 1))\n",
    "cat_input2 = (df[cat_input2_columns].loc[0:batch_size-1].to_numpy(dtype=np.int64) + key_offset[2:4]).reshape((batch_size, 2, 1))\n",
    "\n",
     "# 4. Make inference with the HPS object and the ONNX inference session of `hps_demo_without_embedding.onnx`.\n",
    "embedding1 = hps.lookup(cat_input1.flatten(), \"hps_demo\", 0).reshape(batch_size, 2, 16)\n",
    "embedding2 = hps.lookup(cat_input2.flatten(), \"hps_demo\", 1).reshape(batch_size, 2, 32)\n",
    "sess = ort.InferenceSession(\"hps_demo_without_embedding.onnx\")\n",
    "res = sess.run(output_names=[sess.get_outputs()[0].name],\n",
    "               input_feed={sess.get_inputs()[0].name: dense_input,\n",
    "               sess.get_inputs()[1].name: embedding1,\n",
    "               sess.get_inputs()[2].name: embedding2})\n",
    "pred = res[0]\n",
    "\n",
    "# 5. Check the correctness by comparing with dumped evaluation results.\n",
    "ground_truth = np.loadtxt(\"hps_demo_pred_1000\")\n",
    "print(\"ground_truth: \", ground_truth)\n",
    "diff = pred.flatten()-ground_truth\n",
    "mse = np.mean(diff*diff)\n",
    "print(\"pred: \", pred)\n",
    "print(\"mse between pred and ground_truth: \", mse)\n",
    "\n",
    "# 6. Make inference with the ONNX inference session of `hps_demo_with_embedding.onnx` (double check).\n",
    "sess_ref = ort.InferenceSession(\"hps_demo_with_embedding.onnx\")\n",
    "res_ref = sess_ref.run(output_names=[sess_ref.get_outputs()[0].name],\n",
    "                   input_feed={sess_ref.get_inputs()[0].name: dense_input,\n",
    "                   sess_ref.get_inputs()[1].name: cat_input1,\n",
    "                   sess_ref.get_inputs()[2].name: cat_input2})\n",
    "pred_ref = res_ref[0]\n",
    "diff_ref = pred_ref.flatten()-ground_truth\n",
    "mse_ref = np.mean(diff_ref*diff_ref)\n",
    "print(\"pred_ref: \", pred_ref)\n",
    "print(\"mse between pred_ref and ground_truth: \", mse_ref)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
