{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f75c838b",
   "metadata": {},
   "source": [
    "<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
    "\n",
    "# Multi-GPU Offline Inference"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ea4ae25b",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
    "HugeCTR version 3.4.1 introduced Python APIs to perform multi-GPU offline inference.\n",
    "This capability leverages the [HugeCTR Hierarchical Parameter Server](https://nvidia-merlin.github.io/HugeCTR/master/hugectr_core_features.html#hierarchical-parameter-server) and enables concurrent execution on multiple devices.\n",
    "Multi-GPU offline inference currently supports the `Norm` and `Parquet` dataset formats.\n",
    "\n",
    "This notebook explains how to perform multi-GPU offline inference with the HugeCTR Python APIs.\n",
    "For more details about the API, see the [HugeCTR Python Interface](https://nvidia-merlin.github.io/HugeCTR/master/api/python_interface.html#inference-api) documentation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "902f3ef1",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "### Get HugeCTR from NGC\n",
    "\n",
    "The HugeCTR Python module is preinstalled in the [Merlin Training Container](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-training) version 22.05 and later: `nvcr.io/nvidia/merlin/merlin-training:22.05`.\n",
    "\n",
    "You can verify that the required libraries are available by running the following command after launching the container:\n",
    "\n",
    "```bash\n",
    "$ python3 -c \"import hugectr\"\n",
    "```\n",
    "\n",
    "**Note**: This Python module contains both training APIs and offline inference APIs. For online inference with Triton Inference Server, refer to the [HugeCTR Backend](https://github.com/triton-inference-server/hugectr_backend) documentation.\n",
    "\n",
    "> If you prefer to build HugeCTR from the source code instead of using the NGC container, refer to the [How to Start Your Development](https://nvidia-merlin.github.io/HugeCTR/master/hugectr_contributor_guide.html#how-to-start-your-development) documentation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "240b78ac",
   "metadata": {},
   "source": [
    "## Data Generation\n",
    "\n",
    "HugeCTR provides a tool to generate synthetic datasets.\n",
    "The [Data Generator](https://nvidia-merlin.github.io/HugeCTR/master/api/python_interface.html#data-generator-api) class is capable of generating datasets in different formats and with different distributions.\n",
    "For this notebook, we generate multi-hot Parquet datasets with a power-law distribution:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "db37ef07",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[HCTR][15:01:03][INFO][RK0][main]: Generate Parquet dataset\n",
      "[HCTR][15:01:03][INFO][RK0][main]: train data folder: ./multi_hot_parquet, eval data folder: ./multi_hot_parquet, slot_size_array: 10000, 10000, 10000, nnz array: 2, 1, 3, #files for train: 16, #files for eval: 4, #samples per file: 40960, Use power law distribution: 1, alpha of power law: 1.3\n",
      "[HCTR][15:01:03][INFO][RK0][main]: ./multi_hot_parquet exist\n",
      "[HCTR][15:01:03][INFO][RK0][main]: ./multi_hot_parquet/train/gen_0.parquet\n",
      "[HCTR][15:01:05][INFO][RK0][main]: ./multi_hot_parquet/train/gen_1.parquet\n",
      "[HCTR][15:01:05][INFO][RK0][main]: ./multi_hot_parquet/train/gen_2.parquet\n",
      "[HCTR][15:01:05][INFO][RK0][main]: ./multi_hot_parquet/train/gen_3.parquet\n",
      "[HCTR][15:01:05][INFO][RK0][main]: ./multi_hot_parquet/train/gen_4.parquet\n",
      "[HCTR][15:01:05][INFO][RK0][main]: ./multi_hot_parquet/train/gen_5.parquet\n",
      "[HCTR][15:01:05][INFO][RK0][main]: ./multi_hot_parquet/train/gen_6.parquet\n",
      "[HCTR][15:01:06][INFO][RK0][main]: ./multi_hot_parquet/train/gen_7.parquet\n",
      "[HCTR][15:01:06][INFO][RK0][main]: ./multi_hot_parquet/train/gen_8.parquet\n",
      "[HCTR][15:01:06][INFO][RK0][main]: ./multi_hot_parquet/train/gen_9.parquet\n",
      "[HCTR][15:01:06][INFO][RK0][main]: ./multi_hot_parquet/train/gen_10.parquet\n",
      "[HCTR][15:01:06][INFO][RK0][main]: ./multi_hot_parquet/train/gen_11.parquet\n",
      "[HCTR][15:01:06][INFO][RK0][main]: ./multi_hot_parquet/train/gen_12.parquet\n",
      "[HCTR][15:01:07][INFO][RK0][main]: ./multi_hot_parquet/train/gen_13.parquet\n",
      "[HCTR][15:01:07][INFO][RK0][main]: ./multi_hot_parquet/train/gen_14.parquet\n",
      "[HCTR][15:01:07][INFO][RK0][main]: ./multi_hot_parquet/train/gen_15.parquet\n",
      "[HCTR][15:01:07][INFO][RK0][main]: ./multi_hot_parquet/file_list.txt done!\n",
      "[HCTR][15:01:07][INFO][RK0][main]: ./multi_hot_parquet/val/gen_0.parquet\n",
      "[HCTR][15:01:07][INFO][RK0][main]: ./multi_hot_parquet/val/gen_1.parquet\n",
      "[HCTR][15:01:08][INFO][RK0][main]: ./multi_hot_parquet/val/gen_2.parquet\n",
      "[HCTR][15:01:08][INFO][RK0][main]: ./multi_hot_parquet/val/gen_3.parquet\n",
      "[HCTR][15:01:08][INFO][RK0][main]: ./multi_hot_parquet/file_list_test.txt done!\n"
     ]
    }
   ],
   "source": [
    "import hugectr\n",
    "from hugectr.tools import DataGeneratorParams, DataGenerator\n",
    "\n",
    "data_generator_params = DataGeneratorParams(\n",
    "  format = hugectr.DataReaderType_t.Parquet,\n",
    "  label_dim = 2,\n",
    "  dense_dim = 2,\n",
    "  num_slot = 3,\n",
    "  i64_input_key = True,\n",
    "  nnz_array = [2, 1, 3],\n",
    "  source = \"./multi_hot_parquet/file_list.txt\",\n",
    "  eval_source = \"./multi_hot_parquet/file_list_test.txt\",\n",
    "  slot_size_array = [10000, 10000, 10000],\n",
    "  check_type = hugectr.Check_t.Non,\n",
    "  dist_type = hugectr.Distribution_t.PowerLaw,\n",
    "  power_law_type = hugectr.PowerLaw_t.Short,\n",
    "  num_files = 16,\n",
    "  eval_num_files = 4)\n",
    "data_generator = DataGenerator(data_generator_params)\n",
    "data_generator.generate()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19109028",
   "metadata": {},
   "source": [
    "## Train from Scratch\n",
    "\n",
    "We can train from scratch by performing the following steps with the Python APIs:\n",
    "\n",
    "1. Create the solver, reader and optimizer, then initialize the model.\n",
    "2. Construct the model graph by adding input, sparse embedding and dense layers in order.\n",
    "3. Compile the model and print a summary of the model graph.\n",
    "4. Dump the model graph to a JSON file.\n",
    "5. Fit the model; the model weights and optimizer states are saved implicitly.\n",
    "6. Dump one batch of evaluation results to files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e2b0d9d6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Overwriting multi_hot_train.py\n"
     ]
    }
   ],
   "source": [
    "%%writefile multi_hot_train.py\n",
    "import hugectr\n",
    "from mpi4py import MPI\n",
    "solver = hugectr.CreateSolver(model_name = \"multi_hot\",\n",
    "                              max_eval_batches = 1,\n",
    "                              batchsize_eval = 16384,\n",
    "                              batchsize = 16384,\n",
    "                              lr = 0.001,\n",
    "                              vvgpu = [[0]],\n",
    "                              i64_input_key = True,\n",
    "                              repeat_dataset = True,\n",
    "                              use_cuda_graph = True)\n",
    "reader = hugectr.DataReaderParams(data_reader_type = hugectr.DataReaderType_t.Parquet,\n",
    "                                  source = [\"./multi_hot_parquet/file_list.txt\"],\n",
    "                                  eval_source = \"./multi_hot_parquet/file_list_test.txt\",\n",
    "                                  check_type = hugectr.Check_t.Non,\n",
    "                                  slot_size_array = [10000, 10000, 10000])\n",
    "optimizer = hugectr.CreateOptimizer(optimizer_type = hugectr.Optimizer_t.Adam)\n",
    "model = hugectr.Model(solver, reader, optimizer)\n",
    "model.add(hugectr.Input(label_dim = 2, label_name = \"label\",\n",
    "                        dense_dim = 2, dense_name = \"dense\",\n",
    "                        data_reader_sparse_param_array = \n",
    "                        [hugectr.DataReaderSparseParam(\"data1\", [2, 1], False, 2),\n",
    "                        hugectr.DataReaderSparseParam(\"data2\", 3, False, 1),]))\n",
    "model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash, \n",
    "                            workspace_size_per_gpu_in_mb = 4,\n",
    "                            embedding_vec_size = 16,\n",
    "                            combiner = \"sum\",\n",
    "                            sparse_embedding_name = \"sparse_embedding1\",\n",
    "                            bottom_name = \"data1\",\n",
    "                            optimizer = optimizer))\n",
    "model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash, \n",
    "                            workspace_size_per_gpu_in_mb = 2,\n",
    "                            embedding_vec_size = 16,\n",
    "                            combiner = \"sum\",\n",
    "                            sparse_embedding_name = \"sparse_embedding2\",\n",
    "                            bottom_name = \"data2\",\n",
    "                            optimizer = optimizer))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,\n",
    "                            bottom_names = [\"sparse_embedding1\"],\n",
    "                            top_names = [\"reshape1\"],\n",
    "                            leading_dim=32))                            \n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,\n",
    "                            bottom_names = [\"sparse_embedding2\"],\n",
    "                            top_names = [\"reshape2\"],\n",
    "                            leading_dim=16))                            \n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Concat,\n",
    "                            bottom_names = [\"reshape1\", \"reshape2\", \"dense\"], top_names = [\"concat1\"]))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n",
    "                            bottom_names = [\"concat1\"],\n",
    "                            top_names = [\"fc1\"],\n",
    "                            num_output=1024))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n",
    "                            bottom_names = [\"fc1\"],\n",
    "                            top_names = [\"relu1\"]))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n",
    "                            bottom_names = [\"relu1\"],\n",
    "                            top_names = [\"fc2\"],\n",
    "                            num_output=2))\n",
    "model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.MultiCrossEntropyLoss,\n",
    "                            bottom_names = [\"fc2\", \"label\"],\n",
    "                            top_names = [\"loss\"],\n",
    "                            target_weight_vec = [0.5, 0.5]))\n",
    "model.compile()\n",
    "model.summary()\n",
    "model.graph_to_json(\"multi_hot.json\")\n",
    "model.fit(max_iter = 1100, display = 200, eval_interval = 1000, snapshot = 1000, snapshot_prefix = \"multi_hot\")\n",
    "model.export_predictions(\"multi_hot_pred_\" + str(1000), \"multi_hot_label_\" + str(1000))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "d0f29350",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "HugeCTR Version: 3.4\n",
      "====================================================Model Init=====================================================\n",
      "[HCTR][15:04:04][INFO][RK0][main]: Initialize model: multi_hot\n",
      "[HCTR][15:04:04][INFO][RK0][main]: Global seed is 2258929170\n",
      "[HCTR][15:04:04][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 0 ->  node 0\n",
      "[HCTR][15:04:05][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][15:04:05][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:04:05][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:04:05][INFO][RK0][main]: Using All-reduce algorithm: NCCL\n",
      "[HCTR][15:04:05][INFO][RK0][main]: Device 0: Tesla V100-SXM2-32GB\n",
      "[HCTR][15:04:05][INFO][RK0][main]: num of DataReader workers: 1\n",
      "[HCTR][15:04:05][INFO][RK0][main]: Vocabulary size: 30000\n",
      "[HCTR][15:04:05][INFO][RK0][main]: max_vocabulary_size_per_gpu_=65536\n",
      "[HCTR][15:04:05][INFO][RK0][main]: max_vocabulary_size_per_gpu_=32768\n",
      "[HCTR][15:04:05][INFO][RK0][main]: Graph analysis to resolve tensor dependency\n",
      "===================================================Model Compile===================================================\n",
      "[HCTR][15:04:14][INFO][RK0][main]: gpu0 start to init embedding\n",
      "[HCTR][15:04:14][INFO][RK0][main]: gpu0 init embedding done\n",
      "[HCTR][15:04:14][INFO][RK0][main]: gpu0 start to init embedding\n",
      "[HCTR][15:04:14][INFO][RK0][main]: gpu0 init embedding done\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Starting AUC NCCL warm-up\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Warm-up done\n",
      "[HCTR][15:04:14][INFO][RK0][main]: ===================================================Model Summary===================================================\n",
      "label                                   Dense                         Sparse                        \n",
      "label                                   dense                          data1,data2                   \n",
      "(None, 2)                               (None, 2)                               \n",
      "——————————————————————————————————————————————————————————————————————————————————————————————————————————————————\n",
      "Layer Type                              Input Name                    Output Name                   Output Shape                  \n",
      "——————————————————————————————————————————————————————————————————————————————————————————————————————————————————\n",
      "DistributedSlotSparseEmbeddingHash      data1                         sparse_embedding1             (None, 2, 16)                 \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "DistributedSlotSparseEmbeddingHash      data2                         sparse_embedding2             (None, 1, 16)                 \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "Reshape                                 sparse_embedding1             reshape1                      (None, 32)                    \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "Reshape                                 sparse_embedding2             reshape2                      (None, 16)                    \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "Concat                                  reshape1                      concat1                       (None, 50)                    \n",
      "                                        reshape2                                                                                  \n",
      "                                        dense                                                                                     \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "InnerProduct                            concat1                       fc1                           (None, 1024)                  \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "ReLU                                    fc1                           relu1                         (None, 1024)                  \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "InnerProduct                            relu1                         fc2                           (None, 2)                     \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "MultiCrossEntropyLoss                   fc2                           loss                                                        \n",
      "                                        label                                                                                     \n",
      "------------------------------------------------------------------------------------------------------------------\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Save the model graph to multi_hot.json successfully\n",
      "=====================================================Model Fit=====================================================\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Use non-epoch mode with number of iterations: 1100\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Training batchsize: 16384, evaluation batchsize: 16384\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Evaluation interval: 1000, snapshot interval: 1000\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Dense network trainable: True\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Sparse embedding sparse_embedding1 trainable: True\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Sparse embedding sparse_embedding2 trainable: True\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Use mixed precision: False, scaler: 1.000000, use cuda graph: True\n",
      "[HCTR][15:04:14][INFO][RK0][main]: lr: 0.001000, warmup_steps: 1, end_lr: 0.000000\n",
      "[HCTR][15:04:14][INFO][RK0][main]: decay_start: 0, decay_steps: 1, decay_power: 2.000000\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Training source file: ./multi_hot_parquet/file_list.txt\n",
      "[HCTR][15:04:14][INFO][RK0][main]: Evaluation source file: ./multi_hot_parquet/file_list_test.txt\n",
      "[HCTR][15:04:17][INFO][RK0][main]: Iter: 200 Time(200 iters): 2.73086s Loss: 0.342286 lr:0.001\n",
      "[HCTR][15:04:20][INFO][RK0][main]: Iter: 400 Time(200 iters): 2.57674s Loss: 0.339907 lr:0.001\n",
      "[HCTR][15:04:22][INFO][RK0][main]: Iter: 600 Time(200 iters): 2.59306s Loss: 0.338068 lr:0.001\n",
      "[HCTR][15:04:25][INFO][RK0][main]: Iter: 800 Time(200 iters): 2.56907s Loss: 0.334571 lr:0.001\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Iter: 1000 Time(200 iters): 2.57584s Loss: 0.331733 lr:0.001\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Evaluation, AUC: 0.500278\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Eval Time for 1 iters: 0.001344s\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Rank0: Write hash table to file\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Rank0: Write hash table to file\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Dumping sparse weights to files, successful\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Done\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][15:04:27][INFO][RK0][main]: Done\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Done\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Rank0: Write optimzer state to file\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Done\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Dumping sparse optimzer states to files, successful\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Dumping dense weights to file, successful\n",
      "[HCTR][15:04:28][INFO][RK0][main]: Dumping dense optimizer states to file, successful\n",
      "[HCTR][15:04:29][INFO][RK0][main]: Finish 1100 iterations with batchsize: 16384 in 14.54s.\n"
     ]
    }
   ],
   "source": [
    "!python3 multi_hot_train.py"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0bb3b86c",
   "metadata": {},
   "source": [
    "## Multi-GPU Offline Inference\n",
    "\n",
    "We can demonstrate multi-GPU offline inference by performing the following steps with Python APIs:\n",
    "\n",
    "1. Configure the inference hyperparameters.\n",
    "2. Initialize the inference model. The model is a collection of inference sessions deployed on multiple devices.\n",
    "3. Run inference on the evaluation dataset.\n",
    "4. Check the correctness of the inference results by comparing them with the dumped evaluation results.\n",
    "\n",
    "**Note**: The `max_batchsize` configured within `InferenceParams` is the global batch size and should be divisible by the number of deployed devices.\n",
    "The NumPy array returned by `InferenceModel.predict` has the shape `(max_batchsize * num_batches, label_dim)`."
   ]
  },
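  {
   "cell_type": "markdown",
   "id": "b7c1a2e0",
   "metadata": {},
   "source": [
    "The batch-size and shape rules above can be sketched with plain NumPy. This is an illustrative check only, not HugeCTR API usage; the values `1024`, `16`, and `2` are assumptions matching the `max_batchsize`, number of batches, and `label_dim` used in this notebook:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "max_batchsize = 1024   # global batch size across all deployed devices\n",
    "num_batches = 16       # first argument later passed to InferenceModel.predict\n",
    "label_dim = 2          # label dimension of the trained model\n",
    "deployed_devices = [0, 1, 2, 3]\n",
    "\n",
    "# max_batchsize should be divisible by the number of deployed devices,\n",
    "# so each device processes max_batchsize // len(deployed_devices) samples per batch.\n",
    "assert max_batchsize % len(deployed_devices) == 0\n",
    "\n",
    "# The array returned by InferenceModel.predict has this shape:\n",
    "pred = np.zeros((max_batchsize * num_batches, label_dim))\n",
    "print(pred.shape)  # (16384, 2)\n",
    "```"
   ]
  },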
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "8e25d216",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[HCTR][15:04:58][INFO][RK0][main]: Global seed is 3101700364\n",
      "[HCTR][15:04:58][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 0 ->  node 0\n",
      "  GPU 1 ->  node 0\n",
      "  GPU 2 ->  node 0\n",
      "  GPU 3 ->  node 0\n",
      "[HCTR][15:05:01][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:05:02][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:05:02][INFO][RK0][main]: default_emb_vec_value is not specified using default: 0\n",
      "[HCTR][15:05:02][INFO][RK0][main]: default_emb_vec_value is not specified using default: 0\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Creating ParallelHashMap CPU database backend...\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Created parallel (16 partitions) blank database backend in local memory!\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Volatile DB: initial cache rate = 1\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Volatile DB: cache missed embeddings = 0\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Table: hctr_et.multi_hot.sparse_embedding1; cached 16597 / 16597 embeddings in volatile database (ParallelHashMap); load: 16597 / 18446744073709551615 (0.00%).\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Table: hctr_et.multi_hot.sparse_embedding2; cached 9253 / 9253 embeddings in volatile database (ParallelHashMap); load: 9253 / 18446744073709551615 (0.00%).\n",
      "[HCTR][15:05:02][DEBUG][RK0][main]: Real-time subscribers created!\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Create embedding cache in device 0.\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use GPU embedding cache: True, cache size percentage: 0.500000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Configured cache hit rate threshold: 1.000000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Create embedding cache in device 1.\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use GPU embedding cache: True, cache size percentage: 0.500000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Configured cache hit rate threshold: 1.000000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Create embedding cache in device 2.\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use GPU embedding cache: True, cache size percentage: 0.500000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Configured cache hit rate threshold: 1.000000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Create embedding cache in device 3.\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use GPU embedding cache: True, cache size percentage: 0.500000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Configured cache hit rate threshold: 1.000000\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Global seed is 1801008028\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 0 ->  node 0\n",
      "[HCTR][15:05:02][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:05:02][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Create inference session on device: 0\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Model name: multi_hot\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use mixed precision: False\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use cuda graph: True\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Max batchsize: 256\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use I64 input key: True\n",
      "[HCTR][15:05:02][INFO][RK0][main]: start create embedding for inference\n",
      "[HCTR][15:05:02][INFO][RK0][main]: sparse_input name data1\n",
      "[HCTR][15:05:02][INFO][RK0][main]: sparse_input name data2\n",
      "[HCTR][15:05:02][INFO][RK0][main]: create embedding for inference success\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Inference stage skip MultiCrossEntropyLoss layer, replaced by Sigmoid layer\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Global seed is 1395008125\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 1 ->  node 0\n",
      "[HCTR][15:05:02][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:05:02][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Create inference session on device: 1\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Model name: multi_hot\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use mixed precision: False\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use cuda graph: True\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Max batchsize: 256\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Use I64 input key: True\n",
      "[HCTR][15:05:02][INFO][RK0][main]: start create embedding for inference\n",
      "[HCTR][15:05:02][INFO][RK0][main]: sparse_input name data1\n",
      "[HCTR][15:05:02][INFO][RK0][main]: sparse_input name data2\n",
      "[HCTR][15:05:02][INFO][RK0][main]: create embedding for inference success\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Inference stage skip MultiCrossEntropyLoss layer, replaced by Sigmoid layer\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Global seed is 3124827580\n",
      "[HCTR][15:05:02][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 2 ->  node 0\n",
      "[HCTR][15:05:03][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:05:03][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Create inference session on device: 2\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Model name: multi_hot\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Use mixed precision: False\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Use cuda graph: True\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Max batchsize: 256\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Use I64 input key: True\n",
      "[HCTR][15:05:03][INFO][RK0][main]: start create embedding for inference\n",
      "[HCTR][15:05:03][INFO][RK0][main]: sparse_input name data1\n",
      "[HCTR][15:05:03][INFO][RK0][main]: sparse_input name data2\n",
      "[HCTR][15:05:03][INFO][RK0][main]: create embedding for inference success\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Inference stage skip MultiCrossEntropyLoss layer, replaced by Sigmoid layer\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Global seed is 355752151\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 3 ->  node 0\n",
      "[HCTR][15:05:03][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:05:03][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Create inference session on device: 3\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Model name: multi_hot\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Use mixed precision: False\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Use cuda graph: True\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Max batchsize: 256\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Use I64 input key: True\n",
      "[HCTR][15:05:03][INFO][RK0][main]: start create embedding for inference\n",
      "[HCTR][15:05:03][INFO][RK0][main]: sparse_input name data1\n",
      "[HCTR][15:05:03][INFO][RK0][main]: sparse_input name data2\n",
      "[HCTR][15:05:03][INFO][RK0][main]: create embedding for inference success\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Inference stage skip MultiCrossEntropyLoss layer, replaced by Sigmoid layer\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Global seed is 3474526165\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Device to NUMA mapping:\n",
      "  GPU 0 ->  node 0\n",
      "[HCTR][15:05:03][WARNING][RK0][main]: Peer-to-peer access cannot be fully enabled.\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Start all2all warmup\n",
      "[HCTR][15:05:03][INFO][RK0][main]: End all2all warmup\n",
      "[HCTR][15:05:03][INFO][RK0][main]: Vocabulary size: 30000\n",
      "\n",
      "pred:  [[0.6733939  0.43605337]\n",
      " [0.5189075  0.4978796 ]\n",
      " [0.39680484 0.16554658]\n",
      " ...\n",
      " [0.3779142  0.669542  ]\n",
      " [0.46529922 0.44098482]\n",
      " [0.58435297 0.45384815]]\n",
      "ground_truth:  [0.673394 0.436053 0.518908 ... 0.440985 0.584353 0.453848]\n",
      "mse:  0.0012302037921078574\n"
     ]
    }
   ],
   "source": [
    "import hugectr\n",
    "from hugectr.inference import InferenceModel, InferenceParams\n",
    "import numpy as np\n",
    "from mpi4py import MPI\n",
    "\n",
    "model_config = \"multi_hot.json\"\n",
    "inference_params = InferenceParams(\n",
    "    model_name = \"multi_hot\",\n",
    "    max_batchsize = 1024,\n",
    "    hit_rate_threshold = 1.0,\n",
    "    dense_model_file = \"multi_hot_dense_1000.model\",\n",
    "    sparse_model_files = [\"multi_hot0_sparse_1000.model\", \"multi_hot1_sparse_1000.model\"],\n",
    "    deployed_devices = [0, 1, 2, 3],\n",
    "    use_gpu_embedding_cache = True,\n",
    "    cache_size_percentage = 0.5,\n",
    "    i64_input_key = True\n",
    ")\n",
    "inference_model = InferenceModel(model_config, inference_params)\n",
    "pred = inference_model.predict(\n",
    "    16,\n",
    "    \"./multi_hot_parquet/file_list_test.txt\",\n",
    "    hugectr.DataReaderType_t.Parquet,\n",
    "    hugectr.Check_t.Non,\n",
    "    [10000, 10000, 10000]\n",
    ")\n",
    "ground_truth = np.loadtxt(\"multi_hot_pred_1000\")\n",
    "print(\"pred: \", pred)\n",
    "print(\"ground_truth: \", ground_truth)\n",
    "diff = pred.flatten() - ground_truth\n",
    "mse = np.mean(diff * diff)\n",
    "print(\"mse: \", mse)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
