{
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "<i>Copyright (c) Recommenders contributors.</i>\n",
                "\n",
                "<i>Licensed under the MIT License.</i>"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "# Bilateral Variational Autoencoder (BiVAE)\n",
                "\n",
"This notebook serves as a tutorial on the Bilateral Variational Autoencoder (BiVAE) model for collaborative filtering. The BiVAE paper [1] was presented at the WSDM'21 conference. For all experiments related to the BiVAE model, please refer to [this repository](https://github.com/PreferredAI/bi-vae).\n",
                "\n",
"The implementation of the model comes from [Cornac](https://github.com/PreferredAI/cornac) [2], a framework for multimodal recommender systems focusing on models that utilize auxiliary data (e.g., item descriptions and images, social networks, etc.)."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 0 Global Settings and Imports"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 10,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "System version: 3.6.11 | packaged by conda-forge | (default, Nov 27 2020, 18:57:37) \n",
                        "[GCC 9.3.0]\n",
                        "PyTorch version: 1.4.0\n",
                        "Cornac version: 1.11.0\n"
                    ]
                }
            ],
            "source": [
                "import os\n",
                "import sys\n",
                "import torch\n",
                "import cornac\n",
                "\n",
                "from recommenders.datasets import movielens\n",
                "from recommenders.datasets.python_splitters import python_random_split\n",
                "from recommenders.models.cornac.cornac_utils import predict_ranking\n",
                "from recommenders.utils.timer import Timer\n",
                "from recommenders.utils.constants import SEED\n",
                "from recommenders.evaluation.python_evaluation import (\n",
                "    map,\n",
                "    ndcg_at_k,\n",
                "    precision_at_k,\n",
                "    recall_at_k,\n",
                ")\n",
                "from recommenders.utils.notebook_utils import store_metadata\n",
                "\n",
                "print(f\"System version: {sys.version}\")\n",
                "print(f\"PyTorch version: {torch.__version__}\")\n",
                "print(f\"Cornac version: {cornac.__version__}\")\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 2,
            "metadata": {
                "tags": [
                    "parameters"
                ]
            },
            "outputs": [],
            "source": [
                "# Select MovieLens data size: 100k, 1m, 10m, or 20m\n",
                "MOVIELENS_DATA_SIZE = '100k'\n",
                "\n",
                "# top k items to recommend\n",
                "TOP_K = 10\n",
                "\n",
                "# Model parameters\n",
                "LATENT_DIM = 50\n",
                "ENCODER_DIMS = [100]\n",
                "ACT_FUNC = \"tanh\"\n",
                "LIKELIHOOD = \"pois\"\n",
                "NUM_EPOCHS = 500\n",
                "BATCH_SIZE = 128\n",
                "LEARNING_RATE = 0.001"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 1 Theory behind BiVAE\n",
                "\n",
                "### 1.1 Motivation from Dyadic Data\n",
                "\n",
"Preference data in collaborative filtering (CF) typically consists of a set of users, a set of items, and a set of interactions (e.g., ratings, clicks, purchases) between some user-item pairs.  Most of the time, preference data is represented as an interaction matrix between users and items.  Generally, it is a form of dyadic data, with measurements associated with pairs of elements arising from two discrete sets of objects.  Naturally, there are two ways to view such data: by users (row-wise) and by items (column-wise).\n",
                "\n",
                "<img alt=\"Dyadic Data\" width=\"600\" style=\"display: block;margin-left: auto;margin-right: auto;\" src=\"\"/>\n",
                "\n",
                "\n",
"Ultimately, we seek representations for both sides of the dyadic data (users and items) whose combination is capable of explaining user-item affinities. For this objective, latent factor or matrix factorization models are predominant in the context of CF. These models owe their success mainly to their simplicity, efficiency, effectiveness, and extensibility. Nevertheless, this category of models is also known to suffer from limited modeling capacity, as it can only capture linear patterns in both the data and latent spaces."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
"To go beyond this limitation, there has recently been a surge of interest in non-linear, neural-based approaches. Notably, the Variational Autoencoder (VAE) [3] has recently been applied to CF with strong performance improvements over several competitive approaches. One plausible explanation for the good results achieved by VAE on the CF task is its probabilistic nature. Indeed, the key difference between this model and deterministic neural networks is that VAE does not learn point-estimate representations, but rather distributions over these representations, thereby allowing it to account for uncertainty in the latent space. That property is particularly beneficial when dealing with sparse data, where few observations are available.\n",
                "\n",
                "<img alt=\"Standard VAE\" width=\"800\" style=\"display: block;margin-left: auto;margin-right: auto;\" src=\"\">\n",
                "\n",
"Despite its remarkable performance, VAE was originally designed for vector-based data, and thus is not in complete fidelity to the two-way nature of dyadic data. Specifically, in the User-VAE, only users are explicitly represented while items are treated as features in a vector space of users, and similarly for the Item-VAE. As a consequence of this mismatch between VAE and the two-way nature of preference data, it is not clear how one would extend such a model on the item side in a principled way, for example to incorporate side information such as item textual descriptions, images, etc."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 1.2 BiVAE Formulation\n",
                "\n",
"As a remedy to this drawback of VAE, the Bilateral Variational Autoencoder (BiVAE) is proposed. It consists of a generative model of user-item interactions (or dyads) and a pair of inference models (user- and item-based, respectively) parameterized by multilayer neural networks, all combined in a unified framework to auto-encode dyadic preference data. As opposed to the vanilla VAE, BiVAE is “bilateral” in that it treats users and items symmetrically, making it more apt for two-way or dyadic data. In particular, BiVAE can capture uncertainty on both sides of the dyadic data, which improves its robustness and performance on sparse preference data compared to classical one-sided variational autoencoders.\n",
                "\n",
                "<img alt=\"BiVAE\" width=\"800\" style=\"display: block;margin-left: auto;margin-right: auto;\" src=\"\">"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "**Notation.** The data that we seek to learn from is the user-item preference matrix, of size $U\\times I$, denoted $\\mathbf{R} = (r_{ui})$, where $r_{ui}$ is the interaction, e.g., integer rating, between user $u$ and item $i$. We use the notation $\\mathbf{r}_{u*}$ to refer to the row in $\\mathbf{R}$ corresponding to user $u$. Similarly, $\\mathbf{r}_{*i}$ refers to the $i$th column of $\\mathbf{R}$. The latent variables are the per user and item representations denoted respectively $\\mathbf{\\theta}_{u}$, $\\mathbf{\\beta}_i \\in \\mathbb{R}^{K}$.\n",
                "\n",
                "<img alt=\"Graphical Models\" width=\"600\" style=\"display: block;margin-left: auto;margin-right: auto;\" src=\"\">"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
"**Generative Model.** The figure above depicts the BiVAE generative model, compared to that of VAE, in plate notation. The latent variables are drawn from prior distributions. Without loss of generality, Gaussian priors with diagonal covariance matrices are used. BiVAE further adopts the standard multivariate isotropic Gaussian as the prior over all user/item latent variables, i.e., $p(\\bf{\\theta}_u) = \\mathcal{N}(\\bf{0},\\bf{I})$ and $p(\\bf{\\beta}_i) = \\mathcal{N}(\\bf{0},\\bf{I})$, $\\forall i,u$.\n",
                "\n",
                "Conditional on the latent variables, the observations are drawn from a univariate exponential family,\n",
                "\n",
                "$$\n",
                "p(r_{ui}|\\bf{\\theta}_u,\\bf{\\beta}_i) = \\mathrm{EXPFAM}(r_{ui}; \\eta(\\bf{\\theta}_u; \\bf{\\beta}_i;\\omega)) = h(r_{ui})\\exp\\{\\eta(\\bf{\\theta}_u;\\bf{\\beta}_i;\\omega)r_{ui} - a(\\eta(\\bf{\\theta}_u;\\bf{\\beta}_i;\\omega))\\}\n",
                "$$\n",
                "\n",
"where $h(\\cdot)$, $\\eta(\\cdot)$ and $a(\\cdot)$ denote, respectively, the base measure, natural parameter, and log-normalizer of the exponential family. For simplicity, $r_{ui}$ is assumed to be the sufficient statistic by itself. This form of the exponential family still encompasses many popular univariate distributions, including the Poisson, Bernoulli, Gaussian with unit variance, Gamma with fixed shape parameter, etc. Therefore, the BiVAE framework can accommodate various types of preference data, such as counts, binary responses, continuous values, etc. The conditional likelihood is further parameterized such that\n",
                "\n",
                "$$\n",
                "\\mathbb{E}(r_{ui}|\\bf{\\theta}_u,\\bf{\\beta}_i) = \\frac{d a(\\eta)}{d \\eta} = g_{\\omega}(\\bf{\\theta}_u;\\bf{\\beta}_i)\n",
                "$$\n",
                "\n",
                "where $g_\\omega(\\cdot)$ is some differentiable function (e.g., inner product, neural network, etc.) parameterized by $\\omega$, combining the latent representations to output the mean of the observation $r_{ui}$.\n",
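                "\n",
                "As a concrete example, under the Poisson likelihood used later in this notebook (`LIKELIHOOD = \"pois\"`), the base measure is $h(r_{ui}) = 1/r_{ui}!$, the natural parameter is $\\eta = \\log \\lambda_{ui}$, and the log-normalizer is $a(\\eta) = e^{\\eta}$; the mean parameterization then gives $\\lambda_{ui} = g_{\\omega}(\\bf{\\theta}_u;\\bf{\\beta}_i)$, recovering the familiar form\n",
                "\n",
                "$$\n",
                "p(r_{ui}|\\bf{\\theta}_u,\\bf{\\beta}_i) = \\frac{\\lambda_{ui}^{r_{ui}}\\, e^{-\\lambda_{ui}}}{r_{ui}!}.\n",
                "$$\n",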
                " \n",
"Given some $\\bf{R}$, the goal is to find the values of the parameters $\\omega$ that would most likely have generated the observations, and to infer the posterior over the latent variables, $p(\\bf{\\theta}_{1:U},\\bf{\\beta}_{1:I}|\\bf{R})$. The latter allows us to make predictions about unknown preferences and form recommendations. However, the posterior and likelihood are intractable, so exact inference and learning are infeasible.  Therefore, BiVAE relies on variational Bayes (VB), a popular and efficient approach for dealing with complex probabilistic models."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "**Inference Model.**\n",
                "The starting point of VB is to introduce a tractable inference model $q$, governed by a set of *variational parameters* $\\nu$, which is used as a proxy for the true but intractable posterior. A variational distribution that breaks the coupling between $\\beta$ and $\\theta$ (a main source of intractability in our model) is chosen as:\n",
                "\n",
                "$$\n",
                "q(\\bf{\\theta}_{1:U},\\bf{\\beta}_{1:I}|\\bf{R}) = q(\\bf{\\theta}_{1:U}|\\bf{R})q(\\bf{\\beta}_{1:I}|\\bf{R})\n",
                "$$\n",
                "\n",
                "with \n",
                "\n",
                "$$\n",
                "\\begin{align}\n",
                "q(\\bf{\\theta}_{1:U}|\\bf{R}) &= \\prod_{u} q(\\bf{\\theta}_u|\\bf{R}_{u*})\\nonumber\\\\\n",
                "q(\\bf{\\beta}_{1:I}|\\bf{R})  &= \\prod_{i} q(\\bf{\\beta}_i|\\bf{R}_{*i})\\nonumber\n",
                "\\end{align}\n",
                "$$\n",
                "\n",
                "Without loss of generality, the following forms are adopted:\n",
                "\n",
                "$$\n",
                "\\begin{align}\n",
                "q(\\bf{\\theta}_u|\\bf{R}_{u*}) &= \\mathcal{N}(\\tilde{\\bf{\\mu}}_{\\tilde\\psi}(\\bf{R}_{u*}),\\tilde{\\bf\\sigma}_{\\tilde\\psi}(\\bf{R}_{u*}))\\nonumber\\\\\n",
                "q(\\bf{\\beta}_i|\\bf{R}_{*i}) &= \\mathcal{N}(\\tilde{\\bf\\mu}_{\\tilde\\phi}(\\bf{R}_{*i}),\\tilde{\\bf\\sigma}_{\\tilde\\phi}(\\bf{R}_{*i}))\\nonumber\n",
                "\\end{align}\n",
                "$$\n",
                "\n",
                "where $\\nu=\\{\\tilde\\phi,\\tilde\\psi\\}$, $\\tilde{\\bf\\mu}(\\cdot)$ and $\\tilde{\\bf\\sigma}(\\cdot)$ are vector-valued functions (e.g., multilayer perceptrons) parameterized by $\\tilde\\phi$/$\\tilde\\psi$, outputting respectively the mean and covariance parameters of the variational distributions.\n",
                "\n",
                "With $q$ in place, we can proceed with approximate inference by optimizing the Evidence Lower BOund (ELBO), w.r.t. the model $\\omega$ and variational $\\nu$ parameters, given in our case by,\n",
                "\n",
                "$$\n",
                "\\mathcal{L} = \\sum_{u,i}\\mathbb{E}_{q(\\bf{\\theta}_u|\\bf{R}_{u*})}\\mathbb{E}_{q(\\bf{\\beta}_i|\\bf{R}_{*i})}[\\log p(r_{ui}|\\bf{\\theta}_u,\\bf{\\beta}_i)]\n",
                "- \\sum_{u} \\mathrm{KL}(q(\\bf{\\theta}_u|\\bf{R}_{u*})|| p(\\bf{\\theta}_u)) - \\sum_{i} \\mathrm{KL}(q(\\bf{\\beta}_i|\\bf{R}_{*i})|| p(\\bf{\\beta}_i))\n",
                "$$"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 1.3 Optimization\n",
                "\n",
                "In practice, stochastic optimization is used to fit BiVAE to observations. While the KL terms in the ELBO are available analytically, the expectations over the conditional log-likelihood are intractable, so direct optimization of the ELBO is not possible. To overcome this difficulty, the *reparameterization trick* is used to build an unbiased Monte Carlo estimator of the ELBO:\n",
                "\n",
                "$$\n",
                "\\tilde{\\mathcal{L}} = \\sum_{u,i} \\log p(r_{ui} | \\tilde{\\bf\\theta}_u,\\tilde{\\bf\\beta}_i) - \\sum_{u} \\mathrm{KL}(q(\\bf{\\theta}_u|\\bf{r}_{u*}) ||  p(\\bf{\\theta}_u)) - \\sum_{i} \\mathrm{KL}(q(\\bf{\\beta}_i|\\bf{r}_{*i})|| p(\\bf{\\beta}_i))\n",
                "$$\n",
                "\n",
                "where \n",
                "\n",
                "$$\n",
                "\\begin{align}\n",
                "\\tilde{\\bf\\theta}_u &= \\mathcal{T}(\\bf\\epsilon,\\tilde{\\bf\\psi})= \\tilde{\\bf\\mu}_{\\tilde\\psi}(\\bf{r}_{u*}) + \\tilde{\\bf\\sigma}_{\\tilde\\psi}(\\bf{r}_{u*})\\odot\\bf\\epsilon \\nonumber\\\\\n",
                "\\tilde{\\bf\\beta}_{i} &= \\mathcal{T}(\\bf\\epsilon,\\tilde{\\bf\\phi})= \\tilde{\\bf\\mu}_{\\tilde\\phi}(\\bf{r}_{*i}) + \\tilde{\\bf\\sigma}_{\\tilde\\phi}(\\bf{r}_{*i})\\odot\\bf\\epsilon \\nonumber\n",
                "\\end{align}\n",
                "$$\n",
                "\n",
                "with $\\bf\\epsilon \\sim \\mathcal{N}(\\bf{0},\\bf{I})$.\n",
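                "\n",
                "Since both the variational distributions and the priors are Gaussian, each KL term above has the standard closed form; e.g., for a user $u$ with variational mean $\\tilde{\\bf\\mu}_u$ and diagonal standard deviation $\\tilde{\\bf\\sigma}_u$,\n",
                "\n",
                "$$\n",
                "\\mathrm{KL}(q(\\bf{\\theta}_u|\\bf{r}_{u*})\\,||\\,p(\\bf{\\theta}_u)) = \\frac{1}{2}\\sum_{k=1}^{K}\\left(\\tilde\\sigma_{uk}^2 + \\tilde\\mu_{uk}^2 - 1 - \\log \\tilde\\sigma_{uk}^2\\right).\n",
                "$$\n",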
                "\n",
"Now all the quantities involved are tractable. However, performing unbiased stochastic optimization over the above objective is inconvenient, due to the coupling between $r_{ui}$, $\\bf{r}_{u*}$ and $\\bf{r}_{*i}$. To overcome this difficulty and ease subsampling of observations, the two-way nature of the BiVAE model is exploited to perform alternating optimization in a Gauss-Seidel fashion. Precisely, the parameters are organized into two blocks, consisting of user-related and item-related parameters respectively, and the optimization alternates between the two blocks, updating one while holding the other fixed.  For more details on the optimization procedure, please refer to the original paper [1]."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 2 Cornac implementation of BiVAE\n",
                "\n",
"BiVAE is implemented in the Cornac framework as part of its [model collection](https://github.com/PreferredAI/cornac#models).\n",
                "* Detailed documentation of the BiVAE model in Cornac can be found [here](https://cornac.readthedocs.io/en/latest/models.html#module-cornac.models.bivaecf.recom_bivaecf).\n",
                "* The source code of the BiVAE implementation is available in the [Cornac repository](https://github.com/PreferredAI/cornac/tree/master/cornac/models/bivaecf).\n",
                "* For all experiments related to BiVAE, please refer to [this repository](https://github.com/PreferredAI/bi-vae).\n"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 3 Experiments on MovieLens\n",
                "\n",
                "\n",
                "### 3.1 Load and split data\n",
                "\n",
"To evaluate the performance of item recommendation, we adopt the provided `python_random_split` tool for consistency.  Data is randomly split into training and test sets with a 75/25 ratio.\n",
                "\n",
                "\n",
                "Note that Cornac also covers different [built-in schemes](https://cornac.readthedocs.io/en/latest/eval_methods.html) for model evaluation."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 3,
            "metadata": {},
            "outputs": [
                {
                    "name": "stderr",
                    "output_type": "stream",
                    "text": [
                        "100%|██████████| 4.81k/4.81k [00:01<00:00, 2.42kKB/s]\n"
                    ]
                },
                {
                    "data": {
                        "text/html": [
                            "<div>\n",
                            "<style scoped>\n",
                            "    .dataframe tbody tr th:only-of-type {\n",
                            "        vertical-align: middle;\n",
                            "    }\n",
                            "\n",
                            "    .dataframe tbody tr th {\n",
                            "        vertical-align: top;\n",
                            "    }\n",
                            "\n",
                            "    .dataframe thead th {\n",
                            "        text-align: right;\n",
                            "    }\n",
                            "</style>\n",
                            "<table border=\"1\" class=\"dataframe\">\n",
                            "  <thead>\n",
                            "    <tr style=\"text-align: right;\">\n",
                            "      <th></th>\n",
                            "      <th>userID</th>\n",
                            "      <th>itemID</th>\n",
                            "      <th>rating</th>\n",
                            "    </tr>\n",
                            "  </thead>\n",
                            "  <tbody>\n",
                            "    <tr>\n",
                            "      <th>0</th>\n",
                            "      <td>196</td>\n",
                            "      <td>242</td>\n",
                            "      <td>3.0</td>\n",
                            "    </tr>\n",
                            "    <tr>\n",
                            "      <th>1</th>\n",
                            "      <td>186</td>\n",
                            "      <td>302</td>\n",
                            "      <td>3.0</td>\n",
                            "    </tr>\n",
                            "    <tr>\n",
                            "      <th>2</th>\n",
                            "      <td>22</td>\n",
                            "      <td>377</td>\n",
                            "      <td>1.0</td>\n",
                            "    </tr>\n",
                            "    <tr>\n",
                            "      <th>3</th>\n",
                            "      <td>244</td>\n",
                            "      <td>51</td>\n",
                            "      <td>2.0</td>\n",
                            "    </tr>\n",
                            "    <tr>\n",
                            "      <th>4</th>\n",
                            "      <td>166</td>\n",
                            "      <td>346</td>\n",
                            "      <td>1.0</td>\n",
                            "    </tr>\n",
                            "  </tbody>\n",
                            "</table>\n",
                            "</div>"
                        ],
                        "text/plain": [
                            "   userID  itemID  rating\n",
                            "0     196     242     3.0\n",
                            "1     186     302     3.0\n",
                            "2      22     377     1.0\n",
                            "3     244      51     2.0\n",
                            "4     166     346     1.0"
                        ]
                    },
                    "execution_count": 3,
                    "metadata": {},
                    "output_type": "execute_result"
                }
            ],
            "source": [
                "data = movielens.load_pandas_df(\n",
                "    size=MOVIELENS_DATA_SIZE,\n",
                "    header=[\"userID\", \"itemID\", \"rating\"]\n",
                ")\n",
                "\n",
                "data.head()"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 4,
            "metadata": {},
            "outputs": [],
            "source": [
                "train, test = python_random_split(data, 0.75)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 3.2 Cornac Dataset\n",
                "\n",
"To work with models implemented in Cornac, we need to construct an object of the [Dataset](https://cornac.readthedocs.io/en/latest/data.html#module-cornac.data.dataset) class.\n",
                "\n",
                "The Dataset class in Cornac serves as the main object that the models interact with.  In addition to data transformations, Dataset provides useful iterators for looping through the data, as well as support for different negative sampling techniques."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 5,
            "metadata": {
                "scrolled": true
            },
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "Number of users: 943\n",
                        "Number of items: 1642\n"
                    ]
                }
            ],
            "source": [
                "train_set = cornac.data.Dataset.from_uir(train.itertuples(index=False), seed=SEED)\n",
                "\n",
                "print('Number of users: {}'.format(train_set.num_users))\n",
                "print('Number of items: {}'.format(train_set.num_items))"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 3.3 Train the BiVAE model\n",
                "\n",
                "The BiVAE has a few important parameters that we need to consider:\n",
                "\n",
                "- `k`: dimension of the latent space (i.e. the size of $\\bf{\\theta}_u$  and  $\\bf{\\beta}_i$ ).\n",
                "- `encoder_structure`: dimension(s) of hidden layer(s) of the user and item encoders.\n",
                "- `act_fn`: non-linear activation function used in the encoders.\n",
                "- `likelihood`: choice of the likelihood function being optimized.\n",
                "- `n_epochs`: number of passes through training data.\n",
                "- `batch_size`: size of mini-batches of data during training.\n",
                "- `learning_rate`: step size in the gradient update rules.\n",
                "\n",
                "To train the model, we simply need to call the `fit()` method."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 6,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "application/vnd.jupyter.widget-view+json": {
                            "model_id": "132fcc4256dc43a8bc010c9d535d046c",
                            "version_major": 2,
                            "version_minor": 0
                        },
                        "text/plain": [
                            "  0%|          | 0/500 [00:00<?, ?it/s]"
                        ]
                    },
                    "metadata": {},
                    "output_type": "display_data"
                },
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "Took 69.1472 seconds for training.\n"
                    ]
                }
            ],
            "source": [
                "bivae = cornac.models.BiVAECF(\n",
                "    k=LATENT_DIM,\n",
                "    encoder_structure=ENCODER_DIMS,\n",
                "    act_fn=ACT_FUNC,\n",
                "    likelihood=LIKELIHOOD,\n",
                "    n_epochs=NUM_EPOCHS,\n",
                "    batch_size=BATCH_SIZE,\n",
                "    learning_rate=LEARNING_RATE,\n",
                "    seed=SEED,\n",
                "    use_gpu=torch.cuda.is_available(),\n",
                "    verbose=True\n",
                ")\n",
                "\n",
                "with Timer() as t:\n",
                "    bivae.fit(train_set)\n",
                "print(\"Took {} seconds for training.\".format(t))"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "### 3.4 Prediction and Evaluation\n",
                "\n",
"Now that our model is trained, we can produce ranked lists for recommendation.  Every recommender model in Cornac provides `rate()` and `rank()` methods for predicting an item's rating for a user and for producing a ranked list of items for a user, respectively.  To make use of the current evaluation schemes, we use the `predict()` and `predict_ranking()` functions in `cornac_utils` to produce the predictions.\n",
                "\n",
                "Let's measure the recommendation performance of the model using top-K ranking metrics."
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 7,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "Took 1.7215 seconds for prediction.\n"
                    ]
                }
            ],
            "source": [
                "with Timer() as t:\n",
                "    all_predictions = predict_ranking(bivae, train, usercol='userID', itemcol='itemID', remove_seen=True)\n",
                "print(\"Took {} seconds for prediction.\".format(t))"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 8,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "MAP:\t0.146552\n",
                        "NDCG:\t0.474124\n",
                        "Precision@K:\t0.412527\n",
                        "Recall@K:\t0.225064\n"
                    ]
                }
            ],
            "source": [
                "eval_map = map(test, all_predictions, col_prediction='prediction', k=TOP_K)\n",
                "eval_ndcg = ndcg_at_k(test, all_predictions, col_prediction='prediction', k=TOP_K)\n",
                "eval_precision = precision_at_k(test, all_predictions, col_prediction='prediction', k=TOP_K)\n",
                "eval_recall = recall_at_k(test, all_predictions, col_prediction='prediction', k=TOP_K)\n",
                "\n",
                "print(\"MAP:\\t%f\" % eval_map,\n",
                "      \"NDCG:\\t%f\" % eval_ndcg,\n",
                "      \"Precision@K:\\t%f\" % eval_precision,\n",
                "      \"Recall@K:\\t%f\" % eval_recall, sep='\\n')"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 9,
            "metadata": {},
            "outputs": [
                {
                    "data": {
                        "application/scrapbook.scrap.json+json": {
                            "data": 0.14655205218191958,
                            "encoder": "json",
                            "name": "map",
                            "version": 1
                        }
                    },
                    "metadata": {
                        "scrapbook": {
                            "data": true,
                            "display": false,
                            "name": "map"
                        }
                    },
                    "output_type": "display_data"
                },
                {
                    "data": {
                        "application/scrapbook.scrap.json+json": {
                            "data": 0.4741242642717392,
                            "encoder": "json",
                            "name": "ndcg",
                            "version": 1
                        }
                    },
                    "metadata": {
                        "scrapbook": {
                            "data": true,
                            "display": false,
                            "name": "ndcg"
                        }
                    },
                    "output_type": "display_data"
                },
                {
                    "data": {
                        "application/scrapbook.scrap.json+json": {
                            "data": 0.41252653927813165,
                            "encoder": "json",
                            "name": "precision",
                            "version": 1
                        }
                    },
                    "metadata": {
                        "scrapbook": {
                            "data": true,
                            "display": false,
                            "name": "precision"
                        }
                    },
                    "output_type": "display_data"
                },
                {
                    "data": {
                        "application/scrapbook.scrap.json+json": {
                            "data": 0.22506395891693493,
                            "encoder": "json",
                            "name": "recall",
                            "version": 1
                        }
                    },
                    "metadata": {
                        "scrapbook": {
                            "data": true,
                            "display": false,
                            "name": "recall"
                        }
                    },
                    "output_type": "display_data"
                }
            ],
            "source": [
                "# Record results for tests - ignore this cell\n",
                "store_metadata(\"map\", eval_map)\n",
                "store_metadata(\"ndcg\", eval_ndcg)\n",
                "store_metadata(\"precision\", eval_precision)\n",
                "store_metadata(\"recall\", eval_recall)"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 4 Discussion\n",
                "\n",
"BiVAE is a new variational autoencoder tailored for dyadic data, where observations consist of measurements associated with two sets of objects, e.g., users and items with their corresponding ratings.  The model is symmetric, which makes it easy to extend with auxiliary data on both the user and item sides.  In addition to preference data, the model can be applied to other types of dyadic data, such as document-word matrices, and to other tasks, such as co-clustering.  \n",
                "\n",
"In the paper, there is also a discussion of Constrained Adaptive Priors (CAP), a proposed method for building informative priors that mitigate the well-known posterior collapse problem. We have purposely left that part out so as not to distract from this tutorial.  Nevertheless, it is very interesting and worth a look.  \n",
                "\n",
"[This repository](https://github.com/PreferredAI/bi-vae) provides a more comprehensive set of experiments related to BiVAE."
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## References\n",
                "\n",
                "1. Quoc-Tuan Truong, Salah, Aghiles, and Hady W. Lauw. \"Bilateral Variational Autoencoder for Collaborative Filtering.\" Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 2021. https://dl.acm.org/doi/pdf/10.1145/3437963.3441759\n",
                "2. Salah, Aghiles, Quoc-Tuan Truong, and Hady W. Lauw. \"Cornac: A Comparative Framework for Multimodal Recommender Systems.\" Journal of Machine Learning Research 21.95 (2020): 1-5. https://cornac.preferred.ai\n",
                "3. Liang, Dawen, et al. \"Variational autoencoders for collaborative filtering.\" Proceedings of the 2018 World Wide Web Conference. 2018."
            ]
        }
    ],
    "metadata": {
        "celltoolbar": "Tags",
        "kernelspec": {
            "display_name": "Python (reco_full)",
            "language": "python",
            "name": "reco_full"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "ipython",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.6.11"
        }
    },
    "nbformat": 4,
    "nbformat_minor": 4
}
