{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Bayes by Backprop for Recurrent Neural Networks (RNNs)\n",
    "\n",
    "In this chapter, we apply [Bayes by Backprop](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter18_variational-methods-and-uncertainty/bayes-by-backprop-gluon.ipynb), or \"BBB\" for short, to a more challenging modeling problem, learning [recurrent neural networks](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb) for sequence prediction.\n",
    "\n",
    "As we've seen, Bayes-by-backprop lets us fit expressive models efficiently and lets us represent uncertainty about  our model's parameters. Representing uncertainty not only helps to avoid overfitting, it is an important part of sound decision making.\n",
    "\n",
    "Thankfully, ``BBB`` for RNNs is not much more difficult than in the feed-forward case. It really just requires replacing the feed-forward neural network with a recurrent one, and changing the log-likelihood to something appropriate for sequence modeling.\n",
    "\n",
    "In what follows, we reimplement the sequence model from [''Bayesian Recurrent Neural Networks'', by Fortunato et al.](https://arxiv.org/pdf/1704.02798.pdf) and rerun the authors' experiments on the Penn Treebank dataset, which you may recall using in chapter, [recurrent neural networks](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb).\n",
    "\n",
    "If you have not looked at the chapters [Bayes by Backprop](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter18_variational-methods-and-uncertainty/bayes-by-backprop-gluon.ipynb) and [Recurrent Neural Networks](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb), it is worth doing so since we reuse a lot that code."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import Packages and Initialize Configuration and Hyperparameters\n",
    "\n",
    "First we make some necessary package imports, perform basic configuration and set some model hyperparameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import os\n",
    "import time\n",
    "import numpy as np\n",
    "import mxnet as mx\n",
    "from mxnet import gluon, autograd\n",
    "from mxnet.gluon import nn, rnn"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "context = mx.gpu(0)\n",
    "args_data = '../data/nlp/ptb.'\n",
    "args_model = 'lstm'\n",
    "args_emsize = 100\n",
    "args_nhid = 100\n",
    "args_nlayers = 2\n",
    "args_lr = 10.0\n",
    "args_clip = 0.2\n",
    "args_epochs = 2\n",
    "args_batch_size = 32\n",
    "args_bptt = 5\n",
    "args_dropout = 0.2\n",
    "args_tied = True\n",
    "args_cuda = 'store_true'\n",
    "args_log_interval = 500\n",
    "args_save = 'model.param'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define Classes for Loading the Language Data\n",
    "\n",
    "Now let's load the Penn Treebank data as we did in [chapter 5, recurrent neural networks](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Dictionary(object):\n",
    "    def __init__(self):\n",
    "        self.word2idx = {}\n",
    "        self.idx2word = []\n",
    "\n",
    "    def add_word(self, word):\n",
    "        if word not in self.word2idx:\n",
    "            self.idx2word.append(word)\n",
    "            self.word2idx[word] = len(self.idx2word) - 1\n",
    "        return self.word2idx[word]\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.idx2word)\n",
    "\n",
    "\n",
    "class Corpus(object):\n",
    "    def __init__(self, path):\n",
    "        self.dictionary = Dictionary()\n",
    "        self.train = self.tokenize(path + 'train.txt')\n",
    "        self.valid = self.tokenize(path + 'valid.txt')\n",
    "        self.test = self.tokenize(path + 'test.txt')\n",
    "\n",
    "    def tokenize(self, path):\n",
    "        \"\"\"Tokenizes a text file.\"\"\"\n",
    "        assert os.path.exists(path)\n",
    "        # Add words to the dictionary\n",
    "        with open(path, 'r') as f:\n",
    "            tokens = 0\n",
    "            for line in f:\n",
    "                words = line.split() + ['<eos>']\n",
    "                tokens += len(words)\n",
    "                for word in words:\n",
    "                    self.dictionary.add_word(word)\n",
    "\n",
    "        # Tokenize file content\n",
    "        with open(path, 'r') as f:\n",
    "            ids = np.zeros((tokens,), dtype='int32')\n",
    "            token = 0\n",
    "            for line in f:\n",
    "                words = line.split() + ['<eos>']\n",
    "                for word in words:\n",
    "                    ids[token] = self.dictionary.word2idx[word]\n",
    "                    token += 1\n",
    "\n",
    "        return mx.nd.array(ids, dtype='int32')\n",
    "\n",
    "\n",
    "def batchify(data, batch_size):\n",
    "    \"\"\"Reshape data into (num_example, batch_size)\"\"\"\n",
    "    nbatch = data.shape[0] // batch_size\n",
    "    data = data[:nbatch * batch_size]\n",
    "    data = data.reshape((batch_size, nbatch)).T\n",
    "    return data\n",
    "\n",
    "def get_batch(source, i):\n",
    "    seq_len = min(args_bptt, source.shape[0] - 1 - i)\n",
    "    data = source[i : i + seq_len]\n",
    "    target = source[i + 1 : i + 1 + seq_len]\n",
    "    return data, target.reshape((-1,))\n"
   ]
  },
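  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the reshaping concrete, here is a small NumPy-only sketch of the same logic on a toy corpus of ten token ids. NumPy stands in for ``mx.nd`` so the snippet is self-contained, and ``batchify_np`` and ``get_batch_np`` are illustrative names, not functions used elsewhere in this chapter.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def batchify_np(data, batch_size):\n",
    "    # Trim leftover tokens, then arrange the corpus so that column b\n",
    "    # holds the b-th contiguous slice of the data.\n",
    "    nbatch = data.shape[0] // batch_size\n",
    "    data = data[:nbatch * batch_size]\n",
    "    return data.reshape((batch_size, nbatch)).T\n",
    "\n",
    "def get_batch_np(source, i, bptt):\n",
    "    # Inputs are rows i..i+seq_len; targets are the same rows shifted by one step.\n",
    "    seq_len = min(bptt, source.shape[0] - 1 - i)\n",
    "    return source[i:i + seq_len], source[i + 1:i + 1 + seq_len].reshape((-1,))\n",
    "\n",
    "tokens = np.arange(10)                 # a toy corpus of 10 token ids\n",
    "batched = batchify_np(tokens, 2)\n",
    "print(batched.shape)                   # (5, 2): 5 time steps, 2 sequences\n",
    "x, y = get_batch_np(batched, 0, 3)\n",
    "print(x.shape, y.shape)                # (3, 2) inputs, (6,) flattened targets\n",
    "```"
   ]
  },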
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "corpus = Corpus(args_data)\n",
    "ntokens = len(corpus.dictionary)\n",
    "train_data = batchify(corpus.train, args_batch_size).as_in_context(context)\n",
    "val_data = batchify(corpus.valid, args_batch_size).as_in_context(context)\n",
    "test_data = batchify(corpus.test, args_batch_size).as_in_context(context)\n",
    "num_batches = int(np.ceil( (train_data.shape[0] - 1)/args_bptt) )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Define the Recurrent Neural Network Model\n",
    "\n",
    "Now let's resurrect our recurrent neural network from [chapter 5, recurrent neural networks](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb).\n",
    "\n",
    "Here we've added a convenience method to the RNN model class called ``set_params_to``. This method is used by Bayes-by-backprop to set the RNN parameters to ones sampled from our variational posterior (details below).\n",
    "\n",
    "We've also defined an auxiliary function, ``detach``, that detaches a hidden state from the computation graph. By detaching the hidden state after each minibatch, we relieve MXNet of trying to back-propagate the gradient across minibatches, and thus, indefinitely far back in time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RNNModel(gluon.Block):\n",
    "    \"\"\"A model with an encoder, recurrent layer, and a decoder.\"\"\"\n",
    "\n",
    "    def __init__(self, mode, vocab_size, num_embed, num_hidden,\n",
    "                 num_layers, dropout=0.5, tie_weights=False, **kwargs):\n",
    "        super(RNNModel, self).__init__(**kwargs)\n",
    "        with self.name_scope():\n",
    "            self.drop = nn.Dropout(dropout)\n",
    "            self.encoder = nn.Embedding(vocab_size, num_embed,\n",
    "                                        weight_initializer = mx.init.Uniform(0.1))\n",
    "            if mode == 'rnn_relu':\n",
    "                self.rnn = rnn.RNN(num_hidden, num_layers, activation='relu',\n",
    "                                   dropout=dropout, input_size=num_embed)\n",
    "            elif mode == 'rnn_tanh':\n",
    "                self.rnn = rnn.RNN(num_hidden, num_layers, dropout=dropout,\n",
    "                                   input_size=num_embed)\n",
    "            elif mode == 'lstm':\n",
    "                self.rnn = rnn.LSTM(num_hidden, num_layers, dropout=dropout,\n",
    "                                    input_size=num_embed)\n",
    "            elif mode == 'gru':\n",
    "                self.rnn = rnn.GRU(num_hidden, num_layers, dropout=dropout,\n",
    "                                   input_size=num_embed)\n",
    "            else:\n",
    "                raise ValueError(\"Invalid mode %s. Options are rnn_relu, \"\n",
    "                                 \"rnn_tanh, lstm, and gru\"%mode)\n",
    "            if tie_weights:\n",
    "                self.decoder = nn.Dense(vocab_size, in_units = num_hidden,\n",
    "                                        params = self.encoder.params)\n",
    "            else:\n",
    "                self.decoder = nn.Dense(vocab_size, in_units = num_hidden)\n",
    "            self.num_hidden = num_hidden\n",
    "\n",
    "    def forward(self, inputs, hidden):\n",
    "        emb = self.drop(self.encoder(inputs))\n",
    "        output, hidden = self.rnn(emb, hidden)\n",
    "        output = self.drop(output)\n",
    "        decoded = self.decoder(output.reshape((-1, self.num_hidden)))\n",
    "        return decoded, hidden\n",
    "\n",
    "    def begin_state(self, *args, **kwargs):\n",
    "        return self.rnn.begin_state(*args, **kwargs)\n",
    "\n",
    "    def set_params_to(self, new_values):\n",
    "        for model_param, new_value in zip(self.collect_params().values(), new_values):\n",
    "            model_param_ctx = model_param.list_ctx()[0]\n",
    "            model_param._data[ model_param_ctx ] = new_value\n",
    "        return\n",
    "\n",
    "\n",
    "\n",
    "def detach(hidden):\n",
    "    if isinstance(hidden, (tuple, list)):\n",
    "        hidden = [i.detach() for i in hidden]\n",
    "    else:\n",
    "        hidden = hidden.detach()\n",
    "    return hidden"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Initialize a Baseline RNN\n",
    "\n",
    "For comparison purposes, let's initialize our RNN from [chapter 5](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb) so we can verify that our \"BBB RNN\" performs just as well. Of course we also need to train and evaluate this baseline model. The ``train_baseline`` and ``evaluate`` routines, also from chapter 5, do this."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "baseline_model = RNNModel(args_model, ntokens, args_emsize, args_nhid, args_nlayers, args_dropout, args_tied)\n",
    "baseline_model.collect_params().initialize(mx.init.Xavier(), ctx=context)\n",
    "\n",
    "trainer = gluon.Trainer(\n",
    "    baseline_model.collect_params(), 'sgd',\n",
    "    {'learning_rate': args_lr, 'momentum': 0, 'wd': 0})\n",
    "\n",
    "smce_loss = gluon.loss.SoftmaxCrossEntropyLoss()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_baseline(model):\n",
    "    global args_lr\n",
    "    best_val = float(\"Inf\")\n",
    "    for epoch in range(args_epochs):\n",
    "        total_L = 0.0\n",
    "        start_time = time.time()\n",
    "        hidden = model.begin_state(func = mx.nd.zeros, batch_size = args_batch_size, ctx = context)\n",
    "        for ibatch, i in enumerate(range(0, train_data.shape[0] - 1, args_bptt)):\n",
    "            data, target = get_batch(train_data, i)\n",
    "            hidden = detach(hidden)\n",
    "            with autograd.record():\n",
    "                output, hidden = model(data, hidden)\n",
    "                L = smce_loss(output, target)\n",
    "                L.backward()\n",
    "\n",
    "            grads = [i.grad(context) for i in model.collect_params().values()]\n",
    "            # Here gradient is for the whole batch.\n",
    "            # So we multiply max_norm by batch_size and bptt size to balance it.\n",
    "            gluon.utils.clip_global_norm(grads, args_clip * args_bptt * args_batch_size)\n",
    "\n",
    "            trainer.step(args_batch_size * args_bptt)\n",
    "            total_L += mx.nd.sum(L).asscalar()\n",
    "\n",
    "            if ibatch % args_log_interval == 0 and ibatch > 0:\n",
    "                cur_L = total_L / args_bptt / args_batch_size / args_log_interval\n",
    "                print('[Epoch %d Batch %d] loss %.2f, perplexity %.2f' % (\n",
    "                    epoch + 1, ibatch, cur_L, math.exp(cur_L)))\n",
    "                total_L = 0.0\n",
    "\n",
    "        val_L = evaluate(val_data, model)\n",
    "\n",
    "        print('[Epoch %d] time cost %.2fs, validation loss %.2f, validation perplexity %.2f' % (\n",
    "            epoch + 1, time.time() - start_time, val_L, math.exp(val_L)))\n",
    "\n",
    "        if val_L < best_val:\n",
    "            best_val = val_L\n",
    "            test_L = evaluate(test_data, model)\n",
    "            model.save_parameters(args_save)\n",
    "            print('test loss %.2f, test perplexity %.2f' % (test_L, math.exp(test_L)))\n",
    "        else:\n",
    "            args_lr = args_lr * 0.25\n",
    "            trainer._init_optimizer('sgd',\n",
    "                                    {'learning_rate': args_lr,\n",
    "                                     'momentum': 0,\n",
    "                                     'wd': 0})\n",
    "            model.load_parameters(args_save, context)\n",
    "    return\n",
    "\n",
    "\n",
    "def evaluate(data_source, model):\n",
    "    total_L = 0.0\n",
    "    ntotal = 0\n",
    "    hidden = model.begin_state(func = mx.nd.zeros, batch_size = args_batch_size, ctx=context)\n",
    "    for i in range(0, data_source.shape[0] - 1, args_bptt):\n",
    "        data, target = get_batch(data_source, i)\n",
    "        output, hidden = model(data, hidden)\n",
    "        L = smce_loss(output, target)\n",
    "        total_L += mx.nd.sum(L).asscalar()\n",
    "        ntotal += L.size\n",
    "    return total_L / ntotal"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train and Evaluate our Baseline RNN\n",
    "\n",
    "Okay, let's refresh our memory on how well the RNN from [chapter 5](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter05_recurrent-neural-networks/rnns-gluon.ipynb) performs on the Penn Treebank data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Epoch 1 Batch 500] loss 6.73, perplexity 834.37\n",
      "[Epoch 1 Batch 1000] loss 6.14, perplexity 462.27\n",
      "[Epoch 1 Batch 1500] loss 5.89, perplexity 360.56\n",
      "[Epoch 1 Batch 2000] loss 5.81, perplexity 333.35\n",
      "[Epoch 1 Batch 2500] loss 5.68, perplexity 292.76\n",
      "[Epoch 1 Batch 3000] loss 5.56, perplexity 260.22\n",
      "[Epoch 1 Batch 3500] loss 5.56, perplexity 260.70\n",
      "[Epoch 1 Batch 4000] loss 5.43, perplexity 227.89\n",
      "[Epoch 1 Batch 4500] loss 5.41, perplexity 222.73\n",
      "[Epoch 1 Batch 5000] loss 5.40, perplexity 222.27\n",
      "[Epoch 1 Batch 5500] loss 5.41, perplexity 223.43\n",
      "[Epoch 1] time cost 56.27s, validation loss 5.30, validation perplexity 199.93\n",
      "test loss 5.27, test perplexity 194.90\n",
      "[Epoch 2 Batch 500] loss 5.38, perplexity 217.45\n",
      "[Epoch 2 Batch 1000] loss 5.31, perplexity 202.51\n",
      "[Epoch 2 Batch 1500] loss 5.27, perplexity 194.25\n",
      "[Epoch 2 Batch 2000] loss 5.31, perplexity 201.45\n",
      "[Epoch 2 Batch 2500] loss 5.26, perplexity 192.06\n",
      "[Epoch 2 Batch 3000] loss 5.18, perplexity 177.65\n",
      "[Epoch 2 Batch 3500] loss 5.22, perplexity 185.72\n",
      "[Epoch 2 Batch 4000] loss 5.13, perplexity 169.52\n",
      "[Epoch 2 Batch 4500] loss 5.12, perplexity 167.37\n",
      "[Epoch 2 Batch 5000] loss 5.16, perplexity 174.14\n",
      "[Epoch 2 Batch 5500] loss 5.18, perplexity 178.52\n",
      "[Epoch 2] time cost 54.73s, validation loss 5.12, validation perplexity 168.12\n",
      "test loss 5.09, test perplexity 162.15\n",
      "Best test loss 5.09, test perplexity 162.15\n"
     ]
    }
   ],
   "source": [
    "train_baseline(baseline_model)\n",
    "baseline_model.load_parameters(args_save, context)\n",
    "test_L = evaluate(test_data, baseline_model)\n",
    "print('Best test loss %.2f, test perplexity %.2f'%(test_L, math.exp(test_L)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Bayes-by-Backprop for RNNs\n",
    "\n",
    "With our baseline RNN trained and evaluated, we can now move on to Bayes-by-backprop for RNNs.\n",
    "\n",
    "Being good Bayesians, the first thing we should do is define a prior probability distribution over the parameters of our model.\n",
    "\n",
    "As in [''Bayesian Recurrent Neural Networks'', by Fortunato et al.](https://arxiv.org/pdf/1704.02798.pdf) and [chapter 18](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter18_variational-methods-and-uncertainty/bayes-by-backprop-gluon.ipynb), we define a \"scale mixture\" prior over the parameters to be a mixture of two Gaussians with different scales, or variances.\n",
    "\n",
    "\\begin{equation*}\n",
    "\\text{Prior}(w_i) = \\prod_i \\bigg ( \\alpha \\mathcal{N}(w_i\\ |\\ 0,\\sigma_1^2) + (1 - \\alpha) \\mathcal{N}(w_i\\ |\\ 0,\\sigma_2^2)\\bigg )\n",
    "\\end{equation*}\n",
    "\n",
    "The first Gaussian has a small scale, preferring parameters which are close to zero. The second has a larger scale, allowing parameter values to stray from zero. By making the prior be a mixture of these two scales, we can induce models where many parameters are close to zero, but some are far from zero. In other words, this prior prefers sparse models, models in which many parameters are effectively zero.\n",
    "\n",
    "The amount of sparsity is determined by the hyperparameter $\\alpha \\in [0,1]$ which controls how much emphasis is placed on each Gaussian in the prior. Of course, the scale parameters $\\sigma_1$ and $\\sigma_2$ also control the sparsity since smaller $\\sigma$'s induce smaller parameter values."
   ]
  },
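  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numeric sanity check on this formula, the NumPy sketch below (``gaussian_pdf`` and ``mixture_log_prob`` are illustrative names) evaluates the scale-mixture log-density and confirms that parameters near zero receive higher prior probability than parameters far from zero:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def gaussian_pdf(x, mu, sigma):\n",
    "    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)\n",
    "\n",
    "def mixture_log_prob(w, alpha=0.5, sigma1=0.1, sigma2=1.0):\n",
    "    # log Prior(w) = sum_i log(alpha * N(w_i|0, s1^2) + (1 - alpha) * N(w_i|0, s2^2))\n",
    "    p = alpha * gaussian_pdf(w, 0.0, sigma1) + (1 - alpha) * gaussian_pdf(w, 0.0, sigma2)\n",
    "    return np.sum(np.log(p))\n",
    "\n",
    "w_near_zero = np.array([0.01, -0.02])  # favored by the small-scale component\n",
    "w_far = np.array([1.5, -2.0])          # carried by the large-scale component\n",
    "print(mixture_log_prob(w_near_zero) > mixture_log_prob(w_far))  # True\n",
    "```"
   ]
  },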
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ScaleMixturePrior(object):\n",
    "\n",
    "    def __init__(self, alpha, sigma1, sigma2):\n",
    "        self.alpha = mx.nd.array([alpha], ctx=context)\n",
    "        self.one_minus_alpha = mx.nd.array([1 - alpha], ctx=context)\n",
    "        self.zero = mx.nd.array([0.0], ctx=context)\n",
    "        self.sigma1 = mx.nd.array([sigma1], ctx=context)\n",
    "        self.sigma2 = mx.nd.array([sigma2], ctx=context)\n",
    "        return\n",
    "\n",
    "    def log_prob(self, model_params):\n",
    "        total_log_prob = None\n",
    "        for i, model_param in enumerate(model_params):\n",
    "            p1 = gaussian_prob(model_param, self.zero, self.sigma1)\n",
    "            p2 = gaussian_prob(model_param, self.zero, self.sigma2)\n",
    "            log_prob = mx.nd.sum(mx.nd.log(self.alpha * p1 + self.one_minus_alpha * p2))\n",
    "            if i == 0: total_log_prob = log_prob\n",
    "            else: total_log_prob = total_log_prob + log_prob\n",
    "        return total_log_prob\n",
    "\n",
    "\n",
    "# Define some auxiliary functions\n",
    "def log_gaussian_prob(x, mu, sigma):\n",
    "    return - mx.nd.log(sigma) - (x - mu) ** 2 / (2 * sigma ** 2)\n",
    "\n",
    "def gaussian_prob(x, mu, sigma):\n",
    "    scaling = 1.0 / mx.nd.sqrt(2.0 * np.pi * (sigma ** 2))\n",
    "    bell = mx.nd.exp(-(x - mu)**2 / (2.0 * sigma ** 2))\n",
    "    return scaling * bell"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define the Variational Posterior\n",
    "\n",
    "What comes after the prior? Why, the posterior of course!\n",
    "\n",
    "In this case, since we are doing variational Bayes, we will define a _variational posterior_. A variational posterior is just a parametric distribution that we choose which we will fit to the actual the posterior distribution during learning. The variational posterior is itself parameterized with a set of parameters, aptly named the \"variational parameters\". Variational inference consists of finding the set of variational parameters that best match the variational posterior to the actual posterior distribution of the model parameters.\n",
    "\n",
    "As in [chapter 18](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter18_variational-methods-and-uncertainty/bayes-by-backprop-gluon.ipynb), we define the variational posterior so that the posterior for each parameter in the model is Gaussian with its own mean and variance. Given our scale mixture prior and the data $\\mathcal{D}$, we define the variational posterior as:\n",
    "\n",
    "\\begin{equation*}\n",
    "P(w_i\\ |\\ \\mathcal{D}, \\alpha, \\sigma_1, \\sigma_2) \n",
    "= \\mathcal{N}\\left(w_i\\ \\big|\\ \\mu^{\\text{var}}_i, \\left(\\sigma^{\\text{var}}_i\\right)^2\\right)\n",
    "\\end{equation*}\n",
    "\n",
    "where $\\mu^{\\text{var}}_i$ and $\\left(\\sigma^{\\text{var}}_i\\right)^2$ are the variational parameters determining the variational posterior for model parameter, $w_i$.\n",
    "\n",
    "As in [chapter 18](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter18_variational-methods-and-uncertainty/bayes-by-backprop-gluon.ipynb), we avoid the need for positivity constraints on the variational scale parameters by reparameterizing $\\sigma^{\\text{var}}_i$ as $\\rho^{\\text{var}}_i$ such that\n",
    "\n",
    "\\begin{equation*}\n",
    "\\sigma^{\\text{var}}_i = \\log(1 + \\exp(\\rho^{\\text{var}}_i))\n",
    "\\end{equation*}\n",
    "\n",
    "You might recognize $f(x) = \\log(1 + \\exp(x))$ as the \"softplus\" function.\n",
    "\n",
    "### The Variational Posterior Class\n",
    "\n",
    "Like our ``ScaleMixturePrior`` class, the ``VariationalPosterior`` class needs a method for computing the log-probability of the model parameters. But in order to run Bayes-by-backprop, it also needs a method for sampling a set of model parameters from the variational posterior distribution. We've implemented this method and named it ``sample_model_params``."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "class VariationalPosterior(object):\n",
    "\n",
    "    def __init__(self, model, var_mu_init_scale, var_sigma_init_scale):\n",
    "        self.var_mus = []\n",
    "        self.var_rhos = []\n",
    "        self.raw_var_mus = []\n",
    "        self.raw_var_rhos = []\n",
    "        var_rho_init_scale = inv_softplus(var_sigma_init_scale)\n",
    "\n",
    "        for i, model_param in enumerate(model.collect_params().values()):\n",
    "\n",
    "            var_mu = gluon.Parameter(\n",
    "                'var_mu_{}'.format(i), shape=model_param.shape,\n",
    "                init=mx.init.Normal(var_mu_init_scale))\n",
    "            var_mu.initialize(ctx=context)\n",
    "            self.var_mus.append(var_mu)\n",
    "            self.raw_var_mus.append(var_mu.data(context))\n",
    "\n",
    "            var_rho = gluon.Parameter(\n",
    "                'var_rho_{}'.format(i), shape=model_param.shape,\n",
    "                init=mx.init.Constant(var_rho_init_scale))\n",
    "            var_rho.initialize(ctx=context)\n",
    "            self.var_rhos.append(var_rho)\n",
    "            self.raw_var_rhos.append(var_rho.data(context))\n",
    "\n",
    "        self.var_params = self.var_mus + self.var_rhos\n",
    "        return\n",
    "\n",
    "    def log_prob(self, model_params):\n",
    "        log_probs = [\n",
    "            mx.nd.sum(log_gaussian_prob(model_param, raw_var_mu, softplus(raw_var_rho)))\n",
    "            for (model_param, raw_var_mu, raw_var_rho)\n",
    "            in zip(model_params, self.raw_var_mus, self.raw_var_rhos)\n",
    "        ]\n",
    "        total_log_prob = log_probs[0]\n",
    "        for log_prob in log_probs[1:]:\n",
    "            total_log_prob = total_log_prob + log_prob\n",
    "        return total_log_prob\n",
    "\n",
    "    def sample_model_params(self):\n",
    "        model_params = []\n",
    "        for raw_var_mu, raw_var_rho in zip(self.raw_var_mus, self.raw_var_rhos):\n",
    "            epsilon = mx.nd.random_normal(shape=raw_var_mu.shape, loc=0., scale=1.0, ctx=context)\n",
    "            var_sigma = softplus(raw_var_rho)\n",
    "            model_param = raw_var_mu + var_sigma * epsilon\n",
    "            model_params.append(model_param)\n",
    "        return model_params\n",
    "\n",
    "    def num_params(self):\n",
    "        return sum([\n",
    "            2 * np.prod(param.shape)\n",
    "            for param in self.var_mus\n",
    "        ])\n",
    "\n",
    "\n",
    "# Define some auxiliary functions\n",
    "def softplus(x):\n",
    "    return mx.nd.log(1. + mx.nd.exp(x))\n",
    "\n",
    "def inv_softplus(x):\n",
    "    if x <= 0: raise ValueError(\"x must be > 0: {}\".format(x))\n",
    "    return np.log(np.exp(x) - 1.0)"
   ]
  },
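  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The softplus reparameterization is easy to check numerically. This NumPy sketch mirrors the ``softplus``/``inv_softplus`` pair above, verifies the round trip, and shows that drawing ``w = mu + softplus(rho) * epsilon`` (as ``sample_model_params`` does) yields samples with roughly the intended mean and standard deviation:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softplus(x):\n",
    "    return np.log(1.0 + np.exp(x))\n",
    "\n",
    "def inv_softplus(x):\n",
    "    return np.log(np.exp(x) - 1.0)\n",
    "\n",
    "# Round trip: sigma -> rho -> sigma\n",
    "sigma = 0.05\n",
    "rho = inv_softplus(sigma)\n",
    "print(abs(softplus(rho) - sigma) < 1e-12)  # True\n",
    "\n",
    "# Reparameterized sampling: w = mu + softplus(rho) * eps, with eps ~ N(0, 1)\n",
    "rng = np.random.default_rng(0)\n",
    "mu = 0.3\n",
    "w = mu + softplus(rho) * rng.standard_normal(200000)\n",
    "print(abs(w.mean() - mu) < 1e-2, abs(w.std() - sigma) < 1e-2)  # True True\n",
    "```"
   ]
  },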
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Implementing the Bayes-by-Backprop Loss\n",
    "\n",
    "We're almost done setting up the Bayes-by-backprop infrastructure. The final piece is approximating the variational loss, which is defined as the expected negative log-likelihood of the data (under the variational posterior) plus the KL-divergence between the variational posterior and the prior over the model parameters.\n",
    "\n",
    "Denoting the set of variational parameters $\\theta = \\{(\\mu_{\\text{var}}^{(i)}, \\sigma_{\\text{var}}^{(i)}) \\}$, we write the variational loss as\n",
    "\n",
    "\\begin{equation*}\n",
    "\\begin{split}\n",
    "\\text{loss}_{\\text{var}}(\\theta) = \n",
    "  \\mathbb{E}_{q(\\mathbf{w}\\ |\\ \\mathbf{\\theta})}[- \\log P(\\mathcal{D}\\ |\\ \\mathbf{w})] +\n",
    "  \\text{KL}[q(\\mathbf{w}\\ |\\ \\mathbf{\\theta})\\ ||\\ P(\\mathbf{w})] .\n",
    "\\end{split}\n",
    "\\end{equation*}\n",
    "\n",
    "Notice that computing this loss involves an integral over $\\mathbf{w}$. In Bayes-by-backprop, we approximate this integral with a Monte Carlo estimate obtained by drawing samples of the model parameters from the variational posterior and approximating the loss on these samples.\n",
    "\n",
    "Specifically, let $\\{ \\mathbf{w}^{(1)}, \\ldots, \\mathbf{w}^{(M)} \\}$ be a sample of model parameters drawn from $q(\\mathbf{w}\\ |\\ \\theta)$. Then we can approximate the variational loss with the Monte Carlo estimate\n",
    "\n",
    "\\begin{equation*}\n",
    "\\text{loss}_{\\text{mc}}(\\theta\\ ;\\ \\mathbf{w}^{(1)}, \\ldots, \\mathbf{w}^{(M)} ) =\n",
    "\\frac{1}{M} \\sum_{m=1}^M \\left(\n",
    " -\\log P(\\mathcal{D}\\ |\\ \\mathbf{w}^{(m)}) \n",
    " +\\log q(\\mathbf{w}^{(m)}\\ |\\ \\theta)\n",
    " -\\log \\text{Prior}(\\mathbf{w}^{(m)})\n",
    "\\right).\n",
    "\\end{equation*}\n",
    "\n",
    "Of course, this requires evaluating the negative log-likelihood on the full data, $\\mathcal{D}$. We can make a further approximation by merely evaluating the negative log-likelihood on a randomly sampled mini-batch of data, $\\mathcal{D}^{(n)}$, at each iteration. Assuming that $N$ minibatches constitute a full pass over the entire data set, the Bayes-by-backprop loss function we seek to minimize is\n",
    "\n",
    "\\begin{equation*}\n",
    "\\text{loss}_{\\text{bbb}}(\\theta\\ ;\\ \\mathcal{D}^{(n)}, \\mathbf{w}^{(1)}, \\ldots, \\mathbf{w}^{(M)} ) =\n",
    "\\frac{1}{M} \\sum_{m=1}^M \\left(\n",
    " -\\log P(\\mathcal{D}^{(n)}\\ |\\ \\mathbf{w}^{(m)}) \n",
    " +\\frac{1}{N} \\left( \\log q(\\mathbf{w}^{(m)}\\ |\\ \\theta)\n",
    " -\\log \\text{Prior}(\\mathbf{w}^{(m)}) \\right)\n",
    "\\right)\n",
    "\\end{equation*}\n",
    "\n",
    "where we scale the approximation to the KL term by $1/N$ so that it has the right magnitude after we sum over the $N$ minibatches in a full pass over the data. Also, in practice, we set $M = 1$ so the outer sum from $m = 1 \\ldots M$ disappears.\n",
    "\n",
    "The ``forward`` method of the ``BBB_Loss`` class implements the $\\text{loss}_{\\text{bbb}}$ function.\n"
   ]
  },
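  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before looking at the implementation, we can check that this Monte Carlo recipe behaves as advertised on a case with a known answer. The NumPy sketch below estimates $\text{KL}[q\ ||\ p]$ between two univariate Gaussians by averaging $\log q(w) - \log p(w)$ over samples from $q$ and compares it with the closed form. (This is purely illustrative; the training loop below uses a single sample per minibatch.)\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def log_normal(x, mu, sigma):\n",
    "    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)\n",
    "\n",
    "# q = N(0.5, 0.3^2), p = N(0, 1)\n",
    "mu_q, sigma_q, mu_p, sigma_p = 0.5, 0.3, 0.0, 1.0\n",
    "rng = np.random.default_rng(0)\n",
    "w = mu_q + sigma_q * rng.standard_normal(100000)\n",
    "\n",
    "# Monte Carlo estimate: average log q(w) - log p(w) over samples from q\n",
    "kl_mc = np.mean(log_normal(w, mu_q, sigma_q) - log_normal(w, mu_p, sigma_p))\n",
    "\n",
    "# Closed form for the KL divergence between two Gaussians\n",
    "kl_exact = (np.log(sigma_p / sigma_q)\n",
    "            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2) - 0.5)\n",
    "\n",
    "print(abs(kl_mc - kl_exact) < 0.02)  # True\n",
    "```"
   ]
  },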
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "class BBB_Loss(gluon.loss.Loss):\n",
    "\n",
    "    def __init__(self, prior, var_posterior, log_likelihood, num_batches, weight=None, batch_axis=0, **kwargs):\n",
    "        super(BBB_Loss, self).__init__(weight, batch_axis, **kwargs)\n",
    "        self.prior = prior\n",
    "        self.var_posterior = var_posterior\n",
    "        self.log_likelihood = log_likelihood\n",
    "        self.num_batches = num_batches\n",
    "        return\n",
    "    \n",
    "    def forward(self, yhat, y, sampled_params, sample_weight=None):\n",
    "        neg_log_likelihood = mx.nd.sum(self.log_likelihood(yhat, y))\n",
    "        prior_log_prob = mx.nd.sum(self.prior.log_prob(sampled_params))\n",
    "        var_post_log_prob = mx.nd.sum(self.var_posterior.log_prob(sampled_params))\n",
    "        kl_loss = var_post_log_prob - prior_log_prob\n",
    "        var_loss = neg_log_likelihood + kl_loss / self.num_batches\n",
    "        return var_loss, neg_log_likelihood"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bayes-by-Backprop Training for Recurrent Neural Nets\n",
    "\n",
    "Okay, now that we've defined the classes needed for Bayes-by-backprop, let's write our BBB training and evaluation routine. It's much the same as what we used for the baseline model. However, the ``autograd.record`` block and evaluation code have some differences.\n",
    "\n",
    "Here, since we are doing BBB, we need to sample the model parameters from our variational posterior and set the model's parameters to these sampled parameters. Then we can run the training forward pass with this \"sampled model\" and update the variational parameters accordingly to mininize the BBB-loss.\n",
    "\n",
    "Additionally, at evaluation time, rather than sampling a set of model parameters like we do for training, we set the model's parameters to the variational $\\mu$'s, since these represent typical model parameters.\n",
    "\n",
    "One more final detail. We need to be careful about the size of our gradient step. Since the BBB-loss is a function of both the training data and the variational parameters, the effective sample size for each minibatch is proportional to the number of training instances plus the number of variational parameters. We therefore set the variable ``effective_sample_size`` to this quantity and use it to control the step size and gradient norm accordingly.\n"
   ]
  },
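  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, here is that arithmetic with hypothetical numbers (the real code computes the parameter count via ``var_posterior.num_params()``): with ``bptt = 5`` and ``batch_size = 32``, each minibatch carries 160 token predictions plus its share of the KL term, spread over the minibatches in an epoch.\n",
    "\n",
    "```python\n",
    "bptt, batch_size = 5, 32\n",
    "num_var_params = 2_000_000  # hypothetical count of variational parameters\n",
    "num_batches = 5_800         # hypothetical minibatches per epoch\n",
    "\n",
    "tokens_per_batch = bptt * batch_size\n",
    "kl_share = num_var_params / num_batches   # KL contribution per minibatch\n",
    "effective_batch_size = tokens_per_batch + kl_share\n",
    "print(tokens_per_batch, round(kl_share), round(effective_batch_size))  # 160 345 505\n",
    "```"
   ]
  },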
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_bbb(model):\n",
    "    global args_lr\n",
    "    global args_ess_multiplier\n",
    "    best_val = float(\"Inf\")\n",
    "\n",
    "    for epoch in range(args_epochs):\n",
    "        total_L = 0.0\n",
    "        start_time = time.time()\n",
    "        hidden = model.begin_state(func = mx.nd.zeros, batch_size = args_batch_size, ctx = context)\n",
    "\n",
    "        for ibatch, i in enumerate(range(0, train_data.shape[0] - 1, args_bptt)):\n",
    "            x, y = get_batch(train_data, i)\n",
    "            hidden = detach(hidden)\n",
    "\n",
    "            with autograd.record():\n",
    "                sampled_params = var_posterior.sample_model_params()\n",
    "                model.set_params_to(sampled_params)\n",
    "                yhat, hidden = model(x, hidden)\n",
    "                var_loss, L = bbb_loss(yhat, y, sampled_params)\n",
    "                var_loss.backward()\n",
    "\n",
    "            grads = [var_mu.grad(context) for var_mu in var_posterior.var_mus]\n",
    "            effective_batch_size = (args_bptt * args_batch_size) + (var_posterior.num_params() / num_batches)\n",
    "            gluon.utils.clip_global_norm(grads, args_clip * effective_batch_size)\n",
    "            trainer.step(args_clip * effective_batch_size)\n",
    "            total_L += mx.nd.sum(L).asscalar()\n",
    "\n",
    "            if ibatch % args_log_interval == 0 and ibatch > 0:\n",
    "                cur_L = total_L / args_bptt / args_batch_size / args_log_interval\n",
    "                print('[Epoch %d Batch %d] loss %.2f, perplexity %.2f' % (\n",
    "                    epoch + 1, ibatch, cur_L, math.exp(cur_L)))\n",
    "                total_L = 0.0\n",
    "\n",
    "        model.set_params_to(var_posterior.raw_var_mus)\n",
    "        val_L = evaluate(val_data, model)\n",
    "\n",
    "        print('[Epoch %d] time cost %.2fs, validation loss %.2f, validation perplexity %.2f' % (\n",
    "            epoch + 1, time.time() - start_time, val_L, math.exp(val_L)))\n",
    "\n",
    "        if val_L < best_val:\n",
    "            best_val = val_L\n",
    "            model.set_params_to(var_posterior.raw_var_mus)\n",
    "            test_L = evaluate(test_data, model)\n",
    "            model.save_parameters(args_save)\n",
    "            print('test loss %.2f, test perplexity %.2f' % (test_L, math.exp(test_L)))\n",
    "        else:\n",
    "            args_lr = args_lr * 0.25\n",
    "            trainer._init_optimizer('sgd',\n",
    "                                    {'learning_rate': args_lr,\n",
    "                                     'momentum': 0,\n",
    "                                     'wd': 0})\n",
    "            model.load_parameters(args_save, context)\n",
    "    return"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We are just about ready to go. We can instantiate our model, the scale-mixture prior, and the variational posterior.\n",
    "\n",
    "Some things to note before we pull the trigger are:\n",
    "* Since we are learning the variational parameters, these are what need to be updated by the model trainer.\n",
    "* Since dropout can be interpreted as a different approach to Bayesian learning [\\[Gal and Ghahramani, 2016\\]](http://proceedings.mlr.press/v48/gal16.pdf), we should turn it off for training.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "bbb_model = RNNModel(args_model, ntokens, args_emsize, args_nhid, args_nlayers, dropout=0.0, tie_weights=args_tied)\n",
    "bbb_model.collect_params().initialize(mx.init.Xavier(), ctx=context)\n",
    "\n",
    "prior = ScaleMixturePrior(alpha = 0.75, sigma1 = 0.001, sigma2 = 0.75)\n",
    "\n",
    "var_posterior = VariationalPosterior(bbb_model,\n",
    "                                     var_mu_init_scale = 0.05,\n",
    "                                     var_sigma_init_scale = 0.01)\n",
    "\n",
    "bbb_loss = BBB_Loss(prior,\n",
    "                    var_posterior,\n",
    "                    gluon.loss.SoftmaxCrossEntropyLoss(),\n",
    "                    num_batches)\n",
    "\n",
    "trainer = gluon.Trainer(\n",
    "    var_posterior.var_params, 'sgd',\n",
    "    { 'learning_rate': args_lr, 'momentum': 0, 'wd': 0 })"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, let's start training!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Epoch 1 Batch 500] loss 6.56, perplexity 706.71\n",
      "[Epoch 1 Batch 1000] loss 5.89, perplexity 362.56\n",
      "[Epoch 1 Batch 1500] loss 5.64, perplexity 280.99\n",
      "[Epoch 1 Batch 2000] loss 5.58, perplexity 264.55\n",
      "[Epoch 1 Batch 2500] loss 5.45, perplexity 233.16\n",
      "[Epoch 1 Batch 3000] loss 5.33, perplexity 206.05\n",
      "[Epoch 1 Batch 3500] loss 5.33, perplexity 207.40\n",
      "[Epoch 1 Batch 4000] loss 5.21, perplexity 182.22\n",
      "[Epoch 1 Batch 4500] loss 5.18, perplexity 177.47\n",
      "[Epoch 1 Batch 5000] loss 5.19, perplexity 179.08\n",
      "[Epoch 1 Batch 5500] loss 5.20, perplexity 182.08\n",
      "[Epoch 1] time cost 278.34s, validation loss 5.24, validation perplexity 189.40\n",
      "test loss 5.21, test perplexity 183.07\n",
      "[Epoch 2 Batch 500] loss 5.19, perplexity 178.91\n",
      "[Epoch 2 Batch 1000] loss 5.10, perplexity 163.53\n",
      "[Epoch 2 Batch 1500] loss 5.05, perplexity 156.11\n",
      "[Epoch 2 Batch 2000] loss 5.10, perplexity 164.57\n",
      "[Epoch 2 Batch 2500] loss 5.07, perplexity 158.48\n",
      "[Epoch 2 Batch 3000] loss 4.98, perplexity 144.95\n",
      "[Epoch 2 Batch 3500] loss 5.01, perplexity 150.35\n",
      "[Epoch 2 Batch 4000] loss 4.92, perplexity 137.12\n",
      "[Epoch 2 Batch 4500] loss 4.90, perplexity 134.37\n",
      "[Epoch 2 Batch 5000] loss 4.95, perplexity 140.53\n",
      "[Epoch 2 Batch 5500] loss 4.99, perplexity 147.09\n",
      "[Epoch 2] time cost 281.63s, validation loss 5.12, validation perplexity 167.05\n",
      "test loss 5.08, test perplexity 160.53\n",
      "Best test loss 5.08, test perplexity 160.53\n"
     ]
    }
   ],
   "source": [
    "train_bbb(bbb_model)\n",
    "bbb_model.load_parameters(args_save, context)\n",
    "bbb_model.set_params_to(var_posterior.raw_var_mus)\n",
    "test_L = evaluate(test_data, bbb_model)\n",
    "print('Best test loss %.2f, test perplexity %.2f'%(test_L, math.exp(test_L)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Okay, not bad. We do about as well as dropout. But with BBB, we also have the ability to assess the certainty of our model parameters. This is useful in pruning weights from a model, for example, or in applications where the model needs to interact with its environment. In such cases, having a model that \"knows what it knows\" is quite useful."
   ]
  },
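  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of how parameter uncertainty can drive pruning, a common heuristic ranks weights by their signal-to-noise ratio $|\\mu| / \\sigma$ and zeroes out the most uncertain ones. The snippet below is a standalone NumPy illustration with made-up values, not code from the model above:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "mu = rng.normal(0.0, 0.1, size=1000)                     # stand-ins for the variational means\n",
    "sigma = np.abs(rng.normal(0.0, 0.05, size=1000)) + 1e-6  # ...and standard deviations\n",
    "\n",
    "snr = np.abs(mu) / sigma                  # low SNR = uncertain weight\n",
    "keep = snr > np.percentile(snr, 75)       # keep only the top 25% of weights\n",
    "pruned_mu = np.where(keep, mu, 0.0)       # zero out the rest\n",
    "```"
   ]
  },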
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "We have implemented Bayes-for-backprop for recurrent neural networks as described in [''Bayesian Recurrent Neural Networks'' by Fortunato et al.](https://arxiv.org/pdf/1704.02798.pdf), and rerun the authors' experiments on the Penn Treebank data. The comparable results shows Bayes-by-backprop's applicability to problems more sophisticated than classification and regression."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For whinges or inquiries, [open an issue on  GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
