{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# A Full Recipe for Recurrent LSTM Model\n",
    "\n",
    "## Introduction\n",
    "\n",
    "In this demo, instead of providing a black box of all high level functions, you will build a LSTM Model from scratch. You will learn how to use MXNet low-level API to build the LSTM unit, then build a fixed length recurrent network, and furthermore a recurrent network which support variable length. After that we hope you will know more detail of recurrent LSTM model and how it works, then we hope to can use it in finding language embedding feature or session embedding on your sequential data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from collections import namedtuple\n",
    "import time\n",
    "import math\n",
    "\n",
    "import mxnet as mx\n",
    "import numpy as np\n",
    "\n",
    "# from pprint import pprint as print"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Symbolic Model\n",
    "\n",
    "The code below shows how to build a basic basic LSTM network. For variation of LSTMs, we can modify the symbol code to build the new one.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"https://devblogs.nvidia.com/parallelforall/wp-content/uploads/2016/04/image08-e1459972481323-624x326.png\" >\n",
    "\n",
    "Image Credit: CuDNN\n",
    "\n",
    "We can use MXNet symbolic API to assemble such a complex LSTM unit. The input to the LSTM unit are data, previous cell and hidden. Then with \"input to hidden\" and \"hidden to hidden\" transform, we will get gates. After we have gates, we will do logic transform for input, output, input_transform and forget. After transformation, we will get output hidden and cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "LSTMState = namedtuple(\"LSTMState\", [\"c\", \"h\"])\n",
    "LSTMParam = namedtuple(\"LSTMParam\", [\"i2h_weight\", \"i2h_bias\",\n",
    "                                     \"h2h_weight\", \"h2h_bias\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "The following code is a basic LSTM Unit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def lstm(num_hidden, indata, prev_state, param, seqidx, layeridx):\n",
    "    \"\"\"LSTM Unit symbol\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    num_hidden: int\n",
    "        Hidden node in the LSTM unit\n",
    "    \n",
    "    in_data: mx.symbol\n",
    "        Input data symbol\n",
    "    \n",
    "    prev_state: LSTMState\n",
    "        Cell and hidden from previous LSTM unit\n",
    "        \n",
    "    param: LSTMParam\n",
    "        Parameters of LSTM network\n",
    "        \n",
    "    seqidx: int\n",
    "        The horizental index of the LSTM unit in the recurrent network\n",
    "        \n",
    "    layeridx: int\n",
    "        The vertical index of the LSTM unit in the recurrent network\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    ret: LSTMState\n",
    "        Current LSTM unit state\n",
    "    \"\"\"\n",
    "    i2h = mx.sym.FullyConnected(data=indata,\n",
    "                                weight=param.i2h_weight,\n",
    "                                bias=param.i2h_bias,\n",
    "                                num_hidden=num_hidden * 4,\n",
    "                                name=\"t%d_l%d_i2h\" % (seqidx, layeridx))\n",
    "    h2h = mx.sym.FullyConnected(data=prev_state.h,\n",
    "                                weight=param.h2h_weight,\n",
    "                                bias=param.h2h_bias,\n",
    "                                num_hidden=num_hidden * 4,\n",
    "                                name=\"t%d_l%d_h2h\" % (seqidx, layeridx))\n",
    "    gates = i2h + h2h\n",
    "    slice_gates = mx.sym.SliceChannel(gates, num_outputs=4,\n",
    "                                      name=\"t%d_l%d_slice\" % (seqidx, layeridx))\n",
    "    in_gate = mx.sym.Activation(slice_gates[0], act_type=\"sigmoid\")\n",
    "    in_transform = mx.sym.Activation(slice_gates[1], act_type=\"tanh\")\n",
    "    forget_gate = mx.sym.Activation(slice_gates[2], act_type=\"sigmoid\")\n",
    "    out_gate = mx.sym.Activation(slice_gates[3], act_type=\"sigmoid\")\n",
    "    next_c = (forget_gate * prev_state.c) + (in_gate * in_transform)\n",
    "    next_h = out_gate * mx.sym.Activation(next_c, act_type=\"tanh\")\n",
    "    return LSTMState(c=next_c, h=next_h)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The implementation of a single unit is straightforward. The hardest part is how to represent the “recurrence” in recurrent neural network\n",
    "\n",
    "<img src=\"https://raw.githubusercontent.com/antinucleon/web-data/master/mxnet/notebook/recurrent.jpg\">\n",
    "\n",
    "The figure above shows why it is called \"recurrent\" network. The <font color='red'>red circle </font> and <font color='blue'>blue circle </font> is the recurrent in the LSTM network. We can do inference from LSTM by using the LSTM symbol we write above with imperative copy code in MXNet. However for training recurrent network, we need to store the intermediate step gradient for back-propagation. Directly use imperative with a single unit is hard, so we decide to \"unroll\" a recurrent network. For example, for a sequence with length of 3, we may unroll the recurrent LSTM to the following multi-input & multi-output feedforward network but share all parameters between LSTM unit.\n",
    "\n",
    "<img src=\"https://raw.githubusercontent.com/antinucleon/web-data/master/mxnet/notebook/unroll.jpg\">\n",
    "\n",
    "As we can see, after unrolling, the DAG doesn’t have circle any more. So we can reuse the feedforward module to train a recurrent neural network.\n",
    "\n",
    "Stacking LSTM means we use previous LSTM's output ```h``` as the input for the next layer. Also to regularize LSTM, we usually add dropout for the output ```h```. Here is a modified LSTM unit, with Dropout support."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def lstm(num_hidden, indata, prev_state, param, seqidx, layeridx, dropout=0.):\n",
    "    \"\"\"LSTM Unit symbol\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    num_hidden: int\n",
    "        Hidden node in the LSTM unit\n",
    "    \n",
    "    in_data: mx.symbol\n",
    "        Input data symbol\n",
    "    \n",
    "    prev_state: LSTMState\n",
    "        Cell and hidden from previous LSTM unit\n",
    "        \n",
    "    param: LSTMParam\n",
    "        Parameters of LSTM network\n",
    "        \n",
    "    seqidx: int\n",
    "        The horizental index of the LSTM unit in the recurrent network\n",
    "        \n",
    "    layeridx: int\n",
    "        The vertical index of the LSTM unit in the recurrent network\n",
    "        \n",
    "    dropout: float, optional in range (0, 1)\n",
    "        Dropout rate on the hidden unit\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    ret: LSTMState\n",
    "        Current LSTM unit state\n",
    "    \"\"\"\n",
    "    i2h = mx.sym.FullyConnected(data=indata,\n",
    "                                weight=param.i2h_weight,\n",
    "                                bias=param.i2h_bias,\n",
    "                                num_hidden=num_hidden * 4,\n",
    "                                name=\"t%d_l%d_i2h\" % (seqidx, layeridx))\n",
    "    h2h = mx.sym.FullyConnected(data=prev_state.h,\n",
    "                                weight=param.h2h_weight,\n",
    "                                bias=param.h2h_bias,\n",
    "                                num_hidden=num_hidden * 4,\n",
    "                                name=\"t%d_l%d_h2h\" % (seqidx, layeridx))\n",
    "    gates = i2h + h2h\n",
    "    slice_gates = mx.sym.SliceChannel(gates, num_outputs=4,\n",
    "                                      name=\"t%d_l%d_slice\" % (seqidx, layeridx))\n",
    "    in_gate = mx.sym.Activation(slice_gates[0], act_type=\"sigmoid\")\n",
    "    in_transform = mx.sym.Activation(slice_gates[1], act_type=\"tanh\")\n",
    "    forget_gate = mx.sym.Activation(slice_gates[2], act_type=\"sigmoid\")\n",
    "    out_gate = mx.sym.Activation(slice_gates[3], act_type=\"sigmoid\")\n",
    "    next_c = (forget_gate * prev_state.c) + (in_gate * in_transform)\n",
    "    next_h = out_gate * mx.sym.Activation(next_c, act_type=\"tanh\")\n",
    "    # dropout the hidden h\n",
    "    next_h = mx.sym.Dropout(next_h, p=dropout)\n",
    "    return LSTMState(c=next_c, h=next_h)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unroll bring the benefit to compute gradient, but it has a big problem. In real world, sequences are in variable length. How can we fix it? Also we'd like batch training neural network, how can we build batches with variable length of sequences? The answer is: Padding and Bucketing.\n",
    "\n",
    "The idea comes from hash table. Unlike hash table, our goal is to make as many as collisions with fewest padding. The following figure shows how to do bucketing and padding with a simple example.\n",
    "\n",
    "<img src=\"https://raw.githubusercontent.com/antinucleon/web-data/master/mxnet/notebook/bucket.jpg\">\n",
    "\n",
    "Before we move on to next step, we will discuss about how to make a distinct word as input of neural network. The technique we use is called ```embedding```. Generally, an ```embedding``` is a large lookup table in matrix format. We encode input char/word in one-hot representation, and \"take\" a special column of the matrix as the embedding vector of the one-hot input. Then we can use the vector as input to the neural network. During training, the embedding matrix will be updated together with the other part of the network.\n",
    "\n",
    "<img src=\"https://raw.githubusercontent.com/antinucleon/web-data/master/mxnet/notebook/embed.jpg\">\n",
    "\n",
    "After bucketing and padding on data, we know how many scenarios we need to unroll the recurrent network into feedforward network. Before we move to the bucketing execution parts, we need think about how to deal with padding.\n",
    "\n",
    "For padding, we don't want the padding has supervision signal, also we don't want padding to make change of previous input hidden and cell. So we use a mask to filter out the padding instance in a batch.\n",
    "\n",
    "For example, we define label 0 to PAD, and we have a map between word to number, which is:\n",
    "```\n",
    "PAD -> 0\n",
    "the -> 1\n",
    "a -> 2\n",
    "big -> 3\n",
    "gpu -> 4\n",
    "```\n",
    "\n",
    "For input batch ```[the, PAD, a, PAD, big]```, it will be converted to ```[1, 0, 2, 0, 3]```. Together with this input, we can pass a mask as input to indicate which instance in this batch should be ignored (set to 0 for output), so the mask will be like ```[1,0,1,0,1]```. Also we need to set label 0 as ignored label in ```SoftmaxOutput``` so that the padding will not provide supervision signal.\n",
    "\n",
    "\n",
    "Put all together, our final LSTM unit is like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def lstm(num_hidden, indata, mask, prev_state, param, seqidx, layeridx, dropout=0.):\n",
    "    \"\"\"LSTM Unit symbol\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    num_hidden: int\n",
    "        Hidden node in the LSTM unit\n",
    "    \n",
    "    in_data: mx.symbol\n",
    "        Input data symbol\n",
    "    \n",
    "    prev_state: LSTMState\n",
    "        Cell and hidden from previous LSTM unit\n",
    "        \n",
    "    param: LSTMParam\n",
    "        Parameters of LSTM network\n",
    "        \n",
    "    seqidx: int\n",
    "        The horizental index of the LSTM unit in the recurrent network\n",
    "        \n",
    "    layeridx: int\n",
    "        The vertical index of the LSTM unit in the recurrent network\n",
    "        \n",
    "    dropout: float, optional in range (0, 1)\n",
    "        Dropout rate on the hidden unit\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    ret: LSTMState\n",
    "        Current LSTM unit state\n",
    "    \"\"\"\n",
    "    i2h = mx.sym.FullyConnected(data=indata,\n",
    "                                weight=param.i2h_weight,\n",
    "                                bias=param.i2h_bias,\n",
    "                                num_hidden=num_hidden * 4,\n",
    "                                name=\"t%d_l%d_i2h\" % (seqidx, layeridx))\n",
    "    h2h = mx.sym.FullyConnected(data=prev_state.h,\n",
    "                                weight=param.h2h_weight,\n",
    "                                bias=param.h2h_bias,\n",
    "                                num_hidden=num_hidden * 4,\n",
    "                                name=\"t%d_l%d_h2h\" % (seqidx, layeridx))\n",
    "    gates = i2h + h2h\n",
    "    slice_gates = mx.sym.SliceChannel(gates, num_outputs=4,\n",
    "                                      name=\"t%d_l%d_slice\" % (seqidx, layeridx))\n",
    "    in_gate = mx.sym.Activation(slice_gates[0], act_type=\"sigmoid\")\n",
    "    in_transform = mx.sym.Activation(slice_gates[1], act_type=\"tanh\")\n",
    "    forget_gate = mx.sym.Activation(slice_gates[2], act_type=\"sigmoid\")\n",
    "    out_gate = mx.sym.Activation(slice_gates[3], act_type=\"sigmoid\")\n",
    "    next_c = (forget_gate * prev_state.c) + (in_gate * in_transform)\n",
    "    next_h = out_gate * mx.sym.Activation(next_c, act_type=\"tanh\")\n",
    "    # dropout the hidden h\n",
    "    if dropout > 0.:\n",
    "        next_h = mx.sym.Dropout(next_h, p=dropout)\n",
    "    # mask out the output\n",
    "    next_c = mx.sym.element_mask(next_c, mask, name=\"t%d_l%d_c\" % (seqidx, layeridx))\n",
    "    next_h = mx.sym.element_mask(next_h, mask, name=\"t%d_l%d_h\" % (seqidx, layeridx))\n",
    "    return LSTMState(c=next_c, h=next_h)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next step is provide an unrolled symbol function. The purpose of this function is for each bucket in the data, we can call this function to get a unrolled network symbol."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def lstm_unroll(num_lstm_layer, seq_len, input_size,\n",
    "                num_hidden, num_embed, num_label, ignore_label=0, dropout=0.):\n",
    "    \"\"\"\n",
    "    The unrolling function to provide a multi-layer LSTM network for a specify sequence length\n",
    "    Parameters\n",
    "    ----------\n",
    "    num_lstm_layer: int\n",
    "        number of lstm layers we will stack\n",
    "    seq_len: int\n",
    "        length of RNN we want to unroll\n",
    "    input_size: int\n",
    "        the input vocabulary size\n",
    "    num_hidden: int\n",
    "        number of hidden unit in a LSTM unit\n",
    "    num_embed: int\n",
    "        dimention of word embedding vector \n",
    "    num_label: int\n",
    "        target output label number \n",
    "    ignore_label: int, optional\n",
    "        which label should not be used for calculating loss\n",
    "    dropout: float, optional\n",
    "        dropout rate in LSTM\n",
    "        \n",
    "    Returns\n",
    "    -------\n",
    "    sm: mx.symbol\n",
    "        An unrolled LSTM network\n",
    "    \"\"\"\n",
    "    \n",
    "    # For weight we will share over whole network, we use ```mx.sym.Variable``` to represent it\n",
    "    embed_weight = mx.sym.Variable(\"embed_weight\") # embedding lookup table\n",
    "    cls_weight = mx.sym.Variable(\"cls_weight\") # classifier weight\n",
    "    cls_bias = mx.sym.Variable(\"cls_bias\") # classifier bias\n",
    "    # Vertical initalization states and weights for LSTM unit\n",
    "    param_cells = []\n",
    "    last_states = []\n",
    "    for i in range(num_lstm_layer):\n",
    "        param_cells.append(LSTMParam(i2h_weight=mx.sym.Variable(\"l%d_i2h_weight\" % i),\n",
    "                                     i2h_bias=mx.sym.Variable(\"l%d_i2h_bias\" % i),\n",
    "                                     h2h_weight=mx.sym.Variable(\"l%d_h2h_weight\" % i),\n",
    "                                     h2h_bias=mx.sym.Variable(\"l%d_h2h_bias\" % i)))\n",
    "        state = LSTMState(c=mx.sym.Variable(\"l%d_init_c\" % i),\n",
    "                          h=mx.sym.Variable(\"l%d_init_h\" % i))\n",
    "        last_states.append(state)\n",
    "    assert(len(last_states) == num_lstm_layer)\n",
    "\n",
    "    # Input data\n",
    "    data = mx.sym.Variable('data') # input data, shape (batch, seq_length)\n",
    "    mask = mx.sym.Variable('mask') # input mask, shape (batch, seq_length)\n",
    "    label = mx.sym.Variable('softmax_label') # labels, shape (batch, seq_length)\n",
    "    # Embedding calculation\n",
    "    # We take the input and get all the embedding once\n",
    "    # Which means the output will be in shape (batch, seq_length, output_embedding_dim)\n",
    "    # Then we slice it will ```seq_len``` output\n",
    "    # Which means seq_len output symbol, each's output shape is (batch, output_embedding_dim)\n",
    "    embed = mx.sym.Embedding(data=data, input_dim=input_size,\n",
    "                             weight=embed_weight, output_dim=num_embed, name='embed')\n",
    "    wordvec = mx.sym.SliceChannel(data=embed, num_outputs=seq_len, squeeze_axis=1)\n",
    "    maskvec = mx.sym.SliceChannel(data=mask, num_outputs=seq_len, squeeze_axis=1)\n",
    "\n",
    "    # Now we can unroll the network\n",
    "    hidden_all = []\n",
    "    for seqidx in range(seq_len):\n",
    "        hidden = wordvec[seqidx] # input to LSTM cell, comes from embedding\n",
    "\n",
    "        # stack LSTM\n",
    "        for i in range(num_lstm_layer):\n",
    "            next_state = lstm(num_hidden, indata=hidden,\n",
    "                              mask=maskvec[seqidx],\n",
    "                              prev_state=last_states[i],\n",
    "                              param=param_cells[i],\n",
    "                              seqidx=seqidx, layeridx=i, dropout=dropout)\n",
    "            hidden = next_state.h\n",
    "            last_states[i] = next_state\n",
    "        # decoder\n",
    "        hidden_all.append(hidden) # last output of stack LSTM units\n",
    "\n",
    "    hidden_concat = mx.sym.Concat(*hidden_all, dim=0)\n",
    "    # If we want to have attention, add it here.\n",
    "    pred = mx.sym.FullyConnected(data=hidden_concat, num_hidden=num_label,\n",
    "                                 weight=cls_weight, bias=cls_bias, name='pred')\n",
    "\n",
    "\n",
    "    label = mx.sym.transpose(data=label)\n",
    "    label = mx.sym.Reshape(data=label, target_shape=(0,))\n",
    "\n",
    "    sm = mx.sym.SoftmaxOutput(data=pred, label=label, ignore_label=ignore_label, name='softmax')\n",
    "\n",
    "    return sm\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try our unroll function to get a 2 layer LSTM network with sequence length of 3 for char-RNN (char has 128 possible value). Our network will first map each char to a vector of 256 dimention; Each LSTM layer has 384 hidden unit, and we ignore label 0 as padding input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "batch_size = 32\n",
    "seq_len = 3\n",
    "num_lstm_layer = 2\n",
    "vocab_size = 128\n",
    "num_embed = 256\n",
    "num_hidden = 384\n",
    "\n",
    "sym = lstm_unroll(num_lstm_layer=num_lstm_layer,\n",
    "                  seq_len=seq_len,\n",
    "                  input_size=vocab_size,\n",
    "                  num_hidden=num_hidden,\n",
    "                  num_embed=num_embed,\n",
    "                  num_label=vocab_size,\n",
    "                  ignore_label=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Let's see the arguments and output of the network\n",
    "\n",
    "# intput shapes\n",
    "init_c = [('l%d_init_c'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]\n",
    "init_h = [('l%d_init_h'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]\n",
    "init_states = init_c + init_h\n",
    "\n",
    "data_shape = dict([('data', (batch_size, seq_len)),\n",
    "                   ('mask', (batch_size, seq_len)),\n",
    "                   ('softmax_label', (batch_size, seq_len))] + init_states)\n",
    "\n",
    "arg_names = sym.list_arguments()\n",
    "out_names = sym.list_outputs()\n",
    "arg_shape, out_shape, aux_shape = sym.infer_shape(**data_shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('data', (32L, 3L)), ('embed_weight', (128L, 256L)), ('l0_i2h_weight', (1536L, 256L)), ('l0_i2h_bias', (1536L,)), ('l0_init_h', (32L, 384L)), ('l0_h2h_weight', (1536L, 384L)), ('l0_h2h_bias', (1536L,)), ('l0_init_c', (32L, 384L)), ('mask', (32L, 3L)), ('l1_i2h_weight', (1536L, 384L)), ('l1_i2h_bias', (1536L,)), ('l1_init_h', (32L, 384L)), ('l1_h2h_weight', (1536L, 384L)), ('l1_h2h_bias', (1536L,)), ('l1_init_c', (32L, 384L)), ('cls_weight', (128L, 384L)), ('cls_bias', (128L,)), ('softmax_label', (32L, 3L))]\n"
     ]
    }
   ],
   "source": [
    "# the argument of the unrolled network\n",
    "print(list(zip(arg_names, arg_shape)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('softmax_output', (96L, 128L))]\n"
     ]
    }
   ],
   "source": [
    "# the output of the unrolled network\n",
    "print(list(zip(out_names, out_shape)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For sequence to sequence learning, we may need the last hidden states to initalize the next sequence. So we can modify our unroll function to do this"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def lstm_unroll_with_state(num_lstm_layer, seq_len, input_size,\n",
    "                           num_hidden, num_embed, num_label, ignore_label=0, dropout=0.):\n",
    "    \"\"\"\n",
    "    The unrolling function to provide a multi-layer LSTM network for a specify sequence length\n",
    "    Parameters\n",
    "    ----------\n",
    "    num_lstm_layer: int\n",
    "        number of lstm layers we will stack\n",
    "    seq_len: int\n",
    "        length of RNN we want to unroll\n",
    "    input_size: int\n",
    "        the input vocabulary size\n",
    "    num_hidden: int\n",
    "        number of hidden unit in a LSTM unit\n",
    "    num_embed: int\n",
    "        dimention of word embedding vector \n",
    "    num_label: int\n",
    "        target output label number \n",
    "    ignore_label: int, optional\n",
    "        which label should not be used for calculating loss\n",
    "    dropout: float, optional\n",
    "        dropout rate in LSTM\n",
    "        \n",
    "    Returns\n",
    "    -------\n",
    "    sm: mx.symbol\n",
    "        An unrolled LSTM network\n",
    "    \"\"\"\n",
    "    \n",
    "    # For weight we will share over whole network, we use ```mx.sym.Variable``` to represent it\n",
    "    embed_weight = mx.sym.Variable(\"embed_weight\") # embedding lookup table\n",
    "    cls_weight = mx.sym.Variable(\"cls_weight\") # classifier weight\n",
    "    cls_bias = mx.sym.Variable(\"cls_bias\") # classifier bias\n",
    "    # Vertical initalization states and weights for LSTM unit\n",
    "    param_cells = []\n",
    "    last_states = []\n",
    "    for i in range(num_lstm_layer):\n",
    "        param_cells.append(LSTMParam(i2h_weight=mx.sym.Variable(\"l%d_i2h_weight\" % i),\n",
    "                                     i2h_bias=mx.sym.Variable(\"l%d_i2h_bias\" % i),\n",
    "                                     h2h_weight=mx.sym.Variable(\"l%d_h2h_weight\" % i),\n",
    "                                     h2h_bias=mx.sym.Variable(\"l%d_h2h_bias\" % i)))\n",
    "        state = LSTMState(c=mx.sym.Variable(\"l%d_init_c\" % i),\n",
    "                          h=mx.sym.Variable(\"l%d_init_h\" % i))\n",
    "        last_states.append(state)\n",
    "    assert(len(last_states) == num_lstm_layer)\n",
    "\n",
    "    # Input data\n",
    "    data = mx.sym.Variable('data') # input data, shape (batch, seq_length)\n",
    "    mask = mx.sym.Variable('mask') # input mask, shape (batch, seq_length)\n",
    "    label = mx.sym.Variable('softmax_label') # labels, shape (batch, seq_length)\n",
    "    # Embedding calculation\n",
    "    # We take the input and get all the embedding once\n",
    "    # Which means the output will be in shape (batch, seq_length, output_embedding_dim)\n",
    "    # Then we slice it will ```seq_len``` output\n",
    "    # Which means seq_len output symbol, each's output shape is (batch, output_embedding_dim)\n",
    "    embed = mx.sym.Embedding(data=data, input_dim=input_size,\n",
    "                             weight=embed_weight, output_dim=num_embed, name='embed')\n",
    "    wordvec = mx.sym.SliceChannel(data=embed, num_outputs=seq_len, squeeze_axis=1)\n",
    "    maskvec = mx.sym.SliceChannel(data=mask, num_outputs=seq_len, squeeze_axis=1)\n",
    "\n",
    "    # Now we can unroll the network\n",
    "    hidden_all = []\n",
    "    for seqidx in range(seq_len):\n",
    "        hidden = wordvec[seqidx] # input to LSTM cell, comes from embedding\n",
    "\n",
    "        # stack LSTM\n",
    "        for i in range(num_lstm_layer):\n",
    "            next_state = lstm(num_hidden, indata=hidden,\n",
    "                              mask=maskvec[seqidx],\n",
    "                              prev_state=last_states[i],\n",
    "                              param=param_cells[i],\n",
    "                              seqidx=seqidx, layeridx=i, dropout=dropout)\n",
    "            hidden = next_state.h\n",
    "            last_states[i] = next_state\n",
    "        # decoder\n",
    "        hidden_all.append(hidden) # last output of stack LSTM units\n",
    "\n",
    "    hidden_concat = mx.sym.Concat(*hidden_all, dim=0)\n",
    "    # If we want to have attention, add it here.\n",
    "    pred = mx.sym.FullyConnected(data=hidden_concat, num_hidden=num_label,\n",
    "                                 weight=cls_weight, bias=cls_bias, name='pred')\n",
    "\n",
    "\n",
    "    label = mx.sym.transpose(data=label)\n",
    "    label = mx.sym.Reshape(data=label, target_shape=(0,))\n",
    "\n",
    "    sm = mx.sym.SoftmaxOutput(data=pred, label=label, ignore_label=ignore_label, name='softmax')\n",
    "\n",
    "    outputs = [sm]\n",
    "    # In the input we use init_c + init_h, so we will keep output in same convention\n",
    "    for i in range(num_lstm_layer):\n",
    "        state = last_states[i]\n",
    "        outputs.append(mx.sym.BlockGrad(state.c, name=\"layer_%d_c\" % i)) # stop back prop for last state\n",
    "    for i in range(num_lstm_layer):\n",
    "        state = last_states[i]\n",
    "        outputs.append(mx.sym.BlockGrad(state.h, name=\"layer_%d_h\" % i)) # stop back prop for last state\n",
    "    return mx.sym.Group(outputs)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Let's test our new symbol\n",
    "sym = lstm_unroll_with_state(num_lstm_layer=num_lstm_layer,\n",
    "                             seq_len=seq_len,\n",
    "                             input_size=vocab_size,\n",
    "                             num_hidden=num_hidden,\n",
    "                             num_embed=num_embed,\n",
    "                             num_label=vocab_size,\n",
    "                             ignore_label=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# intput shapes\n",
    "init_c = [('l%d_init_c'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]\n",
    "init_h = [('l%d_init_h'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]\n",
    "init_states = init_c + init_h\n",
    "\n",
    "data_shape = dict([('data', (batch_size, seq_len)),\n",
    "                   ('mask', (batch_size, seq_len)),\n",
    "                   ('softmax_label', (batch_size, seq_len))] + init_states)\n",
    "\n",
    "arg_names = sym.list_arguments()\n",
    "out_names = sym.list_outputs()\n",
    "arg_shape, out_shape, aux_shape = sym.infer_shape(**data_shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('data', (32L, 3L)), ('embed_weight', (128L, 256L)), ('l0_i2h_weight', (1536L, 256L)), ('l0_i2h_bias', (1536L,)), ('l0_init_h', (32L, 384L)), ('l0_h2h_weight', (1536L, 384L)), ('l0_h2h_bias', (1536L,)), ('l0_init_c', (32L, 384L)), ('mask', (32L, 3L)), ('l1_i2h_weight', (1536L, 384L)), ('l1_i2h_bias', (1536L,)), ('l1_init_h', (32L, 384L)), ('l1_h2h_weight', (1536L, 384L)), ('l1_h2h_bias', (1536L,)), ('l1_init_c', (32L, 384L)), ('cls_weight', (128L, 384L)), ('cls_bias', (128L,)), ('softmax_label', (32L, 3L))]\n"
     ]
    }
   ],
   "source": [
    "# the argument of the unrolled network\n",
    "print(list(zip(arg_names, arg_shape)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('softmax_output', (96L, 128L)), ('layer_0_c_output', (32L, 384L)), ('layer_1_c_output', (32L, 384L)), ('layer_0_h_output', (32L, 384L)), ('layer_1_h_output', (32L, 384L))]\n"
     ]
    }
   ],
   "source": [
    "# the output of the unrolled network\n",
    "print(list(zip(out_names, out_shape)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Different to all other framework, In MXNet, these states can be copyed to the decoder network in async way during sequence to sequence learning. Implmentation can be done by using custom training loop of bucketing modules. We left this as homework this time."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bucketing Execution \n",
    "\n",
    "Unlike other computation graph toolkit, MXNet doesn't require complex control flow in the graph. Instead, MXNet is able to utilize host language features. For example, implement bucketing execution in MXNet doesn't require hundreds or thousands lines of code, a quick naive prototype is like:\n",
    "\n",
    "```\n",
    "# lstm hyper-param\n",
    "num_lstm_layer = 2\n",
    "input_size = 128\n",
    "num_hidden = 256\n",
    "num_embed = 256\n",
    "num_label = 128\n",
    "ignore_label = 0\n",
    "# bucket param\n",
    "batch_size = 16\n",
    "bucket_candidate = [3, 5, 11, 25]\n",
    "# model param\n",
    "args_params = [...] # initialized args ndarrays\n",
    "grad_params = [...] # initialized grad ndarrays\n",
    "exec_bucket = {}\n",
    "for key in bucket_candidate:\n",
    "    exec_bucket[key] = lstm_unroll(num_lstm_layer, key,\n",
    "                                   input_size, num_hidden, num_embed, num_label,\n",
    "                                   ignore_label).bind(...) # data, mask shape and params\n",
    "                                   \n",
    "```\n",
     "At run time, we select the correct executor according to the given sequence length.\n",
    "\n",
     "In MXNet, there is a higher-level API, ```mx.mod.BucketingModule```. The idea is similar to the code above, but it is much easier to use. The implementation can be found at https://github.com/dmlc/mxnet/blob/master/python/mxnet/module/bucketing_module.py#L16. Below we will use ```mx.mod.BucketingModule``` for the demo.\n",
     "\n",
     "To use ```BucketingModule```, we need an unrolling function that builds the symbol for a given sequence length, a default bucket key, and a running context (CPU, GPU, or multi-GPU). Here is an example of an unrolling function."
   ]
  },
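  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At run time the matching executor is picked from the bucket dictionary. Here is a minimal sketch of that selection, assuming the ```exec_bucket``` dictionary and ```bucket_candidate``` list from the naive prototype above (the helper name ```select_bucket``` is ours, not part of MXNet):\n",
    "\n",
    "```python\n",
    "def select_bucket(seq_len, bucket_candidate):\n",
    "    # pick the smallest bucket that can hold the sequence\n",
    "    for key in sorted(bucket_candidate):\n",
    "        if key >= seq_len:\n",
    "            return key\n",
    "    return None  # longer than the largest bucket: discard or truncate\n",
    "\n",
    "# executor = exec_bucket[select_bucket(len(sequence), bucket_candidate)]\n",
    "```"
   ]
  },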
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# params\n",
    "num_lstm_layer = 2\n",
    "num_hidden = 256\n",
    "num_embed = 128\n",
    "batch_size = 64\n",
    "\n",
    "# state shape\n",
    "init_c = [('l%d_init_c'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]\n",
    "init_h = [('l%d_init_h'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)]\n",
    "init_states = init_c + init_h\n",
    "state_names = [x[0] for x in init_states]\n",
    "\n",
    "# symbolic generate function\n",
    "def sym_gen(seq_len):\n",
    "    sym = lstm_unroll(num_lstm_layer, seq_len, len(vocab),\n",
    "                      num_hidden=num_hidden, num_embed=num_embed,\n",
    "                      num_label=len(vocab))\n",
    "    data_names = ['data', 'mask'] + state_names\n",
    "    label_names = ['softmax_label']\n",
    "    return (sym, data_names, label_names)\n",
    "\n",
    "# bucketing execution module\n",
     "# default_bucket_key must be a single bucket key (typically the largest), not a list\n",
     "mod = mx.mod.BucketingModule(sym_gen, default_bucket_key=50, context=mx.cpu())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## IO for bucketing LSTM"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
     "The final step is building a bucket iterator. Here we use text as an example. The text is formatted with one sentence per line, and tokens are separated by exactly one space, e.g.:\n",
    "\n",
    "```\n",
    "wait for it , gabe says .\n",
    "he is leaning back on his elbows .\n",
    "i try to push away the discordant stimuli , but this only increases my awareness of gabes energy next to me .\n",
    "i could reach over , snatch away that vibrant blue .\n",
    "instead , i study his hat .\n",
    "the thing might have been white some decades ago .\n",
    "the rim is frayed .\n",
    "the symbol on the crest is worn away , almost unintelligible .\n",
    "it looks like a salmon-colored s with a green triangle over it .\n",
    "the boy i was with last night , i say and cant finish .\n",
    "```\n",
    "\n",
     "The iterator will pad the input and generate the mask and label.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import collections\n",
    "# simple batch is used for module to get data name, label name, data, label and bucket key\n",
    "class SimpleBatch(object):\n",
    "    def __init__(self, data_names, data, label_names, label, bucket_key):\n",
    "        self.data = data\n",
    "        self.label = label\n",
    "        self.data_names = data_names\n",
    "        self.label_names = label_names\n",
    "        self.bucket_key = bucket_key\n",
    "\n",
    "    @property\n",
    "    def provide_data(self):\n",
    "        return [(n, x.shape) for n, x in zip(self.data_names, self.data)]\n",
    "\n",
    "    @property\n",
    "    def provide_label(self):\n",
    "        return [(n, x.shape) for n, x in zip(self.label_names, self.label)]\n",
    "\n",
    "\n",
    "class BucketSentenceIter(mx.io.DataIter):\n",
    "    def __init__(self, path, buckets, vocab_size, batch_size, init_states):\n",
    "        super(BucketSentenceIter, self).__init__()\n",
    "        self.path = path\n",
    "        self.buckets = sorted(buckets)\n",
    "        self.vocab_size = vocab_size\n",
    "        self.batch_size = batch_size\n",
    "        # init\n",
    "        self.data_name = ['data', 'mask']\n",
    "        self.label_name = 'softmax_label'\n",
    "        self._preprocess()\n",
    "        self._build_vocab()\n",
    "        sentences = self.content.split('<eos>')\n",
    "        self.data = [[] for _ in self.buckets]\n",
    "        self.mask = [[] for _ in self.buckets]\n",
    "        # pre-allocate with the largest bucket for better memory sharing\n",
    "        self.default_bucket_key = max(buckets)\n",
    "\n",
    "        discard_cnt = 0\n",
    "\n",
    "        for sentence in sentences:\n",
     "            sentence = self._text2id(sentence)\n",
    "            bkt_idx = self._find_bucket(len(sentence))\n",
    "            if bkt_idx == -1:\n",
    "                discard_cnt += 1\n",
    "                continue\n",
    "            d, m = self._make_data(sentence, self.buckets[bkt_idx])\n",
    "            self.data[bkt_idx].append(d)\n",
    "            self.mask[bkt_idx].append(m)\n",
    "\n",
    "\n",
    "        # convert data into ndarrays for better speed during training\n",
    "        data = [np.zeros((len(x), buckets[i])) for i, x in enumerate(self.data)]\n",
    "        mask = [np.zeros((len(x), buckets[i])) for i, x in enumerate(self.data)]\n",
    "        for i_bucket in range(len(self.buckets)):\n",
    "            for j in range(len(self.data[i_bucket])):\n",
    "                data[i_bucket][j, :] = self.data[i_bucket][j]\n",
    "                mask[i_bucket][j, :] = self.mask[i_bucket][j]\n",
    "\n",
    "        self.data = data\n",
    "        self.mask = mask\n",
    "\n",
    "        # Get the size of each bucket, so that we could sample\n",
    "        # uniformly from the bucket\n",
    "        bucket_sizes = [len(x) for x in self.data]\n",
    "\n",
    "        print(\"Summary of dataset ==================\")\n",
    "        print(\"Discard instance: %3d\" % discard_cnt)\n",
    "        for bkt, size in zip(buckets, bucket_sizes):\n",
    "            print(\"bucket of len %3d : %d samples\" % (bkt, size))\n",
    "\n",
    "        self.batch_size = batch_size\n",
    "        self.make_data_iter_plan()\n",
    "\n",
    "        self.init_states = init_states\n",
    "        self.init_state_arrays = [mx.nd.zeros(x[1]) for x in init_states]\n",
    "\n",
    "        self.provide_data = [('data', (batch_size, self.default_bucket_key)),\n",
    "                             ('mask', (batch_size, self.default_bucket_key))] + init_states\n",
    "        self.provide_label = [('softmax_label', (self.batch_size, self.default_bucket_key))]\n",
    "\n",
    "        self.reset()\n",
    "\n",
    "    def _preprocess(self):\n",
    "        self.content = open(self.path).read().lower().replace('\\n', '<eos>')\n",
    "\n",
    "    def _find_bucket(self, val):\n",
     "        # a linear scan is fine for a small number of buckets\n",
    "        for i, bkt in enumerate(self.buckets):\n",
    "            if bkt > val:\n",
    "                return i\n",
    "        return -1\n",
    "\n",
    "    def _make_data(self, sentence, bucket):\n",
     "        # pad at the beginning of the sequence\n",
    "        mask = [1] * bucket\n",
    "        data = [0] * bucket\n",
    "        pad = bucket - len(sentence)\n",
    "        data[pad:] = sentence\n",
    "        mask[:pad] = [0 for i in range(pad)]\n",
    "        return data, mask\n",
    "\n",
    "    def _gen_bucket(self, sentence):\n",
     "        # you could generate bucket candidates heuristically;\n",
     "        # here we directly use the manually defined buckets\n",
    "        return self.buckets\n",
    "\n",
    "\n",
    "    def _build_vocab(self):\n",
    "        cnt = collections.Counter(self.content.split(' '))\n",
    "        # take top k and abandon others as unknown\n",
    "        # 0 is left for padding\n",
    "        # last is left for unknown\n",
    "        keys = cnt.most_common(self.vocab_size - 2)\n",
    "        self.dic = {'PAD' : 0}\n",
     "        self.reverse_dic = {0 : 'PAD', self.vocab_size - 1 : \"<UNK>\"} # useful for decoding model output back to text\n",
    "        for i in range(len(keys)):\n",
    "            k = keys[i][0]\n",
    "            v = i + 1\n",
    "            self.dic[k] = v\n",
    "            self.reverse_dic[v] = k\n",
    "        print(\"Total tokens: %d, keep %d\" % (len(cnt), self.vocab_size))\n",
    "\n",
    "\n",
    "    def _text2id(self, sentence):   \n",
    "        sentence += \" <eos>\"\n",
    "        words = sentence.split(' ')\n",
    "        idx = [0] * len(words)\n",
    "        for i in range(len(words)):\n",
    "            if words[i] in self.dic:\n",
    "                idx[i] = self.dic[words[i]]\n",
    "            else:\n",
    "                idx[i] = self.vocab_size - 1\n",
    "        return idx\n",
    "        \n",
    "\n",
    "\n",
    "    def next(self):\n",
    "        init_state_names = [x[0] for x in self.init_states]\n",
    "        for i_bucket in self.bucket_plan:\n",
    "            data = self.data_buffer[i_bucket]\n",
    "            i_idx = self.bucket_curr_idx[i_bucket]\n",
    "            idx = self.bucket_idx_all[i_bucket][i_idx:i_idx+self.batch_size]\n",
    "            self.bucket_curr_idx[i_bucket] += self.batch_size\n",
    "            data[:] = self.data[i_bucket][idx]\n",
    "\n",
    "            for sentence in data:\n",
    "                assert len(sentence) == self.buckets[i_bucket]\n",
    "                \n",
    "            label = self.label_buffer[i_bucket]\n",
    "            label[:, :-1] = data[:, 1:]\n",
    "            label[:, -1] = 0\n",
    "\n",
    "            mask = self.mask_buffer[i_bucket]\n",
    "            mask[:] = self.mask[i_bucket][idx]\n",
    "\n",
    "            data_all = [mx.nd.array(data), mx.nd.array(mask)] + self.init_state_arrays\n",
    "            label_all = [mx.nd.array(label)]\n",
    "            data_names = ['data', 'mask'] + init_state_names\n",
    "            label_names = ['softmax_label']\n",
    "\n",
    "            data_batch = SimpleBatch(data_names, data_all, label_names, label_all,\n",
    "                                         self.buckets[i_bucket])\n",
    "            yield data_batch\n",
    "\n",
    "    __iter__ = next\n",
    "\n",
    "    def reset(self):\n",
    "        self.bucket_curr_idx = [0 for x in self.data]\n",
    "        \n",
    "\n",
    "    def make_data_iter_plan(self):\n",
    "        \"make a random data iteration plan\"\n",
    "        # truncate each bucket into multiple of batch-size\n",
    "        bucket_n_batches = []\n",
    "        for i in range(len(self.data)):\n",
    "            bucket_n_batches.append(int(len(self.data[i]) / self.batch_size))\n",
    "            self.data[i] = self.data[i][:int(bucket_n_batches[i]*self.batch_size)]\n",
    "        bucket_plan = np.hstack([np.zeros(n, int)+i for i, n in enumerate(bucket_n_batches)])\n",
    "        np.random.shuffle(bucket_plan)\n",
    "\n",
    "        bucket_idx_all = [np.random.permutation(len(x)) for x in self.data]\n",
    "\n",
    "        self.bucket_plan = bucket_plan\n",
    "        self.bucket_idx_all = bucket_idx_all\n",
    "        self.bucket_curr_idx = [0 for x in self.data]\n",
    "\n",
    "        self.data_buffer = []\n",
    "        self.label_buffer = []\n",
    "        self.mask_buffer = []\n",
    "\n",
    "        for i_bucket in range(len(self.data)):\n",
    "            data = np.zeros((self.batch_size, self.buckets[i_bucket]))\n",
    "            label = np.zeros((self.batch_size, self.buckets[i_bucket]))\n",
    "            mask = np.zeros((self.batch_size, self.buckets[i_bucket]))\n",
    "            self.data_buffer.append(data)\n",
    "            self.label_buffer.append(label)\n",
    "            self.mask_buffer.append(mask)\n",
    "\n",
    "\n",
     "    def reset_states(self, states_data=None):\n",
     "        if states_data is None:\n",
    "            for arr in self.init_state_arrays:\n",
    "                arr[:] = 0\n",
    "        else:\n",
    "            assert len(states_data) == len(self.init_state_arrays)\n",
    "            for i in range(len(states_data)):\n",
    "                states_data[i].copyto(self.init_state_arrays[i])"
   ]
  },
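  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the padding convention concrete, here is the logic of ```_make_data``` in isolation: for a bucket of size 7 and a 4-token sentence, the data is left-padded with the PAD id 0, and the mask zeroes out the padded positions (the token ids below are made up for illustration).\n",
    "\n",
    "```python\n",
    "def make_data(sentence, bucket):\n",
    "    # left-pad with PAD id 0; mask is 0 over padding, 1 over real tokens\n",
    "    pad = bucket - len(sentence)\n",
    "    data = [0] * pad + list(sentence)\n",
    "    mask = [0] * pad + [1] * len(sentence)\n",
    "    return data, mask\n",
    "\n",
    "data, mask = make_data([12, 7, 45, 2], 7)\n",
    "# data -> [0, 0, 0, 12, 7, 45, 2]\n",
    "# mask -> [0, 0, 0, 1, 1, 1, 1]\n",
    "```"
   ]
  },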
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Put all together\n",
    "\n",
     "For computation-cost reasons, we don't run the training block in this demo. Putting everything together looks like this (you may need to set up logging correctly to see the results):\n",
     "\n",
     "```python\n",
    "data_train = BucketSentenceIter(path=\"./book\",\n",
    "                                buckets=[10,20,30,40,50], \n",
    "                                vocab_size=10000,\n",
    "                                batch_size=batch_size,\n",
    "                                init_states=init_states)\n",
    "\n",
    "mod = mx.mod.BucketingModule(sym_gen, default_bucket_key=data_train.default_bucket_key, context=mx.gpu())\n",
    "\n",
    "mod.fit(data_train, num_epoch=1,\n",
    "        eval_metric=mx.metric.np(Perplexity),\n",
    "        batch_end_callback=mx.callback.Speedometer(batch_size, 50),\n",
    "        initializer=mx.init.Xavier(factor_type=\"in\", magnitude=2.34),\n",
    "        optimizer='sgd',\n",
    "        optimizer_params={'learning_rate':0.01, 'momentum': 0.9, 'wd': 0.00001})\n",
    "```"
   ]
  },
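  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ```Perplexity``` function passed to ```mx.metric.np``` above is assumed to be defined earlier in the notebook; one common definition, which skips the padding label 0 when averaging the cross-entropy, is sketched below.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def Perplexity(label, pred):\n",
    "    # pred: (batch * seq_len, vocab) softmax outputs\n",
    "    # label: token ids, flattened to align with the rows of pred\n",
    "    label = label.reshape((-1,))\n",
    "    loss, num = 0., 0\n",
    "    for i in range(pred.shape[0]):\n",
    "        if int(label[i]) == 0:  # skip padding labels\n",
    "            continue\n",
    "        loss += -np.log(max(1e-10, pred[i][int(label[i])]))\n",
    "        num += 1\n",
    "    return np.exp(loss / num)\n",
    "```"
   ]
  },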
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Hints for implementing sequence-to-sequence models\n",
     "\n",
     "- Learn how to use a custom training loop with modules.\n",
     "- Change the IO to provide data for the encoder and decoder separately. 2D bucketing needs a better heuristic for searching bucket candidates.\n",
     "- Pad the encoder data at the beginning, and pad the decoder data at the end.\n",
     "- After the encoder forward pass, get the states from the module outputs and set them on the data iterator, so the decoder pass becomes straightforward."
   ]
  }
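  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The padding convention in the third hint can be sketched as follows (the helper ```pad_pair``` and the pad id 0 are our assumptions, not notebook code):\n",
    "\n",
    "```python\n",
    "def pad_pair(src, tgt, src_bucket, tgt_bucket, pad_id=0):\n",
    "    # encoder input: pad at the beginning; decoder input: pad at the end\n",
    "    enc = [pad_id] * (src_bucket - len(src)) + list(src)\n",
    "    dec = list(tgt) + [pad_id] * (tgt_bucket - len(tgt))\n",
    "    return enc, dec\n",
    "\n",
    "enc, dec = pad_pair([5, 6], [7, 8, 9], 4, 5)\n",
    "# enc -> [0, 0, 5, 6]\n",
    "# dec -> [7, 8, 9, 0, 0]\n",
    "```"
   ]
  }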
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
