{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Restricted Boltzmann Machines\n",
    "=================\n",
    "\n",
    "A [Restricted Boltzmann machine (RBM)](https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. RBMs are a particular form of [Boltzmann machine](https://en.wikipedia.org/wiki/Boltzmann_machine) subject to a **restriction**: there are no connections between nodes within the same group of units, meaning that the network forms a bipartite graph (see below).\n",
    "\n",
    "The RBM is made of two layers $\\boldsymbol{v}$ and $\\boldsymbol{h}$, each one having a certain number of units. The input units are called **visible units** of the RBM because their states are observed. The feature detectors correspond to non-observed **hidden units**. The hidden units are often referred to as **latent variables**, as they do not result directly from the observed data. The two layers are connected through a matrix of weights $\\boldsymbol{W}$. The units inside a layer are not connected to each other, meaning that the network forms a [bipartite graph](https://en.wikipedia.org/wiki/Bipartite_graph). The image below represents an RBM with $i$ visible units and $j$ hidden units:\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"../etc/img/rbm_architecture.png\" width=\"500\">\n",
    "</p>\n",
    "\n",
    "\n",
    "When RBMs are viewed as neural networks they are used as function approximators that can map the observations to the expectations of the latent variables in the top layer. The top layer can be interpreted as the learned features which can serve as inputs for a supervised learning system.\n",
    "\n",
    "However, the main application of RBMs is as **generative models**.  Learning an RBM corresponds to fitting its parameters such that the RBM approximates the distribution underlying the training data. After learning, the RBM can be used to generate new samples. For instance, we can fix some visible units corresponding to a partial observation and sample the remaining visible units to complete the observation.\n",
    "\n",
    "A question arises: is it possible to use RBMs for **classification**? There are some tricks that allow an RBM to be used in this way. One of the most interesting is to extend the input variable using an **indicator variable** as follows:\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"../etc/img/rbm_labeled_data.png\" width=\"500\">\n",
    "</p>\n",
    "\n",
    "The input image of a digit is extended using a graphical $5 \\times 2$ matrix, where the location related to the number is marked in black. In this way we can train the RBM on the marked training set, and then test the classification capability on the unmarked test set. If the RBM correctly modelled the data distribution it will be able to return the marked version of the test image. Unfortunately this trick cannot be used so easily, because it requires a way to include the indicator variable in the training set and to easily read its value from the returned sample.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Energy of a configuration\n",
    "-----------------------------\n",
    "\n",
    "The visible and hidden units are often organised as vectors, and a pair of visible-hidden vectors is called a **configuration**. A joint configuration of the visible and hidden units has an **energy** (see Hopfield, 1982) given by:\n",
    "\n",
    "$$E(v,h) = -a^{T}v -b^{T}h -v^{T} Wh $$\n",
    "\n",
    "where $W$ (of size $m \\times n$) is the matrix of weights connecting the visible units $v$ and the hidden units $h$, while $a$ and $b$ are the bias weights of the visible and hidden units respectively.\n",
    "This definition of energy is the same used in [Hopfield networks](https://en.wikipedia.org/wiki/Hopfield_network)."
   ]
  },
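  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the energy can be computed directly with NumPy. The following is only a minimal sketch with randomly chosen parameters (the names and sizes are arbitrary, not taken from the text):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def energy(v, h, a, b, W):\n",
    "    # E(v,h) = -a^T v - b^T h - v^T W h, with W of size (visible x hidden)\n",
    "    return -np.dot(a, v) - np.dot(b, h) - np.dot(v, np.dot(W, h))\n",
    "\n",
    "# Tiny example: 3 visible and 2 hidden binary units, random parameters\n",
    "rng = np.random.RandomState(0)\n",
    "a, b, W = rng.randn(3), rng.randn(2), rng.randn(3, 2)\n",
    "v = np.array([1, 0, 1])\n",
    "h = np.array([0, 1])\n",
    "print(energy(v, h, a, b, W))\n",
    "# With all units off, every term vanishes and the energy is zero\n",
    "print(energy(np.zeros(3), np.zeros(2), a, b, W))"
   ]
  },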
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Joint probability distribution over units\n",
    "------------------------------------------------\n",
    "\n",
    "Using the energy it is possible to define a series of probability distributions over both visible and hidden units. Here we consider both $v$ and $h$ as Bernoulli random variables, meaning that they are binary units (however there are ways to generalise to real-valued units).\n",
    "\n",
    "The [joint probability](https://en.wikipedia.org/wiki/Joint_probability_distribution) of every possible pair of a visible and a hidden vector can be defined as follows:\n",
    "\n",
    "$$ P(v,h) = \\frac{1}{Z} e^{-E(v,h)}$$\n",
    "\n",
    "where the **partition function** $Z$ is a normalisation factor, given by summing over all possible pairs of visible and hidden vectors:\n",
    "\n",
    "$$ Z = \\sum_{v,h} e^{-E(v,h)}$$\n",
    "\n",
    "An important problem arises when the estimation of $Z$ is required. Since both $v$ and $h$ are binary, there is an exponential number of values that they can take. It is possible to understand this point by considering $v$ and $h$ as bit arrays. If the array has $8$ elements then there are $2^8$ possible combinations to consider. It is intuitive to see that increasing the number of bits makes the number of possible combinations explode (since there is an exponential factor involved). This problem makes the estimation of the partition function **intractable**. The intractability of $Z$ can be managed using different forms of approximation (introduced later)."
   ]
  },
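  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The intractability is easy to appreciate on a toy model, where brute-force enumeration is still feasible. The sketch below (random parameters, arbitrary sizes) enumerates all $2^{m+n}$ configurations to compute $Z$ and verifies that the joint probabilities sum to one:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import itertools\n",
    "import numpy as np\n",
    "\n",
    "def energy(v, h, a, b, W):\n",
    "    return -np.dot(a, v) - np.dot(b, h) - np.dot(v, np.dot(W, h))\n",
    "\n",
    "def partition_function(a, b, W):\n",
    "    m, n = W.shape\n",
    "    Z = 0.0\n",
    "    # Brute force: sum exp(-E) over all 2^m visible and 2^n hidden binary vectors\n",
    "    for v in itertools.product([0, 1], repeat=m):\n",
    "        for h in itertools.product([0, 1], repeat=n):\n",
    "            Z += np.exp(-energy(np.array(v), np.array(h), a, b, W))\n",
    "    return Z\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "a, b, W = rng.randn(3), rng.randn(2), rng.randn(3, 2)\n",
    "Z = partition_function(a, b, W)\n",
    "# The joint P(v,h) = exp(-E)/Z must sum to one over all 2^(3+2) = 32 configurations\n",
    "total = sum(np.exp(-energy(np.array(v), np.array(h), a, b, W)) / Z\n",
    "            for v in itertools.product([0, 1], repeat=3)\n",
    "            for h in itertools.product([0, 1], repeat=2))\n",
    "print(Z, total)"
   ]
  },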
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Marginal probability distributions over units\n",
    "---------------------------------------------------\n",
    "\n",
    "Using the energy it is also possible to compute the [marginal probability distribution](https://en.wikipedia.org/wiki/Marginal_distribution) on the visible units:\n",
    "\n",
    "$$ P(v) = \\frac{1}{Z} \\sum_{h} e^{-E(v,h)}$$\n",
    "\n",
    "that is also called the **Gibbs distribution**. Similarly we can get the marginal distribution of the hidden units:\n",
    "\n",
    "$$ P(h) = \\frac{1}{Z} \\sum_{v} e^{-E(v,h)}$$\n",
    "\n",
    "Since both $P(v)$ and $P(h)$ require the partition function, we cannot directly estimate them.\n",
    "\n",
    "It is possible to show that the marginal distribution $P(v)$, shown above, can be rearranged and expressed in another form:\n",
    "\n",
    "$$ P(v) = \\frac{1}{Z} \\ \\text{exp} \\big[ a^{T} v + \\sum_{j=1}^{H} \\text{log} \\big( 1 + \\text{exp} ( b_{j} + W_{[j,:]} v) \\big) \\big] $$\n",
    "\n",
    "where $H$ is the total number of hidden units and $W_{[j,:]}$ means we are accessing the $j$-th row of $W$. Here I used $\\text{exp}()$ instead of $e$ only to make the notation less cluttered. Notice that the expression $\\text{log}(1+\\text{exp}())$ is also known as the [softplus function](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)), a smooth version of the ReLU activation function. We can rewrite $P(v)$ based on this observation:\n",
    "\n",
    "$$ P(v) = \\frac{1}{Z} \\ \\text{exp} \\big[ a^{T} v + \\sum_{j=1}^{H} \\text{softplus} ( b_{j} + W_{[j,:]} v) \\big] $$\n",
    "\n",
    "Now it is useful to analyse the last equation and try to understand when the **probability** $P(v)$ **gets larger**. A first possible way to increase the probability is to increase the outcome of the product $a^{T} v$. Let's recall that $a$ is a vector of bias terms, and $v$ is a vector of binary units. Another way to increase the probability is to increase the term inside the softplus function. One way to do it is to increase the bias vector $b$. Another way is to make the visible vector $v$ well aligned with the rows of $W$.\n",
    "\n",
    "It turns out that in the literature the (negated) content of the exponential above is often called the **free energy** $F(v)$, and for this reason we can simply use the notation:\n",
    "\n",
    "$$ P(v) = \\frac{1}{Z} \\ \\text{exp} \\big[ -F(v) \\big] $$\n"
   ]
  },
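  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The equivalence between the sum over hidden configurations and the softplus form can be checked numerically on a toy model. A minimal sketch with random parameters (here, for convenience, $W$ is stored with shape hidden $\\times$ visible, so that $W_{[j,:]} v$ is a simple row-vector product):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import itertools\n",
    "import numpy as np\n",
    "\n",
    "def free_energy(v, a, b, W):\n",
    "    # F(v) = -a^T v - sum_j softplus(b_j + W[j,:] v); np.log1p(np.exp(x)) is softplus(x)\n",
    "    return -np.dot(a, v) - np.sum(np.log1p(np.exp(b + np.dot(W, v))))\n",
    "\n",
    "def energy(v, h, a, b, W):\n",
    "    # Same energy as before, written for W of shape (hidden x visible)\n",
    "    return -np.dot(a, v) - np.dot(b, h) - np.dot(h, np.dot(W, v))\n",
    "\n",
    "rng = np.random.RandomState(1)\n",
    "n_vis, n_hid = 4, 3\n",
    "a, b, W = rng.randn(n_vis), rng.randn(n_hid), rng.randn(n_hid, n_vis)\n",
    "v = np.array([1, 0, 1, 1])\n",
    "# Brute-force marginal (up to 1/Z): sum_h exp(-E(v,h)) should equal exp(-F(v))\n",
    "brute = sum(np.exp(-energy(v, np.array(h), a, b, W))\n",
    "            for h in itertools.product([0, 1], repeat=n_hid))\n",
    "print(brute, np.exp(-free_energy(v, a, b, W)))"
   ]
  },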
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example of joint probability and marginal probability estimation\n",
    "------------------------------------------------------------------------\n",
    "\n",
    "In this example we estimate the probability of a configuration of the visible units. Here I use the great Coursera tutorial of Hinton, based on a simple network with 2 visible and 2 hidden units (with no biases). It is necessary to point out that this network is not a proper RBM, because there are connections between the two hidden units, and the visible and hidden layers are not fully connected. The network is not even a proper Hopfield net, because there are hidden units and no connections between the visible units. Nevertheless this example is helpful in understanding how joint and marginal distributions can be estimated.\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"../etc/img/rbm_probability_visible_configuration.png\" width=\"800\">\n",
    "</p>\n",
    "\n",
    "Since there are 4 binary units, we have $2^4=16$ possible configurations of visible-hidden units (each row of the table is a configuration). Our estimation proceeds in this way: for each one of the 16 configurations it is possible to compute the negative energy, then we can exponentiate the results. \n",
    "\n",
    "To obtain the value of the **partition function** we have to sum up all the values in the column $e^{-E}$ that in this particular example gives $39.70$. \n",
    "\n",
    "To get the **joint probability** $P(v,h)$ we divide all the 16 values in the column $e^{-E}$ by $39.70$. Remember that $v$ and $h$ are vectors of binary units in our case.\n",
    "\n",
    "Finally we can marginalise over the hidden units to get the **marginal probability** of the visible units, meaning that we sum up the values of the first four rows (green) to get the probability $P(v=[1,1])$, we sum the rows from 5 to 8 (blue) to get the probability of the visible configuration $P(v=[1,0])$, we sum the rows 9-12 (yellow) to get the probability $P(v=[0,1])$, and finally we sum the last four rows to get the probability $P(v=[0,0])$."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conditional probability distributions (inference)\n",
    "-------------------------------------------------------\n",
    "\n",
    "Using the energy we can also obtain the [conditional probability distribution](https://en.wikipedia.org/wiki/Conditional_probability_distribution) of the $m$ visible units given the hidden units:\n",
    "\n",
    "\n",
    "$$ P(v|h) = \\prod_{i=1}^{m} P(v_{i} | h) $$\n",
    "\n",
    "and that of the $n$ hidden units given the visible units:\n",
    "\n",
    "$$ P(h|v) = \\prod_{j=1}^{n} P(h_{j} | v) $$\n",
    "\n",
    "In both cases we used the assumption that the visible units are independent given the hidden units (and vice versa). Unlike the joint and marginal probability distributions, the conditional distributions are **tractable**, because the partition function is not involved in either $P(v|h)$ or $P(h|v)$.\n",
    "\n",
    "Through some rearrangements we can rewrite $P(v|h)$ and $P(h|v)$ in terms of a compact notation involving the sigmoid function:\n",
    "\n",
    "\n",
    "$$ P(v_{i}|h) = \\text{sig}(a_{i} + h^{T} W_{[:,i]}) $$\n",
    "\n",
    "where $\\text{sig}()$ is the sigmoid function and $W_{[:,i]}$ means that we are accessing the $i$-th column of $W$. The same thing can be done when we need to estimate $P(h|v)$:\n",
    "\n",
    "$$ P(h_{j}|v) = \\text{sig}(b_{j} + W_{[j,:]} v) $$\n",
    "\n",
    "where $W_{[j,:]}$ means we are accessing the $j$-th row of $W$. This compact notation is computationally friendly and allows us to easily sample from the two distributions."
   ]
  },
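  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These two conditionals translate directly into code. A minimal sampling sketch with random parameters (here $W$ is stored with shape hidden $\\times$ visible, so $W_{[j,:]} v$ and $h^{T} W_{[:,i]}$ become matrix products with $W$ and its transpose):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0 / (1.0 + np.exp(-x))\n",
    "\n",
    "# W has shape (n_hidden, n_visible): row j connects hidden unit j to the visible layer\n",
    "def sample_h_given_v(v, b, W, rng):\n",
    "    p = sigmoid(b + np.dot(W, v))    # P(h_j = 1 | v) for every j\n",
    "    return (rng.uniform(size=p.shape) < p).astype(int), p\n",
    "\n",
    "def sample_v_given_h(h, a, W, rng):\n",
    "    p = sigmoid(a + np.dot(W.T, h))  # P(v_i = 1 | h) for every i\n",
    "    return (rng.uniform(size=p.shape) < p).astype(int), p\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "n_vis, n_hid = 6, 4\n",
    "a, b, W = rng.randn(n_vis), rng.randn(n_hid), rng.randn(n_hid, n_vis)\n",
    "v = rng.randint(0, 2, size=n_vis)\n",
    "h, ph = sample_h_given_v(v, b, W, rng)\n",
    "v_rec, pv = sample_v_given_h(h, a, W, rng)\n",
    "print(ph)\n",
    "print(h)"
   ]
  },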
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "RBM as an undirected graphical model (Markov Random Field)\n",
    "-------------------------------------------------------------\n",
    "\n",
    "An RBM can be described in terms of graphical models, in particular in terms of undirected graphs such as **Markov Random Fields (MRFs)**. An RBM is an MRF whose graph is bipartite: every edge connects a visible unit to a hidden unit.\n",
    "\n",
    "\n",
    "Let's recall that in an undirected graph two vertices are said to be **adjacent** if they are connected by an edge. A **clique** is a subset of vertices of an undirected graph such that every two distinct vertices in the clique are adjacent.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Training objective\n",
    "---------------------\n",
    "\n",
    "The idea is to apply gradient descent on the log-likelihood of $P(v)$, the marginal probability of the visible units. Let's recall what exactly $P(v)$ is:\n",
    "\n",
    "$$ P(\\boldsymbol{v}) = \\frac{1}{Z} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}$$\n",
    "\n",
    "Based on the principle of **Maximum Likelihood Estimation (MLE)** we want to *maximise* the average log-likelihood on all the $N$ samples of the dataset, where each sample can be expressed as part of the set $D = \\{v^{(1)}, v^{(2)}, ...,  v^{(N)} \\}$. Here it is important to suppose that the samples are i.i.d. and drawn from an unknown distribution. Moreover we must remember that we have a set of weights, or parameters, called $\\theta$ that we want to adjust in order to maximise the log-likelihood $\\mathcal{L}$. Since in deep learning we often want to *minimise* instead of maximise, we take the negative of the log-likelihood. Given all these considerations we can now write our target as:\n",
    "\n",
    "$$ \\mathbb{E} \\big[ - \\text{log} \\ \\mathcal{L}(\\theta | D) \\big] = \\frac{1}{N} \\sum_{n} - \\text{log} \\big[ P(\\boldsymbol{v}^{(n)}) \\big]$$\n",
    "\n",
    "Using **gradient descent** is the usual way to find the parameters $\\theta$, since an analytical solution is not feasible. In order to use gradient descent we need to find the gradient of the negative log-likelihood $\\mathcal{L}(\\theta | D)$. Given an input vector for the visible units called $\\boldsymbol{v}$, we can write down the negative log-likelihood in this form:\n",
    "\n",
    "$$ - \\text{log} \\ \\mathcal{L}(\\theta | \\boldsymbol{v}) =  - \\text{log} \\frac{1}{Z} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} =  - \\text{log} \\frac{\\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}}{\\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}} = - \\text{log} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} + \\text{log} \\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} $$\n",
    "\n",
    "Here I used a basic property of logarithms: $\\text{log}(a/b) = \\text{log}(a) - \\text{log}(b)$. Starting from the last simplification it is easier to estimate the gradient of the parameters:\n",
    "\n",
    "$$\\frac{ \\partial{-\\text{log} P(\\boldsymbol{v}) } }{\\partial{\\theta}} = \n",
    "- \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\bigg[ \\text{log} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\bigg] + \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\bigg[ \\text{log} \\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\bigg]$$\n",
    "\n",
    "From the last form we have to apply the **chain rule** to an expression of the type $\\text{log}(e^{-E(x)})$, meaning that we need to multiply the derivative of the log function, the derivative of the exponential, and the derivative of the energy function:\n",
    "\n",
    "$$ - \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\bigg[ \\text{log} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\bigg] + \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\bigg[ \\text{log} \\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\bigg] =\n",
    "\\frac{1}{\\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}\n",
    "- \\frac{1}{\\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}} \\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}\n",
    "$$\n",
    "\n",
    "The minus sign in front of the energy function moves out in front of each term, changing the overall sign of the two components of the equation. We can now apply a very simple fact regarding $P(\\boldsymbol{v},\\boldsymbol{h})$ that we know from previous sections:\n",
    "\n",
    "$$ P(\\boldsymbol{v},\\boldsymbol{h}) = \\frac{1}{Z} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \n",
    "= \\frac{ e^{-E(\\boldsymbol{v},\\boldsymbol{h})} }{ \\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} }$$\n",
    "\n",
    "Another interesting fact we should consider is related to $P(\\boldsymbol{h} | \\boldsymbol{v})$, and is the following:\n",
    "\n",
    "$$ P(\\boldsymbol{h} | \\boldsymbol{v}) = \\frac{P(\\boldsymbol{v}, \\boldsymbol{h})}{P(\\boldsymbol{v})} \n",
    "= \\frac{ \\frac{1}{Z} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} }{ \\frac{1}{Z} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} } \n",
    "= \\frac{ e^{-E(\\boldsymbol{v},\\boldsymbol{h})} }{\\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} } \n",
    "$$\n",
    "\n",
    "This second term is particularly convenient for us because it allows simplifying the two partition function terms in $P(\\boldsymbol{h} | \\boldsymbol{v})$, getting rid of an intractable estimation. Thanks to the two expressions we can now rewrite our gradient in a nicer form:\n",
    "\n",
    "$$\\frac{1}{\\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}} \\sum_{\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}\n",
    "- \\frac{1}{\\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})}} \\sum_{\\boldsymbol{v},\\boldsymbol{h}} e^{-E(\\boldsymbol{v},\\boldsymbol{h})} \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}\n",
    "= \\sum_{\\boldsymbol{h}} P(\\boldsymbol{h} | \\boldsymbol{v}) \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}\n",
    "- \\sum_{\\boldsymbol{v}, \\boldsymbol{h}} P(\\boldsymbol{v}, \\boldsymbol{h}) \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}$$\n",
    "\n",
    "It is now necessary to focus on the last form we obtained. This form is a difference between two expectations. The first term can be considered as the expected value of the energy function under the conditional distribution of the hidden variables given the training example. It must be considered an expectation because we only observed $\\boldsymbol{v}$, which is given as input to the RBM, but we do not know the value of $\\boldsymbol{h}$. This expectation can be written as follows:\n",
    "\n",
    "$$ \\sum_{\\boldsymbol{h}} P(\\boldsymbol{h} | \\boldsymbol{v}) \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta} \n",
    "= \\mathbb{E}_{\\boldsymbol{h}} \\bigg[ \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta}  \\ \\big| \\ \\boldsymbol{v}  \\bigg]$$\n",
    "\n",
    "The second term is the expected value of the energy function under the model distribution. This is an expectation with respect to both $\\boldsymbol{v}$ and $\\boldsymbol{h}$, which are not observed. This expectation can be written as follows:\n",
    "\n",
    "$$ \\sum_{\\boldsymbol{v}, \\boldsymbol{h}} P(\\boldsymbol{v}, \\boldsymbol{h}) \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta} \n",
    "= \\mathbb{E}_{\\boldsymbol{v}, \\boldsymbol{h}} \\bigg[ \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta} \\bigg]$$\n",
    "\n",
    "Given these two observations regarding the probabilities we are dealing with, we can now write the final form of the gradient of the negative log-likelihood:\n",
    "\n",
    "$$\\frac{ \\partial{-\\text{log} P(\\boldsymbol{v}) } }{\\partial{\\theta}}\n",
    "=  \\mathbb{E}_{\\boldsymbol{h}} \\bigg[ \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta} \\ \\big| \\ \\boldsymbol{v} \\bigg]\n",
    "-  \\mathbb{E}_{\\boldsymbol{v}, \\boldsymbol{h}} \\bigg[ \\frac{\\partial E(\\boldsymbol{v},\\boldsymbol{h})}{\\partial \\theta} \\bigg]\n",
    "$$\n",
    "\n",
    "The two terms involved in estimating the gradient are called the **positive phase** and the **negative phase**. Remember that here the idea is to minimise the last equation, meaning that we want to make the two expectations as similar as possible, reducing the difference between them. This is extremely fascinating, because we are in fact reducing the difference between what has been estimated under an observation, and what has been estimated during a hallucination phase. There is some speculation that something similar may happen in our brain while dreaming during the REM phase.\n",
    "\n",
    "The **problem** now is with the negative phase, because it requires estimating an expectation with respect to both $\\boldsymbol{v}$ and $\\boldsymbol{h}$, and this is intractable due to the exponential growth of the number of terms involved. The solution to this problem is to use an algorithm called **Contrastive Divergence**."
   ]
  },
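  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the positive phase concrete, consider the weight parameters. With $E = -a^{T}v - b^{T}h - h^{T}Wv$ (here $W$ of shape hidden $\\times$ visible) we have $\\partial E / \\partial W_{ji} = -h_{j} v_{i}$, and the conditional expectation given $\\boldsymbol{v}$ replaces $h_{j}$ by $P(h_{j}=1|v)$. A minimal sketch with random parameters and arbitrary sizes:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0 / (1.0 + np.exp(-x))\n",
    "\n",
    "# Positive phase for the weights: E_h[ dE/dW | v ] with dE/dW[j,i] = -h_j * v_i,\n",
    "# where the conditional expectation replaces h_j by P(h_j = 1 | v)\n",
    "def positive_phase_W(v, b, W):\n",
    "    p_h = sigmoid(b + np.dot(W, v))\n",
    "    return -np.outer(p_h, v)\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "n_vis, n_hid = 5, 3\n",
    "b, W = rng.randn(n_hid), rng.randn(n_hid, n_vis)\n",
    "v = rng.randint(0, 2, size=n_vis)\n",
    "print(positive_phase_W(v, b, W).shape)  # (3, 5)"
   ]
  },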
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Contrastive divergence\n",
    "-------------------------\n",
    "\n",
    "To solve the problem of estimating the expectation of the negative phase in a RBM we can use a method proposed by Hinton and called **Contrastive Divergence (CD)**. The general idea is not to compute the precise value of the expectation, but to approximate this value through a [point estimation](https://en.wikipedia.org/wiki/Point_estimation).\n",
    "\n",
    "\n",
    "Gibbs sampling is a simple **Markov Chain Monte Carlo (MCMC)** method for producing samples from the **joint probability distribution** of multiple random variables. The idea is to construct a Markov chain by updating each variable given the state of the others (conditional distribution).\n",
    "\n",
    "\n",
    "We said that due to the absence of connections between variables of the same layer, the conditional distributions $P(h | v)$ and $P(v |h)$ factorise nicely. This conditional independence makes Gibbs sampling especially easy. We do not need to sample a value for $h_{j}$ conditioned on the other hidden units, because they are not connected. This means that the states of all variables in one layer can be sampled jointly, in just two steps: \n",
    "\n",
    "1. sampling a new state $h$ for the hidden neurons based on $P(h | v)$\n",
    "2. sampling a state $v$ for the visible layer based on $P(v |h)$ (reconstruction)\n",
    "\n",
    "This is also referred to as **block Gibbs sampling**.\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"../etc/img/rbm_sampling.png\" width=\"850\">\n",
    "</p>\n",
    "\n"
   ]
  },
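  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Putting the pieces together, a single CD-1 update can be sketched as follows: one block Gibbs step provides the negative-phase statistics, while the data vector provides the positive phase. This is only a minimal illustrative sketch (random initialisation, arbitrary hyperparameters, a single training vector), not a full training loop:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0 / (1.0 + np.exp(-x))\n",
    "\n",
    "# One CD-1 update on a single binary visible vector v0 (W of shape n_hidden x n_visible)\n",
    "def cd1_step(v0, a, b, W, lr, rng):\n",
    "    ph0 = sigmoid(b + np.dot(W, v0))              # positive phase: P(h|v0)\n",
    "    h0 = (rng.uniform(size=ph0.shape) < ph0).astype(float)\n",
    "    pv1 = sigmoid(a + np.dot(W.T, h0))            # reconstruction: P(v|h0)\n",
    "    v1 = (rng.uniform(size=pv1.shape) < pv1).astype(float)\n",
    "    ph1 = sigmoid(b + np.dot(W, v1))              # negative phase after one Gibbs step\n",
    "    W += lr * (np.outer(ph0, v0) - np.outer(ph1, v1))\n",
    "    a += lr * (v0 - v1)\n",
    "    b += lr * (ph0 - ph1)\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "n_vis, n_hid = 6, 3\n",
    "a, b = np.zeros(n_vis), np.zeros(n_hid)\n",
    "W = 0.01 * rng.randn(n_hid, n_vis)\n",
    "v = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])\n",
    "for _ in range(100):\n",
    "    cd1_step(v, a, b, W, 0.1, rng)\n",
    "# After training, the reconstruction probabilities should be close to v\n",
    "print(sigmoid(a + np.dot(W.T, sigmoid(b + np.dot(W, v)))))"
   ]
  },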
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Resources\n",
    "------------\n",
    "\n",
    "- *\"A Practical Guide to Training RBMs\"*, Geoffrey Hinton [[pdf]](https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf)\n",
    "- *\"Notes on Contrastive Divergence\"*, Woodford [[pdf]](http://www.robots.ox.ac.uk/~ojw/files/NotesOnCD.pdf)\n",
    "- *\"Training Products of Experts by Minimizing Contrastive Divergence\"*, 2002, Geoffrey Hinton [[web]](https://www.mitpressjournals.org/doi/abs/10.1162/089976602760128018)\n",
    "- *\"Training Restricted Boltzmann Machines: An Introduction\"*, Fischer and Igel [[pdf]](https://pdfs.semanticscholar.org/e1a1/c3d3e0336d6bfc48fd480a9d18a4f92edc60.pdf)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
