\documentclass{article} \pdfoutput=1 \usepackage{iclr2017_conference,times} \usepackage{url} \usepackage{graphicx} \usepackage{subfig} \usepackage{amsmath} \usepackage{amssymb} \usepackage{verbatim} \usepackage[multiple]{footmisc} \usepackage{booktabs} \usepackage{pgfplots} \usepackage{tikz} \usetikzlibrary{arrows,automata,calc,backgrounds} \usepackage{array} \pgfplotsset{mystyle/.append style={axis x line=middle, axis y line=middle, xlabel={$x$}, ylabel={$y$}, axis equal }} \usepgfplotslibrary{external,statistics} \usetikzlibrary{pgfplots.external} \tikzexternalize \title{Discrete Variational Autoencoders} \author{Jason Tyler Rolfe \\ D-Wave Systems \\ Burnaby, BC V5G-4M9, Canada \\ \texttt{jrolfe@dwavesys.com} } \date{\today} \def\etal{{\textit{et~al.~}}} \def\normal{\mathcal{N}} \def\KL{\text{KL}} \def\cov{\text{cov}} \def\erf{\text{erf}} \def\sign{\text{sign}} \def\hz{\mathfrak{z}} \def\G{F} \def\J{W} \def\h{b} \iclrfinalcopy \begin{document} \maketitle \begin{abstract} Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and \mbox{Caltech-101} Silhouettes datasets. \end{abstract} \section{Introduction} Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification~\citep{hinton2006fast, salakhutdinov2009deep, rasmus2015semi}. Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difficult to directly transform the image of a person to one of a car while remaining on the manifold of natural images. It would be natural to represent the space within each disconnected component with continuous variables, and the selection amongst these components with discrete variables. 
In contrast, most state-of-the-art probabilistic models use exclusively discrete variables --- as do DBMs~\citep{salakhutdinov2009deep}, NADEs~\citep{larochelle2011neural}, sigmoid belief networks~\citep{spiegelhalter1990sequential, bornschein2016bidirectional}, and DARNs~\citep{gregor2014deep} --- or exclusively continuous variables --- as do VAEs~\citep{kingma2014auto, rezende2014stochastic} and GANs~\citep{goodfellow2014generative}.\footnote{Spike-and-slab RBMs~\citep{courville2011unsupervised} use both discrete and continuous latent variables.} Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible~\citep{bengio2013estimating, raiko2015techniques}. We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations. Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them \emph{discrete variational autoencoders} (discrete VAEs). \subsection{Variational autoencoders are incompatible with discrete distributions} \label{sec:ELBO} Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable~\citep{long2010restricted}, as is sampling from the posterior of a directed graphical model conditioned on its leaf variables~\citep{dagum1993approximating}. In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood \citep{jordan1999introduction}, such as the evidence lower bound~\citep[ELBO, $\mathcal{L}(x, \theta, \phi)$;][]{hinton1994amd}: \begin{equation} \mathcal{L}(x, \theta, \phi) = \log p(x | \theta) - \KL[q(z | x, \phi)||p(z|x,\theta)], \label{variational-inference-equation} \end{equation} where $q(z | x, \phi)$ is a computationally tractable approximation to the posterior distribution $p(z | x, \theta)$. We denote the observed random variables by $x$, the latent random variables by $z$, the parameters of the generative model by $\theta$, and the parameters of the approximating posterior by $\phi$. The variational autoencoder~\citep[VAE;][]{kingma2014auto, rezende2014stochastic, kingma2014semi} regroups the evidence lower bound of Equation~\ref{variational-inference-equation} as: \begin{equation} \label{eq:original-VAE} \mathcal{L}(x, \theta, \phi) = -\underbrace{\KL\left[q(z|x, \phi) || p(z|\theta) \right] \rule[-4pt]{0pt}{5pt}}_{\text{KL term}} + \underbrace{\mathbb{E}_q \left[ \log p(x| z, \theta) \right] \rule[-4pt]{0pt}{5pt}}_{\text{autoencoding term}} . 
\end{equation} In many cases of practical interest, such as Gaussian $q(z|x)$ and $p(z)$, the KL term of Equation~\ref{eq:original-VAE} can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior $q(z|x)$ can be drawn using a differentiable, deterministic function $f(x, \phi, \rho)$ of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables $\rho \sim D$. For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, $\mathcal{N}\left(m(x, \phi), v(x, \phi) \right)$, using $f(x, \phi, \rho) = m(x, \phi) + \sqrt{v(x, \phi)} \cdot \rho$, where $\rho \sim \mathcal{N}\left(0, 1 \right)$. When such an $f(x, \phi, \rho)$ exists, \begin{equation} \frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x, \phi)}\left[ \log p(x|z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \phi} \log p(x | f(x, \rho, \phi), \theta) . \label{eq:vae-sampling} \end{equation} The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix~\ref{multivariable-VAE-inverse-CDF-section}, where we find that an analog of Equation~\ref{eq:vae-sampling} holds. Specifically, $\mathcal{D}_i$ is the uniform distribution between $0$ and $1$, and \begin{equation} \label{eq:inv-cdf} f(x) = \mathbf{\G}^{-1}(x) , \end{equation} where $\mathbf{\G}$ is the conditional-marginal cumulative distribution function (CDF) defined by: \begin{equation} \label{conditional-marginal-cdf-main-text} \G_i(\mathbf{x}) = \int_{x_i' = -\infty}^{x_i} p\left(x_i' | x_1, \dots, x_{i-1} \right) dx_i' . \end{equation} However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable. A formulation comparable to Equation~\ref{eq:vae-sampling} is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs)~\citep{smolensky1986parallel}: \begin{equation} \label{eq:RBM-distribution} p(z) = \frac{1}{\mathcal{Z}_p} e^{-E_p(z)} = \frac{1}{\mathcal{Z}_p} \cdot e^{\left(z^\top \J z + \h^{\top} z \right)} , \end{equation} where $z \in \left\{0, 1 \right\}^n$, $\mathcal{Z}_p$ is the partition function of $p(z)$, and the lateral connection matrix $\J$ is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval $[0,1]$. The domain of the inverse CDF is thus also a proper subset of $[0,1]$, and its derivative is not defined, as required in Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf}.\footnote{ This problem remains even if we use the quantile function, $F_p^{-1}(\rho) = \inf \left\{ z \in \mathbb{R} : \int_{z' = -\infty}^z p(z') \, dz' \geq \rho \right\} ,$ the derivative of which is either zero or infinite if $p$ is a discrete distribution. } In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,\footnote{Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model.
In contrast to a traditional RBM, there is no distinction between the ``visible'' units and the ``hidden'' units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome ``fully hidden bipartite Boltzmann machine.''} followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation~\ref{eq:vae-sampling}, including backpropagation through its discrete latent variables. \subsection{Related work} Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders~\citep{burda2016importance}, Hamiltonian variational inference~\citep{salimans2015markov}, normalizing flows~\citep{rezende2015variational}, and variational Gaussian processes~\citep{tran2016variational} improve the approximation to the posterior distribution. Ladder variational autoencoders~\citep{sonderby2016ladder} increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling~\citep{du2015learning} and reweighted wake-sleep~\citep{bornschein2015reweighted} use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions~\citep{johnson2016composing}. It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance \citep{paisley2012variational}. The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature \citep{mnih2014neural, williams1992simple, mnih2016variational}, which we discuss in greater detail in Appendix~\ref{REINFORCE-appendix}. Prior efforts by~\citet{makhzani2015adversarial} to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal~\citep{salimans2016structured}. \citet{graves2016stochastic} computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units. The generative model underlying the discrete variational autoencoder resembles a deep belief network~\citep[DBN;][]{hinton2006fast}. A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer $j$ receives connections from all previous layers $i < j$, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. 
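For concreteness, the low-variance pathwise estimator of Equation~\ref{eq:vae-sampling}, on which the discrete VAE builds, can be sketched in a few lines of Python for the Gaussian case of Section~\ref{sec:ELBO}. This is a minimal illustration rather than part of the model; the function name and the decoder-gradient callback \texttt{grad\_log\_p\_wrt\_z} (standing in for $\partial \log p(x|z,\theta) / \partial z$) are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def reparam_gradient(x, m, v, grad_log_p_wrt_z, n_samples=10):
    """Pathwise estimate of the gradient of E_{q(z|x)}[log p(x|z)] with
    respect to the mean m and log-variance log(v) of q(z|x) = N(m, v),
    using z = m + sqrt(v) * rho with rho ~ N(0, 1)."""
    grad_m, grad_logv = np.zeros_like(m), np.zeros_like(v)
    for _ in range(n_samples):
        rho = np.random.randn(*m.shape)   # parameter-independent noise
        z = m + np.sqrt(v) * rho          # deterministic, differentiable f(x, phi, rho)
        g = grad_log_p_wrt_z(x, z)        # d log p(x|z) / dz, supplied by the decoder
        grad_m += g / n_samples                               # dz/dm = 1
        grad_logv += g * 0.5 * np.sqrt(v) * rho / n_samples   # dz/d log(v) = sqrt(v)*rho/2
    return grad_m, grad_logv

# Example with a unit-variance Gaussian decoder p(x|z) = N(x; z, 1),
# for which d log p(x|z) / dz = x - z.
x, m, v = np.array([0.5]), np.array([0.0]), np.array([1.0])
print(reparam_gradient(x, m, v, lambda x, z: x - z))
\end{verbatim}
Gradients with respect to the parameters $\phi$ of $m(x, \phi)$ and $v(x, \phi)$ then follow by ordinary backpropagation through the networks that produce $m$ and $v$.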
However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound. \section{Backpropagating through discrete latent variables by adding continuous latent variables} \label{sec:continuous-variable} When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation~\ref{conditional-marginal-cdf-main-text} and Appendix~\ref{multivariable-VAE-inverse-CDF-section}) by augmenting the latent representation with a set of continuous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf}. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter the fundamental form of the model, or the KL term of Equation~\ref{eq:original-VAE}; rather, it can be interpreted as adding a noisy nonlinearity, like dropout~\citep{srivastava2014dropout} or batch normalization with a small minibatch~\citep{ioffe2015batch}, to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix~\ref{augment-discrete-with-continuous-section}. \begin{figure}[tb] \centering \subfloat[Approximating posterior $q(\zeta, z | x)$]{ \tikzsetnextfilename{fig-basic-graphical-model-approx-post} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Z1) {$z_1$}; \node[state] (Z2) [xshift=0.5cm,right of=Z1] {$z_2$}; \node[state] (Z3) [xshift=0.5cm,right of=Z2] {$z_3$}; \node[state] (X) [above of=Z1] {$x$}; \node[state] (zeta1) [below of=Z1] {$\zeta_1$}; \node[state] (zeta2) [below of=Z2] {$\zeta_2$}; \node[state] (zeta3) [below of=Z3] {$\zeta_3$}; \node[state, draw=none] (Xinv) [below of=zeta3] {}; \path (X) edge (Z1) edge (Z2) edge (Z3) (Z1) edge (zeta1) (Z2) edge (zeta2) (Z3) edge (zeta3); \end{tikzpicture}\label{fig:basic-graphical-model-approx-post}}\hfill \subfloat[Prior $p(x, \zeta, z)$] { \tikzsetnextfilename{fig-basic-graphical-model-prior} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Z1) {$z_1$}; \node[state] (Z2) [xshift=0.5cm,right of=Z1] {$z_2$}; \node[state] (Z3) [xshift=0.5cm,right of=Z2] {$z_3$}; \node[state] (zeta1) [below of=Z1] {$\zeta_1$}; \node[state] (zeta2) [below of=Z2] {$\zeta_2$}; \node[state] (zeta3) [below of=Z3] {$\zeta_3$}; \node[state] (X) [below of=zeta3] {$x$}; \path (Z1) edge [>=,shorten >=0] (Z2) (Z1) edge [>=,bend left,shorten >=0] (Z3) (Z2) edge [>=,shorten >=0] (Z3) (Z1) edge (zeta1) (Z2) edge (zeta2) (Z3) edge (zeta3) (zeta1) edge (X) (zeta2) edge (X) (zeta3) edge (X); \end{tikzpicture}\label{fig:basic-graphical-model-prior}}\hfill \subfloat[Autoencoding term]{ \tikzsetnextfilename{fig-basic-autoencoder-loop} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Xstart) {$x$}; \node[state] (Z) [below of=Xstart] {$q$}; \node[state] (rho)
[yshift=-0.4cm, right of=Z] {$\rho$}; \node[state] (zeta) [below of=Z] {$\zeta$}; \node[state] (Xend) [below of=zeta] {$x$}; \path (Xstart) edge node {$q(z=1|x,\phi)$} (Z) (Z) edge (zeta) (rho) edge [bend right=54] node[yshift=-0.2cm] {$\mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho)$} (zeta) (zeta) edge node {$p(x|\zeta, \phi)$} (Xend); \end{tikzpicture}\label{fig:basic-autoencoder-loop}}\caption{Graphical models of the smoothed approximating posterior (a) and prior (b), and the network realizing the autoencoding term of the ELBO from Equation~\ref{eq:original-VAE} (c). Continuous latent variables~$\zeta_i$ are smoothed analogs of discrete latent variables $z_i$, and insulate $z$ from the observed variables $x$ in the prior (b). This facilitates the marginalization of the discrete $z$ in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input $\rho \sim U[0,1]$.} \end{figure} Specifically, as shown in Figure~\ref{fig:basic-graphical-model-approx-post}, we augment the latent representation in the approximating posterior with continuous random variables $\zeta$,\footnote{We always use a variant of $z$ for latent variables. This is zeta, or Greek $z$. The discrete latent variables $z$ can conveniently be thought of as English $z$.} conditioned on the discrete latent variables $z$ of the RBM: \begin{align*} q(\zeta, z | x, \phi) &= r(\zeta | z) \cdot q(z | x, \phi), \qquad \text{where} \\ r(\zeta | z) &= \prod_i r(\zeta_i | z_i) . \end{align*} The support of $r(\zeta | z)$ for all values of $z$ must be connected, so the marginal distribution \mbox{$q(\zeta | x, \phi ) = \sum_z r(\zeta | z) \cdot q(z | x, \phi)$} has a constant, connected support so long as $0 < q(z | x, \phi) < 1$. We further require that $r(\zeta|z)$ is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of $q(\zeta | x, \phi)$ is differentiable in Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf}, as we discuss in Appendix~\ref{multivariable-VAE-inverse-CDF-section}. As shown in Figure~\ref{fig:basic-graphical-model-prior}, we correspondingly augment the prior with $\zeta$: \begin{equation*} p(\zeta, z | \theta) = r(\zeta | z) \cdot p(z | \theta) , \end{equation*} where $r(\zeta | z)$ is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on $\zeta$: \begin{equation} \label{eq:final-decoder} p(x | \zeta, z, \theta) = p(x | \zeta, \theta) . \end{equation} The smoothing distribution $r(\zeta | z)$ transforms the model into a continuous function of the distribution over $z$, and allows us to use Equations~\ref{eq:original-VAE} and~\ref{eq:vae-sampling} directly to obtain low-variance stochastic approximations to the gradient. Given this expansion, we can simplify Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf} by dropping the dependence on $z$ and applying Equation~\ref{discrete-VAE-derivation} of Appendix~\ref{multivariable-VAE-inverse-CDF-section}, which generalizes Equation~\ref{eq:vae-sampling}: \begin{equation} \frac{\partial}{\partial \phi} \mathbb{E}_{q(\zeta, z | x, \phi)} \left[ \log p(x | \zeta, z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p \left( x | \mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho), \theta \right). 
\label{discrete-VAE-approx-zeta} \end{equation} If the approximating posterior is factorial, then each $\G_i$ is an independent CDF, without conditioning or marginalization. As we shall demonstrate in Section~\ref{sec:spike-and-exponential}, $\mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho)$ is a function of $q(z = 1 | x, \phi)$, where \mbox{$q(z = 1 | x, \phi)$} is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation~\ref{discrete-VAE-approx-zeta} is shown in Figure~\ref{fig:basic-autoencoder-loop}. Initially, input $x$ is passed into a deterministic feedforward network $q(z = 1 | x, \phi)$, for which the final nonlinearity is the logistic function. Its output $q$, along with an independent random variable~$\rho \sim U[0,1]$, is passed into the deterministic function \mbox{$\mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho)$} to produce a sample of $\zeta$. This $\zeta$, along with the original input~$x$, is finally passed to $\log p\left(x | \zeta, \theta \right)$. The expectation of this log probability with respect to $\rho$ is the autoencoding term of the VAE formalism, as in Equation~\ref{eq:original-VAE}. Moreover, conditioned on the input and the independent $\rho$, this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient. \subsection{Spike-and-exponential smoothing transformation} \label{sec:spike-and-exponential} As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary $z$ to continuous $\zeta$: \begin{align*} r(\zeta_i | z_i = 0) &= \begin{cases} \infty , & \text {if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} & F_{r(\zeta_i|z_i=0)}(\zeta') &= 1 \\ r(\zeta_i | z_i = 1) &= \begin{cases} \frac{\beta e^{\beta \zeta}}{e^{\beta} - 1}, & \text {if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} & F_{r(\zeta_i|z_i=1)}(\zeta') &= \left. \frac{e^{\beta \zeta}}{e^{\beta} - 1} \right|_0^{\zeta'} = \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} \end{align*} where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d \zeta$ is the CDF of probability distribution $p$ in the domain $\left[0,1\right]$. This transformation from $z_i$ to $\zeta_i$ is invertible: $\zeta_i = 0 \Leftrightarrow z_i = 0$, and $\zeta_i > 0 \Leftrightarrow z_i = 1$ almost surely.\footnote{In the limit $\beta \rightarrow \infty$, $\zeta_i = z_i$ almost surely, and the continuous variables $\zeta$ can effectively be removed from the model. This trick can be used after training with finite $\beta$ to produce a model without smoothing variables $\zeta$.} We can now find the CDF for $q(\zeta | x, \phi)$ as a function of $q(z = 1 | x, \phi)$ in the domain $\left[0,1\right]$, marginalizing out the discrete $z$: \begin{align*} F_{q(\zeta | x, \phi)}(\zeta') &= (1 - q(z=1 | x, \phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') + q(z=1 | x, \phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') \\ &= q(z=1 | x, \phi) \cdot \left( \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1 \right) + 1 . 
\end{align*} To evaluate the autoencoder of Figure~\ref{fig:basic-autoencoder-loop}, and through it the gradient approximation of Equation~\ref{discrete-VAE-approx-zeta}, we must invert the conditional-marginal CDF $F_{q(\zeta | x, \phi)}$: \begin{equation} \label{eq:probabilistic-encoder} \G_{q(\zeta | x, \phi)}^{-1}(\rho) = \begin{cases} \frac{1}{\beta} \cdot \log \left[ \left(\frac{\rho + q - 1}{q} \right) \cdot \left(e^{\beta} - 1 \right) + 1 \right], & \text{if } \rho \geq 1-q \\ 0, & \text{otherwise} \end{cases} \end{equation} where we use the substitution $q(z = 1 | x, \phi) \rightarrow q$ to simplify notation. For all values of the independent random variable~$\rho \sim U[0,1]$, the function $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ rectifies the input $q(z = 1 | x, \phi)$ if $q \leq 1 - \rho$ in a manner analogous to a rectified linear unit (ReLU), as shown in Figure~\ref{fig:inverse-cdf-spike-and-exp}. It is also quasi-sigmoidal, in that $\G^{-1}$ is increasing but concave-down if $q > 1 - \rho$. The effect of $\rho$ on $\G^{-1}$ is qualitatively similar to that of dropout~\citep{srivastava2014dropout}, depicted in Figure~\ref{fig:relu-and-dropout}, or the noise injected by batch normalization~\citep{ioffe2015batch} using small minibatches, shown in Figure~\ref{fig:relu-and-batch-norm}. \begin{figure}[tbh] \centering \subfloat[Spike-and-exp, $\beta \in \left\{ 1, 3, 5 \right\}$]{ \tikzsetnextfilename{fig-inverse-cdf-spike-and-exp} \begin{tikzpicture}\begin{axis}[ height=4.5cm, xlabel={$q(z = 1 | x, \phi)$}, y label style={at={(axis description cs:0.2,.5)},rotate=0,anchor=south}, ylabel={$\G_{q(\zeta | x, \phi)}^{-1}(\rho); \, f(x, \rho)$}, samples=200, xmin=0.0, xmax=1.0, ymin=-0.1, ymax=1.0]\addplot[red,dotted,thick,domain=1e-4:1.0] {1/1 * ln(((0.2 + x - 1)/x) * (exp(1) - 1) + 1 + 1e8*(0.2 < 1 - x)) * (0.2 >= 1 - x)}; \addplot[red,smooth,thick,domain=1e-4:1.0] {1/3 * ln(((0.2 + x - 1)/x) * (exp(3) - 1) + 1 + 1e8*(0.2 < 1 - x)) * (0.2 >= 1 - x)}; \addplot[red,dashed,thick,domain=1e-4:1.0] {1/5 * ln(((0.2 + x - 1)/x) * (exp(5) - 1) + 1 + 1e8*(0.2 < 1 - x)) * (0.2 >= 1 - x)}; \node[left, red] at (axis cs:0.82, 0.1) {\scriptsize $\rho = 0.2$};\addplot[green,dotted,thick,domain=1e-4:1.0] {1/1 * ln(((0.5 + x - 1)/x) * (exp(1) - 1) + 1 + 1e8*(0.5 < 1 - x)) * (0.5 >= 1 - x)}; \addplot[green,smooth,thick,domain=1e-4:1.0] {1/3 * ln(((0.5 + x - 1)/x) * (exp(3) - 1) + 1 + 1e8*(0.5 < 1 - x)) * (0.5 >= 1 - x)}; \addplot[green,dashed,thick,domain=1e-4:1.0] {1/5 * ln(((0.5 + x - 1)/x) * (exp(5) - 1) + 1 + 1e8*(0.5 < 1 - x)) * (0.5 >= 1 - x)}; \node[left, green] at (axis cs:0.51, 0.1) {\scriptsize $\rho = 0.5$};\addplot[blue,dotted,thick,domain=1e-4:1.0] {1/1 * ln(((0.8 + x - 1)/x) * (exp(1) - 1) + 1 + 1e8*(0.8 < 1 - x)) * (0.8 >= 1 - x)}; \addplot[blue,smooth,thick,domain=1e-4:1.0] {1/3 * ln(((0.8 + x - 1)/x) * (exp(3) - 1) + 1 + 1e8*(0.8 < 1 - x)) * (0.8 >= 1 - x)}; \addplot[blue,dashed,thick,domain=1e-4:1.0] {1/5 * ln(((0.8 + x - 1)/x) * (exp(5) - 1) + 1 + 1e8*(0.8 < 1 - x)) * (0.8 >= 1 - x)}; \node[left, blue] at (axis cs:0.3, 0.85) {\scriptsize $\rho = 0.8$};\end{axis}\end{tikzpicture}\label{fig:inverse-cdf-spike-and-exp}}\subfloat[ReLU with dropout]{ \tikzsetnextfilename{fig-relu-and-dropout} \begin{tikzpicture}\begin{axis}[ height=4.5cm, xlabel={$x$ \vphantom{$(q)$}}, yticklabels={,,}, samples=200, xmin=-1.0, xmax=1.0, ymin=-0.1, ymax=1.0]\addplot[red,smooth,thick,domain=-1.0:1.0] {0} node[pos=0.97, yshift=0.3cm](inactive){};\node[left, red] at (inactive) {\scriptsize $\rho < 
0.5$};\addplot[blue,smooth,thick,domain=-1.0:1.0] {x * (x >= 0)} node[pos=0.8, xshift=-0.1cm](active){};\node[left, blue] at (active) {\scriptsize $\rho \geq 0.5$};\end{axis}\end{tikzpicture}\label{fig:relu-and-dropout}}\subfloat[ReLU with batch norm]{ \tikzsetnextfilename{fig-relu-and-batch-norm} \begin{tikzpicture}\begin{axis}[ height=4.5cm, xlabel={$x$ \vphantom{$(q)$}}, yticklabels={,,}, samples=200, xmin=-1.0, xmax=1.0, ymin=-0.1, ymax=1.0]\addplot[red,dotted,thick,domain=-1.0:1.0] {(x - 0.3) * ((x - 0.3) >= 0)}; \addplot[red,dashed,thick,domain=-1.0:1.0] {(x + 0.3) * ((x + 0.3) >= 0)} node[pos=0.65, yshift=0.3cm](shift){}node[pos=0.55, yshift=0.3cm](scale){}node[pos=0.45, yshift=0.3cm](nonoise){};\node[left, red] at (shift) {\scriptsize $x \pm 0.3$};\addplot[green,dotted,thick,domain=-1.0:1.0] {(0.7 * x) * ((0.7 * x) >= 0)}; \addplot[green,dashed,thick,domain=-1.0:1.0] {(1.3 * x) * ((1.3 * x) >= 0)}; \node[left, green] at (scale) {\scriptsize $x \cdot (1 \pm 0.3)$};\addplot[blue,smooth,thick,domain=-1.0:1.0] {x * (x >= 0)}; \node[left, blue] at (nonoise) {\scriptsize no noise};\end{axis}\end{tikzpicture}\label{fig:relu-and-batch-norm}}\caption{Inverse CDF of the spike-and-exponential smoothing transformation for \mbox{$\rho \in \left\{0.2, 0.5, 0.8\right\}$}; \mbox{$\beta = 1$ (dotted),} \mbox{$\beta = 3$ (solid),} and \mbox{$\beta = 5$ (dashed)} (a). Rectified linear unit with dropout rate $0.5$~(b). Shift (red) and scale (green) noise from batch normalization; with magnitude $0.3$~(dashed), $-0.3$~(dotted), or $0$~(solid blue); before a rectified linear unit~(c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ from Figure~\ref{fig:basic-autoencoder-loop}, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).} \label{fig:inverse-cdf} \end{figure} Other expansions to the continuous space are possible. In Appendix~\ref{two-sided-mixture}, we consider the case where both $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$ are linear functions of $\zeta$; in Appendix~\ref{sec:spike-and-slab}, we develop a spike-and-slab transformation; and in Appendix~\ref{input-dependent-tranformation-section}, we explore a spike-and-Gaussian transformation where the continuous $\zeta$ is directly dependent on the input $x$ in addition to the discrete $z$. \section{Accommodating explaining-away with a hierarchical approximating posterior} \label{sec:gradient-of-KL-divergence} When a probabilistic model is defined in terms of a prior distribution~$p(z)$ and a conditional distribution~$p(x|z)$, the observation of~$x$ often induces strong correlations in the posterior~$p(z|x)$ due to phenomena such as explaining-away~\citep{pearl1988probabilistic}. Moreover, we wish to use an RBM as the prior distribution (Equation~\ref{eq:RBM-distribution}), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions~(e.g., mean-field methods, but also~\citet{kingma2014auto, rezende2014stochastic}). To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior~$q(z|x)$ over the discrete latent variables.
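Each unit of this hierarchy reuses the smoothing nonlinearity of Section~\ref{sec:spike-and-exponential}. As a minimal sketch (not part of the model specification), the inverse CDF of Equation~\ref{eq:probabilistic-encoder} can be implemented directly; here \texttt{q} denotes $q(z = 1 | x, \phi)$, \texttt{rho} the uniform noise $\rho$, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def spike_and_exp_inverse_cdf(q, rho, beta=3.0):
    """Smoothing nonlinearity zeta = F^{-1}_{q(zeta|x,phi)}(rho) for the
    spike-and-exponential transformation: zeta = 0 when rho < 1 - q (the
    spike at zero is selected), and otherwise follows the exponential
    branch on [0, 1].  q = q(z=1|x,phi) in (0, 1]; rho ~ U[0, 1]."""
    q = np.clip(q, 1e-7, 1.0)                 # guard against division by zero
    active = rho >= 1.0 - q                   # units not rectified to zero
    arg = ((rho + q - 1.0) / q) * np.expm1(beta) + 1.0
    return np.where(active, np.log(np.maximum(arg, 1.0)) / beta, 0.0)

# For small q the unit is usually rectified to zeta = 0, analogous to a
# ReLU operating below threshold; for q near 1 it passes a value in [0, 1].
q = np.array([0.05, 0.5, 0.95])
rho = np.random.rand(3)
print(spike_and_exp_inverse_cdf(q, rho))
\end{verbatim}
Because this function is differentiable with respect to \texttt{q} wherever its output is nonzero, gradients with respect to the parameters $\phi$ producing $q(z = 1 | x, \phi)$ follow by backpropagation, as in Equation~\ref{discrete-VAE-approx-zeta}.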
Specifically, we divide the latent variables $z$ of the RBM into disjoint groups, $z_1, \dots, z_k$,\footnote{The continuous latent variables $\zeta$ are divided into complementary disjoint groups $\zeta_1, \dots, \zeta_k$.} and define the approximating posterior via a directed acyclic graphical model over these groups: \begin{align} q(z_1, \zeta_1, \dots, z_k, \zeta_k | x, \phi) &= \prod_{1 \leq j \leq k} r(\zeta_j | z_j) \cdot q\left(z_j | \zeta_{i<j}, x, \phi \right) , \end{align} where each factor $q\left(z_j = 1 | \zeta_{i<j}, x, \phi \right)$ is a deterministic probability value computed by a parameterized function of the input $x$ and the preceding $\zeta_{i<j}$, such as a neural network with a final logistic nonlinearity, analogous to Section~\ref{sec:continuous-variable}. The resulting graphical model is shown in Figure~\ref{fig:hierarchical-approx-post}, and the corresponding autoencoding network in Figure~\ref{fig:hierarchical-autoencoding-loop}. \begin{figure}[tb] \centering \subfloat[Hierarchical approximating posterior $q(\zeta, z | x)$]{ \tikzsetnextfilename{fig-hierarchical-approx-post} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Z1) {$z_1$}; \node[state] (Z2) [xshift=0.2cm,right of=Z1] {$z_2$}; \node[state] (Z3) [xshift=0.2cm,right of=Z2] {$z_3$}; \node[state] (X) [above of=Z1] {$x$}; \node[state] (zeta1) [below of=Z1] {$\zeta_1$}; \node[state] (zeta2) [below of=Z2] {$\zeta_2$}; \node[state] (zeta3) [below of=Z3] {$\zeta_3$}; \node[state, draw=none] (Xinv) [below of=zeta3] {}; \path (X) edge (Z1) edge (Z2) edge (Z3) (Z1) edge (zeta1) (Z2) edge (zeta2) (Z3) edge (zeta3) (zeta1) edge (Z2) (zeta2) edge (Z3); \draw [->] (zeta1) to [out=-30,in=180] ($(zeta2) - (0,0.7)$) to [out=0, in=-115] (Z3); \end{tikzpicture}\label{fig:hierarchical-approx-post}}\hfill \subfloat[Hierarchical ELBO autoencoding term] { \tikzsetnextfilename{fig-hierarchical-autoencoding-loop} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Z1) {$q_1$}; \node[state] (Z2) [xshift=1.0cm,right of=Z1] {$q_2$}; \node[state] (Z3) [xshift=1.0cm,right of=Z2] {$q_3$}; \node[state] (Xstart) [above of=Z1] {$x$}; \node[state] (zeta1) [yshift=-0.2cm, below of=Z1] {$\zeta_1$}; \node[state] (zeta2) [yshift=-0.2cm, below of=Z2] {$\zeta_2$}; \node[state] (zeta3) [yshift=-0.2cm, below of=Z3] {$\zeta_3$}; \node[state] (rho1) [xshift=0.5cm, yshift=0.6cm, left of=zeta1] {$\rho$}; \node[state] (rho2) [xshift=0.5cm, yshift=0.6cm, left of=zeta2] {$\rho$}; \node[state] (rho3) [xshift=0.5cm, yshift=0.6cm, left of=zeta3] {$\rho$}; \node[state] (Xend) [below of=zeta3] {$x$}; \path (Xstart) edge node {$q$} (Z1) edge node[pos=0.81] {$q$} (Z2) edge node[pos=0.83] {$q(z_3=1|\zeta_{i<3}, x, \phi)$} (Z3) (Z1) edge node[pos=0.3] {$\mathbf{\G}^{-1}$} (zeta1) (Z2) edge node[pos=0.3] {$\mathbf{\G}^{-1}$} (zeta2) (Z3) edge node[pos=0.4] {$\mathbf{\G}_{q_3(\zeta_3 | \zeta_{i<3}, x, \phi)}^{-1}(\rho)$} (zeta3) (rho1) edge [bend left=58] (zeta1) (rho2) edge [bend left=58] (zeta2) (rho3) edge [bend left=58] (zeta3) (zeta3) edge node {$p(x | \zeta, \phi)$} (Xend); \draw [->] (zeta1) to [out=30,in=-110] ($(rho2) - (0.7,0)$) to [out=80, in=148] (Z2); \draw [->] (zeta2) to [out=30,in=-90] ($(rho3) - (0.8,0)$) to [out=80, in=162] (Z3); \draw [->] (zeta1) to [out=-30,in=190] ($(zeta2) - (0,0.7)$) to [out=10,in=-100] ($(rho3) - (0.6,0)$) to [out=80, in=162] (Z3); \draw [->] (zeta1) to [out=-60,in=-170] ($(zeta2) - (0,1.3)$) to [out=10,in=90] (Xend); \draw [->] (zeta2) to [out=-40,in=90] (Xend); \end{tikzpicture}\label{fig:hierarchical-autoencoding-loop}}\caption{Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation~\ref{eq:original-VAE}.
Discrete latent variables $z_j$ only depend on the previous $z_{i<j}$ through their smoothed analogs $\zeta_{i<j}$, so all operations in (b) remain deterministic and differentiable given the independent stochastic input $\rho \sim U[0,1]$.} \end{figure} \begin{figure}[tb] \centering \hspace*{\fill} \subfloat[Approx post w/ cont latent vars $q(\hz, \zeta, z | x)$]{ \tikzsetnextfilename{fig-full-graphical-model-approx-post} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Z) {$z$}; \node[state] (zeta) [yshift=-0.0cm,below of=Z] {$\zeta$}; \node[state] (hz1) [xshift=0.0cm,right of=Z] {$\hz_1$}; \node[state] (hz2) [xshift=0.0cm,right of=hz1] {$\hz_2$}; \node[state] (hz3) [xshift=0.0cm,right of=hz2] {$\hz_3$}; \node[state] (X) [above of=Z] {$x$}; \node[state, draw=none] (Xinv) [yshift=-1.0cm, below of=hz3] {}; \path (X) edge (Z) edge [bend left=5] (hz1) edge [bend left=10] (hz2) edge [bend left=15] (hz3) (Z) edge (zeta) (zeta) edge [bend right] (hz1) edge [bend right] (hz2) edge [bend right=50] (hz3) edge [bend right, draw=none] (Xinv) (hz1) edge [bend right] (hz2) edge [bend right=50] (hz3) (hz2) edge [bend right] (hz3); \begin{pgfonlayer}{background} \filldraw [line width=6mm,join=round,black!10] (Z.north -| zeta.east) rectangle (zeta.south -| Z.west); \end{pgfonlayer} \end{tikzpicture}\label{fig:full-graphical-model-approx-post}}\hfill \subfloat[Prior w/ cont latent vars $p(x, \hz, \zeta, z)$]{ \tikzsetnextfilename{fig-full-graphical-model-prior} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black] \node[state] (Z) {$z$}; \node[state] (zeta) [below of=Z] {$\zeta$}; \node[state] (hz1) [xshift=0.0cm,right of=Z] {$\hz_1$}; \node[state] (hz2) [xshift=0.0cm,right of=hz1] {$\hz_2$}; \node[state] (hz3) [xshift=0.0cm,right of=hz2] {$\hz_3$}; \node[state] (X) [yshift=-1.0cm, below of=hz3] {$x$}; \path (Z) edge (zeta) (zeta) edge [bend right] (hz1) edge [bend right] (hz2) edge [bend right=50] (hz3) edge [bend right] (X) (hz1) edge [bend right=20] (X) edge [bend right] (hz2) edge [bend right=50] (hz3) (hz2) edge (X) edge [bend right] (hz3) (hz3) edge [bend left=10] (X); \begin{pgfonlayer}{background} \filldraw [line width=6mm,join=round,black!10] (Z.north -| zeta.east) rectangle (zeta.south -| Z.west); \end{pgfonlayer} \end{tikzpicture}\label{fig:full-graphical-model-prior}} \hspace*{\fill}\caption{Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures~\ref{fig:hierarchical-approx-post} and~\ref{fig:basic-graphical-model-prior} respectively. The continuous latent variables $\hz$ build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables $z$, which can represent the discrete types of objects in the image.} \label{fig:full-graphical-model} \end{figure} The directed graphical models of the approximating posterior and prior are defined by: \begin{align} q(\hz_0, \dots, \hz_n | x, \phi) &= \prod_{0 \leq m \leq n} q\left(\hz_m | \hz_{l