|
\documentclass{article} \pdfoutput=1 |
|
\usepackage{iclr2017_conference,times} |
|
\usepackage{url} |
|
|
|
|
|
|
|
\usepackage{graphicx} |
|
\usepackage{subfig} |
|
\usepackage{amsmath} |
|
\usepackage{amssymb} |
|
\usepackage{verbatim} |
|
\usepackage[multiple]{footmisc} |
|
\usepackage{booktabs} |
|
\usepackage{pgfplots} |
|
\usepackage{tikz} |
|
\usetikzlibrary{arrows,automata,calc,backgrounds} |
|
\usepackage{array} |
|
|
|
\pgfplotsset{mystyle/.append style={axis x line=middle, axis y line=middle, xlabel={$x$}, ylabel={$y$}, axis equal }} |
|
\usepgfplotslibrary{external,statistics} \usetikzlibrary{pgfplots.external} \tikzexternalize |
|
|
|
|
|
|
|
|
|
|
|
\title{Discrete Variational Autoencoders} |
|
|
|
\author{Jason Tyler Rolfe \\ |
|
D-Wave Systems \\ |
|
Burnaby, BC V5G-4M9, Canada \\ |
|
\texttt{jrolfe@dwavesys.com} |
|
} |
|
|
|
\date{\today} |
|
|
|
\def\etal{{\textit{et~al.~}}} |
|
\def\normal{\mathcal{N}} |
|
\def\KL{\text{KL}} |
|
\def\cov{\text{cov}} |
|
\def\erf{\text{erf}} |
|
\def\sign{\text{sign}} |
|
\def\hz{\mathfrak{z}} |
|
\def\G{F} |
|
\def\J{W} |
|
\def\h{b} |
|
|
|
\iclrfinalcopy |
|
|
|
|
|
|
|
\begin{document} |
|
|
|
\maketitle |
|
|
|
\begin{abstract} |
|
|
|
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. |
|
We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. |
|
The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. |
|
As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and \mbox{Caltech-101} Silhouettes datasets. |
|
|
|
\end{abstract} |
|
|
|
|
|
|
|
|
|
\section{Introduction} |
|
|
|
Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification~\citep{hinton2006fast, salakhutdinov2009deep, rasmus2015semi}. |
|
Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difficult to directly transform the image of a person to one of a car while remaining on the manifold of natural images. |
|
|
|
It would be natural to represent the space within each disconnected component with continuous variables, and the selection amongst these components with discrete variables. In contrast, most state-of-the-art probabilistic models use exclusively discrete variables --- as do DBMs~\citep{salakhutdinov2009deep}, NADEs~\citep{larochelle2011neural}, sigmoid belief networks~\citep{spiegelhalter1990sequential, bornschein2016bidirectional}, and DARNs~\citep{gregor2014deep} --- or exclusively continuous variables --- as do VAEs~\citep{kingma2014auto, rezende2014stochastic} and GANs~\citep{goodfellow2014generative}.\footnote{Spike-and-slab RBMs~\citep{courville2011unsupervised} use both discrete and continuous latent variables.} Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible~\citep{bengio2013estimating, raiko2015techniques}. |
|
|
|
We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. |
|
We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations. |
|
Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them \emph{discrete variational autoencoders} (discrete VAEs). |
|
|
|
|
|
\subsection{Variational autoencoders are incompatible with discrete distributions} \label{sec:ELBO} |
|
|
|
Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable~\citep{long2010restricted}, as is sampling from the posterior of a directed graphical model conditioned on its leaf variables~\citep{dagum1993approximating}. |
|
|
|
In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood \citep{jordan1999introduction}, such as the evidence lower bound~\citep[ELBO, $\mathcal{L}(x, \theta, \phi)$;][]{hinton1994amd}: |
|
\begin{equation} |
|
\mathcal{L}(x, \theta, \phi) = \log p(x | \theta) - \KL[q(z | x, \phi)||p(z|x,\theta)], \label{variational-inference-equation} |
|
\end{equation} |
|
where $q(z | x, \phi)$ is a computationally tractable approximation to the posterior distribution $p(z | x, \theta)$. We denote the observed random variables by $x$, the latent random variables by $z$, the parameters of the generative model by $\theta$, and the parameters of the approximating posterior by $\phi$. |
|
The variational autoencoder~\citep[VAE;][]{kingma2014auto, rezende2014stochastic, kingma2014semi} regroups the evidence lower bound of Equation~\ref{variational-inference-equation} as: |
|
\begin{equation} \label{eq:original-VAE} |
|
\mathcal{L}(x, \theta, \phi) = |
|
-\underbrace{\KL\left[q(z|x, \phi) || p(z|\theta) \right] \rule[-4pt]{0pt}{5pt}}_{\text{KL term}} |
|
+ \underbrace{\mathbb{E}_q \left[ \log p(x| z, \theta) \right] \rule[-4pt]{0pt}{5pt}}_{\text{autoencoding term}} . |
|
\end{equation} |
|
In many cases of practical interest, such as Gaussian $q(z|x)$ and $p(z)$, the KL term of Equation~\ref{eq:original-VAE} can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior $q(z|x)$ can be drawn using a differentiable, deterministic function $f(x, \phi, \rho)$ of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables $\rho \sim \mathcal{D}$. For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, $\mathcal{N}\left(m(x, \phi), v(x, \phi) \right)$, using $f(x, \phi, \rho) = m(x, \phi) + \sqrt{v(x, \phi)} \cdot \rho$, where $\rho \sim \mathcal{N}\left(0, 1 \right)$.
|
When such an $f(x, \phi, \rho)$ exists, |
|
\begin{equation} |
|
\frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x, \phi)}\left[ \log p(x|z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \phi} \log p(x | f(x, \rho, \phi), \theta) . \label{eq:vae-sampling} |
|
\end{equation} |
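
For concreteness, the following sketch evaluates the estimator of Equation~\ref{eq:vae-sampling} on a toy model with $q(z|x, \phi) = \normal(\phi, 1)$ and $p(x|z) = \normal(z, 1)$, so that $f(x, \phi, \rho) = \phi + \rho$ and the exact gradient is $x - \phi$; the code is purely illustrative and is not part of our model specification.
\begin{verbatim}
import numpy as np

# Toy reparameterization: q(z|x,phi) = N(phi, 1), p(x|z) = N(z, 1).
# With z = f(x, phi, rho) = phi + rho and rho ~ N(0, 1), the chain rule
# gives d/dphi log p(x|z) = (x - z); averaging over rho estimates the
# gradient of the autoencoding term.
def reparam_grad(x, phi, n_samples=1000):
    rho = np.random.randn(n_samples)
    z = phi + rho                  # deterministic, differentiable in phi
    return np.mean(x - z)          # stochastic gradient estimate

print(reparam_grad(x=2.0, phi=0.5))   # close to the exact gradient 1.5
\end{verbatim}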
|
|
|
The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix~\ref{multivariable-VAE-inverse-CDF-section}, where we find that an analog of Equation~\ref{eq:vae-sampling} holds. Specifically, $\mathcal{D}_i$ is the uniform distribution between $0$ and $1$, and |
|
\begin{equation} \label{eq:inv-cdf} |
|
f(x) = \mathbf{\G}^{-1}(x) , |
|
\end{equation} |
|
where $\mathbf{\G}$ is the conditional-marginal cumulative distribution function (CDF) defined by: |
|
\begin{equation} \label{conditional-marginal-cdf-main-text} |
|
\G_i(\mathbf{x}) = \int_{x_i' = -\infty}^{x_i} p\left(x_i' | x_1, \dots, x_{i-1} \right) .
|
\end{equation} |
|
However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable. |
|
|
|
|
|
A formulation comparable to Equation~\ref{eq:vae-sampling} is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs)~\citep{smolensky1986parallel}: |
|
\begin{equation} \label{eq:RBM-distribution} |
|
p(z) = \frac{1}{\mathcal{Z}_p} e^{-E_p(z)} = \frac{1}{\mathcal{Z}_p} \cdot e^{\left(z^\top \J z + \h^{\top} z \right)} , |
|
\end{equation} |
|
where $z \in \left\{0, 1 \right\}^n$, $\mathcal{Z}_p$ is the partition function of $p(z)$, and the lateral connection matrix $\J$ is triangular. |
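
As a point of reference, the unnormalized log-probability of Equation~\ref{eq:RBM-distribution} is straightforward to evaluate; the sketch below uses illustrative parameters, and the intractable partition function $\mathcal{Z}_p$ is not computed.
\begin{verbatim}
import numpy as np

# log p(z) up to the (intractable) log partition function: z'Wz + b'z,
# with binary z and triangular lateral connections W (no self-connections).
def rbm_unnorm_logprob(z, W, b):
    return z @ W @ z + b @ z

rng = np.random.RandomState(0)
n = 8
W = np.triu(0.1 * rng.randn(n, n), k=1)
b = 0.1 * rng.randn(n)
z = rng.randint(0, 2, size=n).astype(float)
print(rbm_unnorm_logprob(z, W, b))
\end{verbatim}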
|
Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval $[0,1]$. The domain of the inverse CDF is thus also a proper subset of $[0,1]$, and its derivative is not defined, as required in Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf}.\footnote{
|
This problem remains even if we use the quantile function, |
|
$F_p^{-1}(\rho) = \inf \left\{ z \in \mathbb{R} : \int_{z' = -\infty}^z p(z') \geq \rho \right\} ,$ |
|
the derivative of which is either zero or infinite if $p$ is a discrete distribution. |
|
} |
|
|
|
In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,\footnote{Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the ``visible'' units and the ``hidden'' units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome ``fully hidden bipartite Boltzmann machine.''} followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in
|
Equation~\ref{eq:vae-sampling}, including backpropagation through its discrete latent variables. |
|
|
|
|
|
\subsection{Related work} |
|
|
|
Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. |
|
Importance weighted autoencoders~\citep{burda2016importance}, Hamiltonian variational inference~\citep{salimans2015markov}, normalizing flows~\citep{rezende2015variational}, and variational Gaussian processes~\citep{tran2016variational} improve the approximation to the posterior distribution. Ladder variational autoencoders~\citep{sonderby2016ladder} increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling~\citep{du2015learning} and reweighted wake-sleep~\citep{bornschein2015reweighted} use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions~\citep{johnson2016composing}. |
|
|
|
It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance \citep{paisley2012variational}. The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature \citep{mnih2014neural, williams1992simple, mnih2016variational}, |
|
which we discuss in greater detail in Appendix~\ref{REINFORCE-appendix}. |
|
|
|
Prior efforts by~\citet{makhzani2015adversarial} to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. |
|
Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal~\citep{salimans2016structured}. \citet{graves2016stochastic} computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units. |
|
|
|
The generative model underlying the discrete variational autoencoder resembles a deep belief network~\citep[DBN;][]{hinton2006fast}. A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer $j$ receives connections from all previous layers $i < j$, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound. |
|
|
|
|
|
|
|
\section{Backpropagating through discrete latent variables by adding continuous latent variables} \label{sec:continuous-variable} |
|
|
|
When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation~\ref{conditional-marginal-cdf-main-text} and Appendix~\ref{multivariable-VAE-inverse-CDF-section}) by augmenting the latent representation with a set of continuous random variables.
|
The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf}. |
|
We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. |
|
This does not alter the fundamental form of the model, or the KL term of Equation~\ref{eq:original-VAE}; rather, it can be interpreted as adding |
|
a noisy nonlinearity, like dropout~\citep{srivastava2014dropout} or batch normalization with a small minibatch~\citep{ioffe2015batch}, |
|
to each latent variable in the approximating posterior and the prior. |
|
The conceptual motivation for this approach is discussed in Appendix~\ref{augment-discrete-with-continuous-section}. |
|
|
|
|
|
\begin{figure}[tb] |
|
\centering |
|
\subfloat[Approximating posterior $q(\zeta, z | x)$]{ |
|
\tikzsetnextfilename{fig-basic-graphical-model-approx-post} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, |
|
semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
|
|
\node[state] (Z1) {$z_1$}; |
|
\node[state] (Z2) [xshift=0.5cm,right of=Z1] {$z_2$}; |
|
\node[state] (Z3) [xshift=0.5cm,right of=Z2] {$z_3$}; |
|
\node[state] (X) [above of=Z1] {$x$}; |
|
\node[state] (zeta1) [below of=Z1] {$\zeta_1$}; |
|
\node[state] (zeta2) [below of=Z2] {$\zeta_2$}; |
|
\node[state] (zeta3) [below of=Z3] {$\zeta_3$}; |
|
\node[state, draw=none] (Xinv) [below of=zeta3] {}; |
|
|
|
\path (X) edge (Z1) |
|
edge (Z2) |
|
edge (Z3) |
|
(Z1) edge (zeta1) |
|
(Z2) edge (zeta2) |
|
(Z3) edge (zeta3); |
|
\end{tikzpicture}\label{fig:basic-graphical-model-approx-post}}\hfill \subfloat[Prior $p(x, \zeta, z)$] { |
|
\tikzsetnextfilename{fig-basic-graphical-model-prior} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, |
|
semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
|
|
\node[state] (Z1) {$z_1$}; |
|
\node[state] (Z2) [xshift=0.5cm,right of=Z1] {$z_2$}; |
|
\node[state] (Z3) [xshift=0.5cm,right of=Z2] {$z_3$}; |
|
\node[state] (zeta1) [below of=Z1] {$\zeta_1$}; |
|
\node[state] (zeta2) [below of=Z2] {$\zeta_2$}; |
|
\node[state] (zeta3) [below of=Z3] {$\zeta_3$}; |
|
\node[state] (X) [below of=zeta3] {$x$}; |
|
|
|
\path (Z1) edge [>=,shorten >=0] (Z2) |
|
(Z1) edge [>=,bend left,shorten >=0] (Z3) |
|
(Z2) edge [>=,shorten >=0] (Z3) |
|
(Z1) edge (zeta1) |
|
(Z2) edge (zeta2) |
|
(Z3) edge (zeta3) |
|
(zeta1) edge (X) |
|
(zeta2) edge (X) |
|
(zeta3) edge (X); |
|
\end{tikzpicture}\label{fig:basic-graphical-model-prior}}\hfill \subfloat[Autoencoding term]{ |
|
\tikzsetnextfilename{fig-basic-autoencoder-loop} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, |
|
semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
|
|
\node[state] (Xstart) {$x$}; |
|
\node[state] (Z) [below of=Xstart] {$q$}; |
|
\node[state] (rho) [yshift=-0.4cm, right of=Z] {$\rho$}; |
|
\node[state] (zeta) [below of=Z] {$\zeta$}; |
|
\node[state] (Xend) [below of=zeta] {$x$}; |
|
|
|
\path (Xstart) edge node {$q(z=1|x,\phi)$} (Z) |
|
(Z) edge (zeta) |
|
(rho) edge [bend right=54] node[yshift=-0.2cm] {$\mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho)$} (zeta) |
|
(zeta) edge node {$p(x|\zeta, \phi)$} (Xend); |
|
\end{tikzpicture}\label{fig:basic-autoencoder-loop}}\caption{Graphical models of the smoothed approximating posterior (a) and prior (b), and the network realizing the autoencoding term of the ELBO from Equation~\ref{eq:original-VAE} (c). Continuous latent variables~$\zeta_i$ are smoothed analogs of discrete latent variables $z_i$, and insulate $z$ from the observed variables $x$ in the prior (b). This facilitates the marginalization of the discrete $z$ in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input $\rho \sim U[0,1]$.} |
|
\end{figure} |
|
|
|
Specifically, as shown in Figure~\ref{fig:basic-graphical-model-approx-post}, we augment the latent representation in the approximating posterior with continuous random variables $\zeta$,\footnote{We always use a variant of $z$ for latent variables. This is zeta, or Greek $z$. The discrete latent variables $z$ can conveniently be thought of as English $z$.} conditioned on the discrete latent variables $z$ of the RBM: |
|
\begin{align*} |
|
q(\zeta, z | x, \phi) &= r(\zeta | z) \cdot q(z | x, \phi), \qquad \text{where} \\ |
|
r(\zeta | z) &= \prod_i r(\zeta_i | z_i) . |
|
\end{align*} |
|
The support of $r(\zeta | z)$ for all values of $z$ must be connected, so the marginal distribution |
|
\mbox{$q(\zeta | x, \phi ) = \sum_z r(\zeta | z) \cdot q(z | x, \phi)$} |
|
has a constant, connected support so long as $0 < q(z | x, \phi) < 1$. |
|
We further require that $r(\zeta|z)$ is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of $q(\zeta | x, \phi)$ is differentiable in Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf}, as we discuss in Appendix~\ref{multivariable-VAE-inverse-CDF-section}. |
|
|
|
|
|
As shown in Figure~\ref{fig:basic-graphical-model-prior}, we correspondingly augment the prior with $\zeta$: |
|
\begin{equation*} |
|
p(\zeta, z | \theta) = r(\zeta | z) \cdot p(z | \theta) , |
|
\end{equation*} |
|
where $r(\zeta | z)$ is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on $\zeta$: |
|
\begin{equation} \label{eq:final-decoder} |
|
p(x | \zeta, z, \theta) = p(x | \zeta, \theta) . |
|
\end{equation} |
|
The smoothing distribution $r(\zeta | z)$ transforms the model into a continuous function of the distribution over $z$, and allows us to use Equations~\ref{eq:original-VAE} and~\ref{eq:vae-sampling} directly to obtain low-variance stochastic approximations to the gradient. |
|
|
|
Given this expansion, we can simplify Equations~\ref{eq:vae-sampling} and~\ref{eq:inv-cdf} by dropping the dependence on $z$ and applying Equation~\ref{discrete-VAE-derivation} of Appendix~\ref{multivariable-VAE-inverse-CDF-section}, which generalizes Equation~\ref{eq:vae-sampling}: |
|
\begin{equation} |
|
\frac{\partial}{\partial \phi} \mathbb{E}_{q(\zeta, z | x, \phi)} \left[ \log p(x | \zeta, z, \theta) \right] |
|
\approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p \left( x | \mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho), \theta \right). \label{discrete-VAE-approx-zeta} |
|
\end{equation} |
|
|
|
If the approximating posterior is factorial, then each $\G_i$ is an independent CDF, without conditioning or marginalization. |
|
|
|
As we shall demonstrate in Section~\ref{sec:spike-and-exponential}, $\mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho)$ is a function of $q(z = 1 | x, \phi)$, where \mbox{$q(z = 1 | x, \phi)$} is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation~\ref{discrete-VAE-approx-zeta} is shown in Figure~\ref{fig:basic-autoencoder-loop}. Initially, input $x$ is passed into a deterministic feedforward network $q(z = 1 | x, \phi)$, for which the final nonlinearity is the logistic function. Its output $q$, along with an independent random variable~$\rho \sim U[0,1]$, is passed into the deterministic function \mbox{$\mathbf{\G}_{q(\zeta | x, \phi)}^{-1}(\rho)$} to produce a sample of $\zeta$. This $\zeta$, along with the original input~$x$, is finally passed to $\log p\left(x | \zeta, \theta \right)$. The expectation of this log probability with respect to $\rho$ is the autoencoding term of the VAE formalism, as in Equation~\ref{eq:original-VAE}. Moreover, conditioned on the input and the independent $\rho$, this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient. |
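
The structure of this autoencoding pass can be summarized by the following sketch, in which \texttt{encoder\_logits}, \texttt{inverse\_smoothing\_cdf}, and \texttt{decoder\_log\_prob} are hypothetical stand-ins for the networks and the smoothing transformation; a concrete inverse CDF is derived in Section~\ref{sec:spike-and-exponential}.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One sample of the autoencoding term. encoder_logits, inverse_smoothing_cdf,
# and decoder_log_prob stand in for the parameterized networks and the
# smoothing transformation r(zeta | z).
def autoencoding_term_sample(x, encoder_logits, inverse_smoothing_cdf,
                             decoder_log_prob):
    q = sigmoid(encoder_logits(x))          # q(z = 1 | x, phi)
    rho = np.random.uniform(size=q.shape)   # independent noise, rho ~ U[0,1]
    zeta = inverse_smoothing_cdf(rho, q)    # deterministic and differentiable
    return decoder_log_prob(x, zeta)        # log p(x | zeta, theta)
\end{verbatim}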
|
|
|
\subsection{Spike-and-exponential smoothing transformation} \label{sec:spike-and-exponential} |
|
|
|
As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary $z$ to continuous $\zeta$: |
|
\begin{align*} |
|
r(\zeta_i | z_i = 0) &= |
|
\begin{cases} |
|
\infty , & \text {if } \zeta_i = 0 \\ |
|
0, & \text{otherwise} |
|
\end{cases} |
|
& F_{r(\zeta_i|z_i=0)}(\zeta') &= 1 \\ |
|
r(\zeta_i | z_i = 1) &= |
|
\begin{cases} |
|
\frac{\beta e^{\beta \zeta_i}}{e^{\beta} - 1}, & \text {if } 0 \leq \zeta_i \leq 1 \\
|
0, & \text{otherwise} |
|
\end{cases} |
|
& F_{r(\zeta_i|z_i=1)}(\zeta') &= \left. \frac{e^{\beta \zeta}}{e^{\beta} - 1} \right|_0^{\zeta'} = \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} |
|
\end{align*} |
|
where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d \zeta$ is the CDF of probability distribution $p$ in the domain $\left[0,1\right]$. |
|
This transformation from $z_i$ to $\zeta_i$ is invertible: $\zeta_i = 0 \Leftrightarrow z_i = 0$, and $\zeta_i > 0 \Leftrightarrow z_i = 1$ almost surely.\footnote{In the limit $\beta \rightarrow \infty$, $\zeta_i = z_i$ almost surely, and the continuous variables $\zeta$ can effectively be removed from the model. This trick can be used after training with finite $\beta$ to produce a model without smoothing variables $\zeta$.} |
|
|
|
|
|
We can now find the CDF for $q(\zeta | x, \phi)$ as a function of $q(z = 1 | x, \phi)$ in the domain $\left[0,1\right]$, marginalizing out the discrete $z$: |
|
\begin{align*} |
|
F_{q(\zeta | x, \phi)}(\zeta') &= (1 - q(z=1 | x, \phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') + q(z=1 | x, \phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') \\ &= q(z=1 | x, \phi) \cdot \left( \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1 \right) + 1 . |
|
\end{align*} |
|
|
|
|
|
|
|
To evaluate the autoencoder of Figure~\ref{fig:basic-autoencoder-loop}, and through it the gradient approximation of Equation~\ref{discrete-VAE-approx-zeta}, we must invert the conditional-marginal CDF $F_{q(\zeta | x, \phi)}$: |
|
\begin{equation} \label{eq:probabilistic-encoder} |
|
\G_{q(\zeta | x, \phi)}^{-1}(\rho) = |
|
\begin{cases} |
|
\frac{1}{\beta} \cdot \log \left[ \left(\frac{\rho + q - 1}{q} \right) \cdot \left(e^{\beta} - 1 \right) + 1 \right], & \text{if } \rho \geq 1-q \\ |
|
0, & \text{otherwise} |
|
\end{cases} |
|
\end{equation} |
|
where we use the substitution $q(z = 1 | x, \phi) \rightarrow q$ to simplify notation. |
|
For all values of the independent random variable~$\rho \sim U[0,1]$, the function $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ rectifies the input $q(z = 1 | x, \phi)$ if $q \leq 1 - \rho$ in a manner analogous to a rectified linear unit (ReLU), as shown in Figure~\ref{fig:inverse-cdf-spike-and-exp}. It is also quasi-sigmoidal, in that $\G^{-1}$ is increasing but concave-down if $q > 1 - \rho$. The effect of $\rho$ on $\G^{-1}$ is qualitatively similar to that of dropout~\citep{srivastava2014dropout}, depicted in Figure~\ref{fig:relu-and-dropout}, or the noise injected by batch normalization~\citep{ioffe2015batch} using small minibatches, shown in Figure~\ref{fig:relu-and-batch-norm}. |
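
A direct implementation of Equation~\ref{eq:probabilistic-encoder} is simple; the sketch below is vectorized over latent units, and the small numerical guard on $q$ is ours rather than part of the model.
\begin{verbatim}
import numpy as np

# Inverse conditional-marginal CDF of the spike-and-exponential smoothing:
# zeta = (1/beta) * log(((rho + q - 1)/q) * (exp(beta) - 1) + 1) if rho >= 1-q,
# and zeta = 0 otherwise.
def spike_and_exp_inverse_cdf(rho, q, beta=3.0):
    active = rho >= 1.0 - q
    ratio = np.where(active, (rho + q - 1.0) / np.maximum(q, 1e-12), 0.0)
    zeta = np.log1p(ratio * np.expm1(beta)) / beta
    return np.where(active, zeta, 0.0)

# The nonlinearity rectifies small q in a ReLU-like manner.
print(spike_and_exp_inverse_cdf(rho=0.5, q=np.linspace(0.01, 0.99, 5)))
\end{verbatim}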
|
|
|
\begin{figure}[tbh] |
|
\centering |
|
\subfloat[Spike-and-exp, $\beta \in \left\{ 1, 3, 5 \right\}$]{ |
|
\tikzsetnextfilename{fig-inverse-cdf-spike-and-exp} |
|
\begin{tikzpicture}\begin{axis}[ |
|
height=4.5cm, |
|
xlabel={$q(z = 1 | x, \phi)$}, |
|
y label style={at={(axis description cs:0.2,.5)},rotate=0,anchor=south}, |
|
ylabel={$\G_{q(\zeta | x, \phi)}^{-1}(\rho); \, f(x, \rho)$}, |
|
samples=200, |
|
xmin=0.0, xmax=1.0, |
|
ymin=-0.1, ymax=1.0]\addplot[red,dotted,thick,domain=1e-4:1.0] {1/1 * ln(((0.2 + x - 1)/x) * (exp(1) - 1) + 1 + 1e8*(0.2 < 1 - x)) * (0.2 >= 1 - x)}; |
|
\addplot[red,smooth,thick,domain=1e-4:1.0] {1/3 * ln(((0.2 + x - 1)/x) * (exp(3) - 1) + 1 + 1e8*(0.2 < 1 - x)) * (0.2 >= 1 - x)}; |
|
\addplot[red,dashed,thick,domain=1e-4:1.0] {1/5 * ln(((0.2 + x - 1)/x) * (exp(5) - 1) + 1 + 1e8*(0.2 < 1 - x)) * (0.2 >= 1 - x)}; |
|
\node[left, red] at (axis cs:0.82, 0.1) {\scriptsize $\rho = 0.2$};\addplot[green,dotted,thick,domain=1e-4:1.0] {1/1 * ln(((0.5 + x - 1)/x) * (exp(1) - 1) + 1 + 1e8*(0.5 < 1 - x)) * (0.5 >= 1 - x)}; |
|
\addplot[green,smooth,thick,domain=1e-4:1.0] {1/3 * ln(((0.5 + x - 1)/x) * (exp(3) - 1) + 1 + 1e8*(0.5 < 1 - x)) * (0.5 >= 1 - x)}; |
|
\addplot[green,dashed,thick,domain=1e-4:1.0] {1/5 * ln(((0.5 + x - 1)/x) * (exp(5) - 1) + 1 + 1e8*(0.5 < 1 - x)) * (0.5 >= 1 - x)}; |
|
\node[left, green] at (axis cs:0.51, 0.1) {\scriptsize $\rho = 0.5$};\addplot[blue,dotted,thick,domain=1e-4:1.0] {1/1 * ln(((0.8 + x - 1)/x) * (exp(1) - 1) + 1 + 1e8*(0.8 < 1 - x)) * (0.8 >= 1 - x)}; |
|
\addplot[blue,smooth,thick,domain=1e-4:1.0] {1/3 * ln(((0.8 + x - 1)/x) * (exp(3) - 1) + 1 + 1e8*(0.8 < 1 - x)) * (0.8 >= 1 - x)}; |
|
\addplot[blue,dashed,thick,domain=1e-4:1.0] {1/5 * ln(((0.8 + x - 1)/x) * (exp(5) - 1) + 1 + 1e8*(0.8 < 1 - x)) * (0.8 >= 1 - x)}; |
|
\node[left, blue] at (axis cs:0.3, 0.85) {\scriptsize $\rho = 0.8$};\end{axis}\end{tikzpicture}\label{fig:inverse-cdf-spike-and-exp}}\subfloat[ReLU with dropout]{ |
|
\tikzsetnextfilename{fig-relu-and-dropout} |
|
\begin{tikzpicture}\begin{axis}[ |
|
height=4.5cm, |
|
xlabel={$x$ \vphantom{$(q)$}}, |
|
yticklabels={,,}, |
|
samples=200, |
|
xmin=-1.0, xmax=1.0, |
|
ymin=-0.1, ymax=1.0]\addplot[red,smooth,thick,domain=-1.0:1.0] {0} |
|
node[pos=0.97, yshift=0.3cm](inactive){};\node[left, red] at (inactive) {\scriptsize $\rho < 0.5$};\addplot[blue,smooth,thick,domain=-1.0:1.0] {x * (x >= 0)} |
|
node[pos=0.8, xshift=-0.1cm](active){};\node[left, blue] at (active) {\scriptsize $\rho \geq 0.5$};\end{axis}\end{tikzpicture}\label{fig:relu-and-dropout}}\subfloat[ReLU with batch norm]{ |
|
\tikzsetnextfilename{fig-relu-and-batch-norm} |
|
\begin{tikzpicture}\begin{axis}[ |
|
height=4.5cm, |
|
xlabel={$x$ \vphantom{$(q)$}}, |
|
yticklabels={,,}, |
|
samples=200, |
|
xmin=-1.0, xmax=1.0, |
|
ymin=-0.1, ymax=1.0]\addplot[red,dotted,thick,domain=-1.0:1.0] {(x - 0.3) * ((x - 0.3) >= 0)}; |
|
\addplot[red,dashed,thick,domain=-1.0:1.0] {(x + 0.3) * ((x + 0.3) >= 0)} |
|
node[pos=0.65, yshift=0.3cm](shift){}node[pos=0.55, yshift=0.3cm](scale){}node[pos=0.45, yshift=0.3cm](nonoise){};\node[left, red] at (shift) {\scriptsize $x \pm 0.3$};\addplot[green,dotted,thick,domain=-1.0:1.0] {(0.7 * x) * ((0.7 * x) >= 0)}; |
|
\addplot[green,dashed,thick,domain=-1.0:1.0] {(1.3 * x) * ((1.3 * x) >= 0)}; |
|
\node[left, green] at (scale) {\scriptsize $x \cdot (1 \pm 0.3)$};\addplot[blue,smooth,thick,domain=-1.0:1.0] {x * (x >= 0)}; |
|
\node[left, blue] at (nonoise) {\scriptsize no noise};\end{axis}\end{tikzpicture}\label{fig:relu-and-batch-norm}}\caption{Inverse CDF of the spike-and-exponential smoothing transformation for \mbox{$\rho \in \left\{0.2, 0.5, 0.8\right\}$}; \mbox{$\beta = 1$ (dotted),} \mbox{$\beta = 3$ (solid),} and \mbox{$\beta = 5$ (dashed)} (a). Rectified linear unit with dropout rate $0.5$~(b). Shift (red) and scale (green) noise from batch normalization; with magnitude $0.3$~(dashed), $-0.3$~(dotted), or $0$~(solid blue); before a rectified linear unit~(c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ from Figure~\ref{fig:basic-autoencoder-loop}, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).}
|
\label{fig:inverse-cdf} |
|
\end{figure} |
|
|
|
|
|
|
|
Other expansions to the continuous space are possible. |
|
In Appendix~\ref{two-sided-mixture}, we consider the case where both $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$ are linear functions of $\zeta$; |
|
in Appendix~\ref{sec:spike-and-slab}, we develop |
|
a spike-and-slab transformation; and in Appendix~\ref{input-dependent-tranformation-section}, we explore a spike-and-Gaussian transformation where the continuous $\zeta$ is directly dependent on the input $x$ in addition to the discrete $z$. |
|
|
|
|
|
\section{Accommodating explaining-away with a hierarchical approximating posterior} \label{sec:gradient-of-KL-divergence} |
|
|
|
When a probabilistic model is defined in terms of a prior distribution~$p(z)$ and a conditional distribution~$p(x|z)$, the observation of~$x$ often induces strong correlations in the posterior~$p(z|x)$ due to phenomena such as explaining-away~\citep{pearl1988probabilistic}. Moreover, we wish to use an RBM as the prior distribution (Equation~\ref{eq:RBM-distribution}), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions~(e.g., mean-field methods, but also~\citet{kingma2014auto, rezende2014stochastic}). |
|
|
|
To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior~$q(z|x)$ over the discrete latent variables. |
|
Specifically, we divide the latent variables $z$ of the RBM into disjoint groups, $z_1, \dots, z_k$,\footnote{The continuous latent variables $\zeta$ are divided into complementary disjoint groups $\zeta_1, \dots, \zeta_k$.} and define the approximating posterior via a directed acyclic graphical model over these groups: |
|
\begin{align} |
|
q(z_1, \zeta_1, \dots, z_k, \zeta_k | x, \phi) |
|
&= \prod_{1 \leq j \leq k} r(\zeta_j | z_j) \cdot q\left(z_j | \zeta_{i<j}, x, \phi \right) \qquad \text{where} \nonumber \\ |
|
q(z_j | \zeta_{i<j}, x, \phi) &= \frac{e^{g_j(\zeta_{i<j}, x, \phi)^{\top} \cdot z_j}}{\prod_{z_{\iota} \in z_j} \left(1 + e^{g_{z_{\iota}}(\zeta_{i<j}, x, \phi)} \right)} , |
|
\label{eq:approx-post-dist} |
|
\end{align} |
|
$z_j \in \left\{0, 1 \right\}^n$, and $g_j(\zeta_{i < j}, x, \phi)$ is a parameterized function of the inputs and preceding $\zeta_i$, such as a neural network. |
|
The corresponding graphical model is depicted in Figure~\ref{fig:hierarchical-approx-post}, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix~\ref{multivariable-VAE-inverse-CDF-section}. |
|
If each group $z_j$ contains a single variable, this dependence structure is analogous to that of a deep autoregressive network~\citep[DARN;][]{gregor2014deep}, and can represent any distribution. However, the dependence of $z_j$ on the preceding discrete variables $z_{i<j}$ is always mediated by the continuous variables $\zeta_{i<j}$. |
|
|
|
|
|
\begin{figure}[tbh] |
|
\centering |
|
\subfloat[Hierarch approx post $q(\zeta, z | x)$] { |
|
\tikzsetnextfilename{fig-hierarchical-approx-post} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
|
|
\node[state] (Z1) {$z_1$}; |
|
\node[state] (Z2) [xshift=0.2cm,right of=Z1] {$z_2$}; |
|
\node[state] (Z3) [xshift=0.2cm,right of=Z2] {$z_3$}; |
|
\node[state] (X) [above of=Z1] {$x$}; |
|
\node[state] (zeta1) [below of=Z1] {$\zeta_1$}; |
|
\node[state] (zeta2) [below of=Z2] {$\zeta_2$}; |
|
\node[state] (zeta3) [below of=Z3] {$\zeta_3$}; |
|
\node[state, draw=none] (Xinv) [below of=zeta3] {}; |
|
|
|
\path (X) edge (Z1) |
|
edge (Z2) |
|
edge (Z3) |
|
(Z1) edge (zeta1) |
|
(Z2) edge (zeta2) |
|
(Z3) edge (zeta3) |
|
(zeta1) edge (Z2) |
|
(zeta2) edge (Z3); |
|
\draw [->] (zeta1) to [out=-30,in=180] ($(zeta2) - (0,0.7)$) to [out=0, in=-115] (Z3); |
|
\end{tikzpicture}\label{fig:hierarchical-approx-post}}\hfill \subfloat[Hierarchical ELBO autoencoding term] { |
|
\tikzsetnextfilename{fig-hierarchical-autoencoding-loop} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
|
|
\node[state] (Z1) {$q_1$}; |
|
\node[state] (Z2) [xshift=1.0cm,right of=Z1] {$q_2$}; |
|
\node[state] (Z3) [xshift=1.0cm,right of=Z2] {$q_3$}; |
|
\node[state] (Xstart) [above of=Z1] {$x$}; |
|
\node[state] (zeta1) [yshift=-0.2cm, below of=Z1] {$\zeta_1$}; |
|
\node[state] (zeta2) [yshift=-0.2cm, below of=Z2] {$\zeta_2$}; |
|
\node[state] (zeta3) [yshift=-0.2cm, below of=Z3] {$\zeta_3$}; |
|
\node[state] (rho1) [xshift=0.5cm, yshift=0.6cm, left of=zeta1] {$\rho$}; |
|
\node[state] (rho2) [xshift=0.5cm, yshift=0.6cm, left of=zeta2] {$\rho$}; |
|
\node[state] (rho3) [xshift=0.5cm, yshift=0.6cm, left of=zeta3] {$\rho$}; |
|
\node[state] (Xend) [below of=zeta3] {$x$}; |
|
|
|
\path (Xstart) edge node {$q$} (Z1) |
|
edge node[pos=0.81] {$q$} (Z2) |
|
edge node[pos=0.83] {$q(z_3=1|\zeta_{i<3}, x, \phi)$} (Z3) |
|
(Z1) edge node[pos=0.3] {$\mathbf{\G}^{-1}$} (zeta1) |
|
(Z2) edge node[pos=0.3] {$\mathbf{\G}^{-1}$} (zeta2) |
|
(Z3) edge node[pos=0.4] {$\mathbf{\G}_{q_3(\zeta_3 | \zeta_{i<3}, x, \phi)}^{-1}(\rho)$} (zeta3) |
|
(rho1) edge [bend left=58] (zeta1) |
|
(rho2) edge [bend left=58] (zeta2) |
|
(rho3) edge [bend left=58] (zeta3) |
|
(zeta3) edge node {$p(x | \zeta, \phi)$} (Xend); |
|
\draw [->] (zeta1) to [out=30,in=-110] ($(rho2) - (0.7,0)$) to [out=80, in=148] (Z2); |
|
\draw [->] (zeta2) to [out=30,in=-90] ($(rho3) - (0.8,0)$) to [out=80, in=162] (Z3); |
|
\draw [->] (zeta1) to [out=-30,in=190] ($(zeta2) - (0,0.7)$) to [out=10,in=-100] ($(rho3) - (0.6,0)$) to [out=80, in=162] (Z3); |
|
\draw [->] (zeta1) to [out=-60,in=-170] ($(zeta2) - (0,1.3)$) to [out=10,in=90] (Xend); |
|
\draw [->] (zeta2) to [out=-40,in=90] (Xend); |
|
\end{tikzpicture}\label{fig:hierarchical-autoencoding-loop}}\caption{Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation~\ref{eq:original-VAE}. Discrete latent variables $z_j$ only depend on the previous $z_{i<j}$ through their smoothed analogs $\zeta_{i<j}$. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input $\rho$.} |
|
\end{figure} |
|
|
|
|
|
|
|
This hierarchical approximating posterior does not affect the form of the autoencoding term in Equation~\ref{discrete-VAE-approx-zeta}, except to increase the depth of the autoencoder, as shown in Figure~\ref{fig:hierarchical-autoencoding-loop}. |
|
The deterministic probability value $q(z_j = 1 | \zeta_{i<j}, x, \phi)$ of Equation~\ref{eq:approx-post-dist} is parameterized, generally by a neural network, in a manner analogous to Section~\ref{sec:continuous-variable}. However, the final logistic function is made explicit in Equation~\ref{eq:approx-post-dist} to simplify Equation~\ref{eq:low-var-J-grad}. |
|
For each successive layer $j$ of the autoencoder, input $x$ and all previous $\zeta_{i<j}$ are passed into the network computing $q(z = 1 | \zeta_{i<j}, x, \phi)$. |
|
Its output $q_j$, along with an independent random variable $\rho \sim U[0,1]$, is passed to the deterministic function \mbox{$\mathbf{\G}_{q(\zeta_j | \zeta_{i<j}, x, \phi)}^{-1}(\rho)$} to produce a sample of $\zeta_j$. Once all $\zeta_j$ have been recursively computed, the full $\zeta$ along with the original input $x$ is finally passed to $\log p\left(x | \zeta, \theta \right)$. The expectation of this log probability with respect to $\rho$ is again the autoencoding term of the VAE formalism, as in Equation~\ref{eq:original-VAE}. |
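
This recursive computation can be sketched as follows; the list of networks \texttt{g} and the helpers \texttt{inverse\_smoothing\_cdf} and \texttt{decoder\_log\_prob} are illustrative placeholders, not our released implementation.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One sample of the hierarchical autoencoding term. g is a list of networks
# g_j(x, zeta_{i<j}); inverse_smoothing_cdf is the smoothing transformation
# (e.g., spike-and-exponential); decoder_log_prob evaluates log p(x | zeta).
def hierarchical_autoencoding_sample(x, g, inverse_smoothing_cdf,
                                     decoder_log_prob):
    zetas = []
    for g_j in g:                             # one group z_j, zeta_j per layer
        q_j = sigmoid(g_j(x, zetas))          # q(z_j = 1 | zeta_{i<j}, x, phi)
        rho = np.random.uniform(size=q_j.shape)
        zetas.append(inverse_smoothing_cdf(rho, q_j))
    zeta = np.concatenate(zetas)
    return decoder_log_prob(x, zeta)
\end{verbatim}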
|
|
|
In Appendix~\ref{sec:kl-gradient}, we show that the gradients of the remaining KL term of the ELBO (Equation~\ref{eq:original-VAE}) can be estimated stochastically using: |
|
\begin{align} |
|
\frac{\partial}{\partial \theta} \KL\left[q || p \right] |
|
&= \mathbb{E}_{q(z_1|x, \phi)} \left[ \cdots \left[ \mathbb{E}_{q(z_k | \zeta_{i<k}, x, \phi)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \right] \right] |
|
- \mathbb{E}_{p(z | \theta)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \qquad \text{and} \label{eq:RBM-grad} \\ \frac{\partial}{\partial \phi} \KL \left[ q || p \right] &= |
|
\mathbb{E}_{\rho} \left[ \left( g(x, \zeta) - \h \right)^{\top} \cdot \frac{\partial q}{\partial \phi} |
|
- z^{\top} \cdot \J \cdot \left(\frac{1-z}{1 - q} \odot \frac{\partial q}{\partial \phi} \right) \right] . \label{eq:low-var-J-grad} |
|
\end{align} |
|
In particular, Equation~\ref{eq:low-var-J-grad} is substantially lower variance than the naive approach to calculate $\frac{\partial}{\partial \phi} \KL \left[ q || p \right]$, based upon REINFORCE. |
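
For the RBM energy $E_p(z, \theta) = -\left(z^\top \J z + \h^\top z \right)$, Equation~\ref{eq:RBM-grad} reduces to a difference of positive-phase and negative-phase statistics. A minimal sketch, assuming binary samples of $z$ from the hierarchical approximating posterior and from persistent Gibbs chains on the prior, is:
\begin{verbatim}
import numpy as np

# Stochastic estimate of the theta-gradient of the KL term for
# E_p(z) = -(z'Wz + b'z). z_post: samples of z under the approximating
# posterior, shape (batch, n); z_model: samples from persistent Gibbs
# chains on the RBM prior, shape (chains, n).
def kl_grad_theta(z_post, z_model):
    pos_W = (z_post[:, :, None] * z_post[:, None, :]).mean(axis=0)
    neg_W = (z_model[:, :, None] * z_model[:, None, :]).mean(axis=0)
    grad_W = np.triu(neg_W - pos_W, k=1)   # dE/dW = -zz'; keep W triangular
    grad_b = z_model.mean(axis=0) - z_post.mean(axis=0)
    return grad_W, grad_b
\end{verbatim}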
|
|
|
|
|
\section{Modelling continuous deformations with a hierarchy of continuous latent variables} \label{sec:continuous-hierarchy} |
|
|
|
We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks~\citep{goodfellow2014generative} and traditional variational autoencoders~\citep{kingma2014auto, rezende2014stochastic}. When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects. |
|
|
|
Specifically, we augment the latent representation with continuous random variables $\hz$,\footnote{We always use a variant of $z$ for latent variables. This is Fraktur $z$, or German $z$.} and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models. |
|
We use the same autoregressive variable order for the approximating posterior as for the prior, as in DRAW~\citep{gregor2015draw}, variational recurrent neural networks~\citep{chung2015recurrent}, the deep VAE of~\citet{salimans2016structured}, and ladder networks~\citep{rasmus2015semi, sonderby2016ladder}. We discuss the motivation for this ordering in Appendix~\ref{hierarchy-discussion-section}. |
|
|
|
|
|
\begin{figure}[tb] |
|
\centering |
|
\hspace*{\fill}\subfloat[Approx post w/ cont latent vars $q(\hz, \zeta, z | x)$]{ |
|
\tikzsetnextfilename{fig-full-graphical-model-approx-post} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, |
|
semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
|
|
\node[state] (Z) {$z$}; |
|
\node[state] (zeta) [yshift=-0.0cm,below of=Z] {$\zeta$}; |
|
\node[state] (hz1) [xshift=0.0cm,right of=Z] {$\hz_1$}; |
|
\node[state] (hz2) [xshift=0.0cm,right of=hz1] {$\hz_2$}; |
|
\node[state] (hz3) [xshift=0.0cm,right of=hz2] {$\hz_3$}; |
|
\node[state] (X) [above of=Z] {$x$}; |
|
\node[state, draw=none] (Xinv) [yshift=-1.0cm, below of=hz3] {}; |
|
|
|
\path (X) edge (Z) |
|
edge [bend left=5] (hz1) |
|
edge [bend left=10] (hz2) |
|
edge [bend left=15] (hz3) |
|
(Z) edge (zeta) |
|
(zeta) edge [bend right] (hz1) |
|
edge [bend right] (hz2) |
|
edge [bend right=50] (hz3) |
|
edge [bend right, draw=none] (Xinv) |
|
(hz1) edge [bend right] (hz2) |
|
edge [bend right=50] (hz3) |
|
(hz2) edge [bend right] (hz3); |
|
|
|
\begin{pgfonlayer}{background} |
|
\filldraw [line width=6mm,join=round,black!10] |
|
(Z.north -| zeta.east) rectangle (zeta.south -| Z.west); |
|
\end{pgfonlayer} |
|
\end{tikzpicture}\label{fig:full-graphical-model-approx-post}}\hfill \subfloat[Prior w/ cont latent vars $p(x, \hz, \zeta, z)$]{ |
|
\tikzsetnextfilename{fig-full-graphical-model-prior} |
|
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick] |
|
\tikzstyle{every state}=[fill=white,draw=black,text=black] |
|
\node[state] (Z) {$z$}; |
|
\node[state] (zeta) [below of=Z] {$\zeta$}; |
|
\node[state] (hz1) [xshift=0.0cm,right of=Z] {$\hz_1$}; |
|
\node[state] (hz2) [xshift=0.0cm,right of=hz1] {$\hz_2$}; |
|
\node[state] (hz3) [xshift=0.0cm,right of=hz2] {$\hz_3$}; |
|
\node[state] (X) [yshift=-1.0cm, below of=hz3] {$x$}; |
|
|
|
\path (Z) edge (zeta) |
|
(zeta) edge [bend right] (hz1) |
|
edge [bend right] (hz2) |
|
edge [bend right=50] (hz3) |
|
edge [bend right] (X) |
|
(hz1) edge [bend right=20] (X) |
|
edge [bend right] (hz2) |
|
edge [bend right=50] (hz3) |
|
(hz2) edge (X) |
|
edge [bend right] (hz3) |
|
(hz3) edge [bend left=10] (X); |
|
|
|
\begin{pgfonlayer}{background} |
|
\filldraw [line width=6mm,join=round,black!10] |
|
(Z.north -| zeta.east) rectangle (zeta.south -| Z.west); |
|
\end{pgfonlayer} |
|
\end{tikzpicture}\label{fig:full-graphical-model-prior}} |
|
\hspace*{\fill}\caption{Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures~\ref{fig:hierarchical-approx-post} and~\ref{fig:basic-graphical-model-prior} respectively. The continuous latent variables $\hz$ build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables $z$, which can represent the discrete types of objects in the image.} |
|
\label{fig:full-graphical-model} |
|
\end{figure} |
|
|
|
|
|
The directed graphical models of the approximating posterior and the prior are defined by:
|
\begin{align} |
|
q(\hz_0, \dots, \hz_n | x, \phi) &= \prod_{0 \leq m \leq n} q\left(\hz_m | \hz_{l<m}, x, \phi \right) \qquad \text{and} \nonumber \\ |
|
p(\hz_0, \dots, \hz_n | \theta) &= \prod_{0 \leq m \leq n} p\left(\hz_m | \hz_{l<m}, \theta \right) . \label{eq:continuous-hierarchy} |
|
\end{align} |
|
The full set of latent variables associated with the RBM is now denoted by \mbox{$\hz_0 = \left\{z_1, \zeta_1, \dots, z_k, \zeta_k \right\}$}. However, the conditional distributions in Equation~\ref{eq:continuous-hierarchy} only depend on the continuous $\zeta_j$. Each $\hz_{m \geq 1}$ denotes a layer of continuous latent variables, and |
|
Figure~\ref{fig:full-graphical-model} shows the resulting graphical model. |
|
|
|
The ELBO decomposes as: |
|
\begin{equation} |
|
\mathcal{L}(x, \theta, \phi) = |
|
\mathbb{E}_{q(\hz |x, \phi)} \left[ \log p(x | \hz, \theta) \right] |
|
- \sum_m \mathbb{E}_{q(\hz_{l<m} | x, \phi)} \left[ \KL \left[ q(\hz_m | \hz_{l<m}, x, \phi) || p(\hz_m | \hz_{l<m}, \theta) \right] \right] . |
|
\label{vae-loss-input-dependent-zeta-hierarchical} |
|
\end{equation} |
|
|
|
|
|
|
|
If both $q(\hz_m | \hz_{l<m}, x, \phi)$ and $p(\hz_m | \hz_{l<m}, \theta)$ are Gaussian, then their KL~divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the $q(\hz_{l<m} | x, \phi)$ using the traditional reparameterization trick, described in Section~\ref{sec:ELBO}. |
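
For reference, the closed-form KL divergence between diagonal Gaussians, parameterized by means and log-standard deviations, can be computed as in the following sketch of the standard formula:
\begin{verbatim}
import numpy as np

# KL[ N(m_q, exp(2 log_s_q)) || N(m_p, exp(2 log_s_p)) ] with diagonal
# covariances, summed over dimensions.
def diag_gaussian_kl(m_q, log_s_q, m_p, log_s_p):
    var_q, var_p = np.exp(2 * log_s_q), np.exp(2 * log_s_p)
    return np.sum(log_s_p - log_s_q
                  + (var_q + (m_q - m_p) ** 2) / (2 * var_p) - 0.5)
\end{verbatim}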
|
|
|
|
|
|
|
\section{Results} \label{results-section} |
|
|
|
Discrete variational autoencoders comprise a smoothed RBM (Section~\ref{sec:continuous-variable}) with a hierarchical approximating posterior (Section~\ref{sec:gradient-of-KL-divergence}), followed by a hierarchy of continuous latent variables (Section~\ref{sec:continuous-hierarchy}). |
|
We parameterize all distributions with neural networks, except the smoothing distribution $r(\zeta | z)$ discussed in Section~\ref{sec:continuous-variable}. Like NVIL~\citep{mnih2014neural} and VAEs~\citep{kingma2014auto, rezende2014stochastic}, we define all approximating posteriors $q$ to be explicit functions of $x$, with parameters $\phi$ shared between all inputs $x$. For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation~\ref{eq:approx-post-dist}; for the continuous~$\hz$, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neural networks parameterizing the distributions over $z$, $\hz$, and $x$ consists of a linear transformation, batch normalization~\citep{ioffe2015batch} (but see Appendix~\ref{sec:laplacian-batch-normalization}), and a rectified-linear pointwise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM prior $p(z | \theta)$ in Equation~\ref{eq:RBM-grad} using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence~\citep{tieleman2008training}. We minimize the ELBO using ADAM~\citep{kingma2015adam} with a decaying step size. |
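
The negative-phase samples for Equation~\ref{eq:RBM-grad} are drawn by block Gibbs sampling over the two sides of the bipartite split, on chains that persist across minibatches; the sketch below uses illustrative variable names and is not taken from our implementation.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Block Gibbs sampling on persistent chains for a bipartite Boltzmann machine
# with coupling W_ab between sides a and b, and biases b_a, b_b.
# z_a: (chains, n_a), z_b: (chains, n_b); the returned states seed the chains
# for the next minibatch.
def persistent_block_gibbs(z_a, z_b, W_ab, b_a, b_b, n_steps, rng):
    for _ in range(n_steps):
        p_a = sigmoid(z_b @ W_ab.T + b_a)   # p(z_a = 1 | z_b)
        z_a = (rng.uniform(size=p_a.shape) < p_a).astype(float)
        p_b = sigmoid(z_a @ W_ab + b_b)     # p(z_b = 1 | z_a)
        z_b = (rng.uniform(size=p_b.shape) < p_b).astype(float)
    return z_a, z_b
\end{verbatim}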
|
|
|
|
|
The hierarchical structure of Section~\ref{sec:continuous-hierarchy} is very powerful, and overfits without strong regularization of the prior, as shown in Appendix~\ref{sec:architecture}. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the input $p(x | \zeta, \theta)$ without any deterministic hidden layers, except on Omniglot. Moreover, all other neural networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over $\hz$. We present the details of the architecture in Appendix~\ref{sec:architecture}. |
|
|
|
We train the resulting discrete VAEs on the permutation-invariant MNIST~\citep{lecun1998gradient}, Omniglot\footnote{We use the partitioned, preprocessed Omniglot dataset of~\citet{burda2016importance}, available from \url{https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT}.}~\citep{lake2013one}, and Caltech-101 Silhouettes datasets~\citep{marlin2010inductive}. For MNIST, we use both the static binarization of~\citet{salakhutdinov2008quantitative} and dynamic binarization. Estimates of the log-likelihood\footnote{The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix~\ref{sec:log-partition-function}.} of these models, computed using the method of~\citet{burda2016importance} with $10^4$ importance-weighted samples,
|
are listed in Table~\ref{tab:mnist-omniglot-caltech}. |
|
The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are $0.08$, $0.04$, $0.05$, and $0.11$ for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively.
|
Removing the RBM reduces the test set log-likelihood by $0.09$, $0.37$, $0.69$, and $0.66$. |
|
|
|
|
|
\begin{table}[tbh] |
|
\centering |
|
\begin{tabular}{lrp{0.5cm}lrr} |
|
\toprule |
|
\multicolumn{2}{c}{MNIST (dynamic binarization)} & & \multicolumn{3}{c}{MNIST (static binarization)} \\ |
|
& {\bf LL} & & & {\bf ELBO} & {\bf LL} \\ \cmidrule{1-2} \cmidrule{4-6} |
|
DBN & -84.55 & & |
|
HVI & -88.30 & -85.51 \\ |
|
IWAE & -82.90 & & |
|
DRAW & -87.40 & \\ |
|
Ladder VAE & -81.74 & & |
|
NAIS NADE & & -83.67 \\ |
|
Discrete VAE & {\bf -80.15} & & |
|
Normalizing flows & -85.10 & \\ |
|
& & & |
|
Variational Gaussian process & & -81.32 \\ |
|
& & & |
|
Discrete VAE & {\bf -84.58} & {\bf -81.01} \\ |
|
\toprule |
|
\multicolumn{2}{c}{Omniglot} & & \multicolumn{3}{c}{Caltech-101 Silhouettes} \\ |
|
& {\bf LL} & & & & {\bf LL} \\ \cmidrule{1-2} \cmidrule{4-6} |
|
IWAE & -103.38 & & |
|
IWAE & & -117.2 \\ |
|
Ladder VAE & -102.11 & & |
|
RWS SBN & & -113.3 \\ |
|
RBM & -100.46 & & |
|
RBM & & -107.8 \\ |
|
DBN & -100.45 & & |
|
NAIS NADE & & -100.0 \\ |
|
Discrete VAE & {\bf -97.43} & & |
|
Discrete VAE & & {\bf -97.6} \\ |
|
\bottomrule |
|
\end{tabular} |
|
\caption{Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with $10^4$ importance-weighted samples~\citep{burda2016importance}. For comparison, we also report performance of some recent state-of-the-art techniques. Full names and references are listed in Appendix~\ref{sec:comparison-models}.} |
|
\label{tab:mnist-omniglot-caltech} |
|
\end{table} |
|
|
|
|
|
|
|
We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. |
|
Figure~\ref{fig:mnist-evolution} shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. |
|
The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. |
|
Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure~\ref{fig:mnist-evolution} indicates that the RBM prior has well-separated modes. |
|
The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; |
|
at least tens of thousands of iterations of single-temperature block Gibbs sampling are required to mix between the modes. We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix~\ref{sec:supplementary-results}.
|
|
|
\begin{figure}[tb] |
|
\centering |
|
\includegraphics[width=\textwidth]{evolution_dyn_mnist} |
|
\caption{Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (or occasionally two) digit IDs, despite being trained in a wholly unsupervised manner.} |
|
\label{fig:mnist-evolution} |
|
\end{figure} |
|
|
|
The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. |
|
Figure~\ref{fig:ll-vs-gibbs-sampling} shows that performance\footnote{All models in Figure~\ref{fig:ll-vs-rbm-params} use only $10$ layers of continuous latent variables, for computational efficiency.} improves as we increase the number of iterations of block Gibbs sampling performed per minibatch |
|
on the RBM prior: $p(z | \theta)$ in Equation~\ref{eq:RBM-grad}. This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as |
|
parallel tempering~\citep{swendsen1986replica}. |
|
|
|
\begin{figure}[tb] |
|
\centering \subfloat[Block Gibbs iterations]{ |
|
\tikzsetnextfilename{fig-ll-vs-gibbs-sampling} |
|
\begin{tikzpicture}\begin{semilogxaxis}[ |
|
width=5.25cm, |
|
ylabel={Log likelihood}, |
|
log ticks with fixed point, |
|
ymin=-80.55, ymax=-80.25]\addplot[color=blue,mark=*] coordinates { |
|
(1, -80.43) |
|
(5, -80.43) |
|
(10, -80.38) |
|
(50, -80.39) |
|
(100, -80.34) |
|
(500, -80.30) |
|
};\end{semilogxaxis}\end{tikzpicture}\label{fig:ll-vs-gibbs-sampling}}\subfloat[Num RBM units]{ |
|
\tikzsetnextfilename{fig-ll-vs-num-rbm-units} |
|
\begin{tikzpicture}\begin{semilogxaxis}[ |
|
width=5.25cm, |
|
log ticks with fixed point, |
|
xtick={8,16,32,64,128}, |
|
yticklabels={,,}, |
|
ymin=-80.55, ymax=-80.25]\addplot[color=blue,mark=*] coordinates { |
|
(8, -80.5188464) |
|
(16, -80.4732307) |
|
(32, -80.3952749) |
|
(64, -80.2914739) |
|
(96, -80.3484205) |
|
(128, -80.3406563) |
|
};\end{semilogxaxis}\end{tikzpicture}\label{fig:ll-vs-num-rbm-units}}\subfloat[RBM approx post layers]{ |
|
\tikzsetnextfilename{fig-ll-vs-rbm-approx-post-layers} |
|
\begin{tikzpicture}\begin{semilogxaxis}[ |
|
width=5.25cm, |
|
log ticks with fixed point, |
|
xtick=data, |
|
yticklabels={,,}, |
|
ymin=-80.55, ymax=-80.25]\addplot[color=blue,mark=*] coordinates { |
|
(1, -80.51978) |
|
(2, -80.27014) |
|
(4, -80.28587) |
|
(8, -80.39957) |
|
};\end{semilogxaxis}\end{tikzpicture}\label{fig:ll-vs-rbm-approx-post-layers}}\caption{Log likelihood versus the number of iterations of block Gibbs sampling per minibatch~(a), the number of units in the RBM~(b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better performance, but the network is robust to the size of the RBM (b).}\label{fig:ll-vs-rbm-params}\end{figure} |
|
|
|
|
|
Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure~\ref{fig:ll-vs-num-rbm-units}, the log-likelihood plateaus once the number of units in the RBM reaches $64$.
|
Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes. |
|
|
|
The benefit of the hierarchical approximating posterior over the RBM, introduced in Section~\ref{sec:gradient-of-KL-divergence}, is apparent from Figure~\ref{fig:ll-vs-rbm-approx-post-layers}. The reduction in performance when moving from $4$ to $8$ layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting. |
|
|
|
|
|
|
|
\section{Conclusion} \label{discussion-section} |
|
|
|
Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable~\citep{bengio2013estimating}. |
|
|
|
We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL~divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does not contribute to the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
|
|
|
|
|
|
|
\subsubsection*{Acknowledgements} |
|
|
|
Zhengbing Bian, Fabian Chudak, and Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix~\ref{sec:engineering-smoothing-transformations}.
|
|
|
\bibliography{dvae} |
|
\bibliographystyle{iclr2017_conference} |
|
|
|
\begin{appendix} |
|
|
|
\section{Multivariate VAEs based on the cumulative distribution function} \label{multivariable-VAE-inverse-CDF-section} |
|
|
|
The reparameterization trick is always possible if the cumulative distribution function (CDF) of $q(z|x, \phi)$ is invertible, and the inverse CDF is differentiable, as noted in~\citet{kingma2014auto}. However, for multivariate distributions, the CDF is defined by: |
|
\begin{equation*} |
|
F(\mathbf{x}) = \int_{x'_1 = -\infty}^{x_1} \cdots \int_{x'_n = -\infty}^{x_n} p(x'_1, \dots, x'_n) \, dx'_n \cdots dx'_1 .
|
\end{equation*} |
|
The multivariate CDF maps $\mathbb{R}^n \rightarrow \left[0, 1 \right]$, and is generally \emph{not} invertible.\footnote{For instance, for the bivariate uniform distribution on $\left[0, 1 \right]^2$, the CDF is $F(x, y) = x \cdot y$ for $0 \leq x,y \leq 1$, so for any $0 \leq c \leq 1$ and $c \leq x \leq 1$, $y = \frac{c}{x}$ yields $F(x, y) = c$. Clearly, many different pairs $(x, y)$ yield each possible value $c$ of $F(x,y)$.}
|
|
|
In place of the multivariate CDF, |
|
consider the set of conditional-marginal CDFs defined by:\footnote{ |
|
The set of marginal CDFs, used to define copulas, is invertible. However, it does not generally map the original distribution to a simple joint distribution, such as a multivariate uniform distribution, as required for variational autoencoders. In Equation~\ref{discrete-VAE-derivation}, $\left| \det \left( \frac{ \partial \mathbf{\G}_{q(z|x, \phi)}^{-1}(\rho) }{\partial \rho} \right) \right|$ does not cancel out $q \left( \mathbf{\G}_{q(z|x, \phi)}^{-1}(\rho) | x, \phi \right)$. The determinant of the inverse Jacobian is instead $\left[ \prod_i q\left(z_i = \G_i^{-1}(\rho) \right) \right]^{-1}$, which differs from $\left[ q \left(\mathbf{\G}_{q(z|x, \phi)}^{-1}(\rho) \right) \right]^{-1}$ if $q$ is not factorial. As a result, we do not recover the variational autoencoder formulation of Equation~\ref{discrete-VAE-derivation}.
|
} |
|
\begin{equation} \label{conditional-marginal-cdf} |
|
\G_i(\mathbf{x}) = \int_{x_i' = -\infty}^{x_i} p\left(x_i' | x_1, \dots, x_{i-1} \right) dx_i' .
|
\end{equation} |
|
That is, $\G_j(\mathbf{x})$ is the CDF of $x_j$, conditioned on all $x_i$ such that $i < j$, and marginalized over all $x_k$ such that $j < k$. The range of each $\G_j$ is $\left[ 0, 1 \right]$, so $\mathbf{\G}$ maps the domain of the original distribution to~$\rho \in \left[0, 1\right]^n$. To invert $\mathbf{\G}$, we need only invert each conditional-marginal CDF in turn, conditioning $x_j = \G_j^{-1}(\mathbf{\rho})$ on $x_1 = \G_1^{-1}(\mathbf{\rho}), \dots, x_{j-1} = \G_{j-1}^{-1}(\mathbf{\rho})$. These inverses exist so long as the conditional-marginal probabilities are everywhere nonzero.
|
It is not problematic to effectively define $\G_j^{-1}(\rho)$ based upon $x_{i < j}$, rather than $\rho_{i < j}$, since by induction we can uniquely determine $x_{i < j}$ given $\rho_{i < j}$. |
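To make the sequential inversion concrete, the following Python sketch (our own illustration, not part of the model) inverts the conditional-marginal CDFs of a toy correlated bivariate Gaussian; the function name \texttt{inverse\_conditional\_marginal\_cdf} and the specific covariance are assumptions chosen for the example.
\begin{verbatim}
# Minimal sketch: sequentially invert the conditional-marginal CDFs
# G = (F_1, F_{2|1}) of a correlated bivariate Gaussian, mapping
# rho in [0,1]^2 back to a sample z. (Illustrative only.)
import numpy as np
from scipy.stats import norm

def inverse_conditional_marginal_cdf(rho, mu, cov):
    z = np.empty(2)
    # F_1 is the marginal CDF of z_1.
    z[0] = norm.ppf(rho[0], loc=mu[0], scale=np.sqrt(cov[0, 0]))
    # F_2 is the CDF of z_2 conditioned on the already-inverted z_1.
    cond_mean = mu[1] + cov[1, 0] / cov[0, 0] * (z[0] - mu[0])
    cond_var = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
    z[1] = norm.ppf(rho[1], loc=cond_mean, scale=np.sqrt(cond_var))
    return z

rho = np.random.uniform(size=2)                  # rho ~ U[0,1]^2
mu = np.zeros(2)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
print(inverse_conditional_marginal_cdf(rho, mu, cov))
\end{verbatim}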
|
|
|
|
|
|
|
Using integration-by-substitution, we can compute the gradient of the ELBO by taking the expectation of a uniform random variable~$\rho$ on~$[0, 1]^n$, and using~$\mathbf{\G}^{-1}_{q(z|x, \phi)}$ to transform~$\rho$ back to the element of~$z$ on which~$p(x | z, \theta)$ is conditioned. To perform integration-by-substitution, we will require the determinant of the Jacobian of $\mathbf{\G}^{-1}$.
|
|
|
The derivative of a CDF is the probability density function at the selected point, and $\G_j$ is a simple CDF when we hold fixed the variables~$x_{i < j}$ on which it is conditioned, so using the inverse function theorem we find: |
|
\begin{align*} |
|
\frac{\partial \G_j^{-1}(\mathbf{\rho})}{\partial \rho_j} &= \frac{1}{\G_j'(\G_j^{-1}(\mathbf{\rho}))} \\ |
|
&= \frac{1}{p \left(x_j = \G_j^{-1}(\mathbf{\rho}) | x_{i<j} \right)}
|
\end{align*} |
|
where~$\mathbf{\rho}$ is a vector, and~$\G_j'$ denotes~$\frac{\partial \G_j}{\partial x_j}$.
|
The Jacobian matrix~$\frac{\partial \mathbf{\G}}{\partial \mathbf{x}}$ is triangular, since the earlier conditional-marginal CDFs~$\G_j$ are independent of the value of the later~$x_k$, $j < k$, over which they are marginalized. |
|
Moreover, the inverse conditional-marginal CDFs have the same dependence structure as $\mathbf{\G}$, so the Jacobian of $\mathbf{\G}^{-1}$ is also triangular. The determinant of a triangular matrix is the product of the diagonal elements. |
|
|
|
Using these facts to perform a multivariate integration-by-substitution, we obtain: |
|
\begin{align} |
|
&\mathbb{E}_{q(z | x, \phi)} \left[ \log p(x | z, \theta) \right] |
|
= \int_{z} q(z | x, \phi) \cdot \log p(x | z, \theta) \nonumber \\ |
|
&\qquad= \int_{\rho = \mathbf{0}}^{\mathbf{1}} q\left(\mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho) | x, \phi \right) \cdot \log p\left(x | \mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho), \theta \right) \cdot \left| \det \left( \frac{\partial \mathbf{\G}_{q(z | x, \phi)}^{-1} (\rho)}{\partial \rho} \right) \right| \nonumber \\ |
|
&\qquad= \int_{\rho = \mathbf{0}}^{\mathbf{1}} q\left(\mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho) | x, \phi \right) |
|
\cdot \log p\left(x | \mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho), \theta \right) \cdot \left( \prod_j \frac{\partial \mathbf{\G}_{q_j(z_j | x, \phi)}^{-1} (\rho_j)}{\partial \rho_j} \right) \nonumber \\ |
|
&\qquad= \int_{\rho = \mathbf{0}}^{\mathbf{1}} \frac{q\left(\mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho) | x, \phi \right)}{ |
|
\prod_j q\left(z_j = \G_j^{-1}(\rho) | z_{i<j} \right)} |
|
\cdot \log p\left(x | \mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho), \theta \right) \nonumber \\ |
|
&\qquad= \int_{\rho = \mathbf{0}}^{\mathbf{1}} \log p \left( x | \mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho), \theta \right) \label{discrete-VAE-derivation} |
|
\end{align} |
|
The variable $\rho$ has dimensionality equal to that of $z$; $\mathbf{0}$ is the vector of all $0$s; $\mathbf{1}$ is the vector of all $1$s. |
|
|
|
The gradient with respect to $\phi$ is then easy to approximate stochastically: |
|
\begin{align} |
|
\frac{\partial}{\partial \phi} \mathbb{E}_{q(z | x, \phi)} \left[ \log p(x | z, \theta) \right] &\approx |
|
\frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p \left(x | \mathbf{\G}_{q(z|x, \phi)}^{-1} (\rho), \theta \right) \label{discrete-VAE-approx} \end{align} |
|
Note that if $q(z|x, \phi)$ is factorial (i.e., the product of independent distributions in each dimension~$z_j$), then the conditional-marginal CDFs $\G_j$ are just the marginal CDFs in each direction. However, even if $q(z|x, \phi)$ is not factorial, Equation~\ref{discrete-VAE-approx} still holds so long as $\mathbf{\G}$ is nevertheless defined to be the set of conditional-marginal CDFs of Equation~\ref{conditional-marginal-cdf}. |
|
|
|
|
|
\section{The difficulty of estimating gradients of the ELBO with REINFORCE} \label{REINFORCE-appendix} |
|
|
|
It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance \citep{paisley2012variational}. The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature \citep{mnih2014neural, williams1992simple, bengio2013estimating, mnih2016variational}: |
|
\begin{align} |
|
\frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x, \phi)} \left[ \log p(x|z, \theta) \right] |
|
&= \mathbb{E}_{q(z|x, \phi)} \left[ \left[ \log p(x |z, \theta) - B(x) \right] \cdot \frac{\partial}{\partial \phi} \log q(z | x, \phi) \right] \nonumber \\ |
|
&\approx \frac{1}{N} \sum_{z \sim q(z | x, \phi)} \left( \left[ \log p(x |z, \theta) - B(x) \right] \cdot \frac{\partial}{\partial \phi} \log q(z | x, \phi) \right) \label{REINFORCE-equation} |
|
\end{align} |
|
where $B(x)$ is a (possibly input-dependent) baseline, which does not affect the gradient, but can reduce the variance of a stochastic estimate of the expectation. |
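For concreteness, the following Python sketch (ours, not the authors' implementation) instantiates the estimator of Equation~\ref{REINFORCE-equation} for a factorial Bernoulli approximating posterior parameterized by logits; the toy decoder \texttt{log\_p} and the constant baseline are assumptions made for the example.
\begin{verbatim}
# Sketch of the REINFORCE estimator with a baseline, for a factorial
# Bernoulli q(z|x, phi) with logits phi. (Illustrative only.)
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def reinforce_grad(phi, log_p, baseline, n_samples=1000):
    q = sigmoid(phi)                               # q(z_i = 1)
    grad = np.zeros_like(phi)
    for _ in range(n_samples):
        z = (np.random.uniform(size=phi.shape) < q).astype(float)
        score = z - q                              # d/dphi log q(z|phi)
        grad += (log_p(z) - baseline) * score
    return grad / n_samples

# Toy decoder that prefers z close to a fixed target pattern.
target = np.array([1.0, 0.0, 1.0])
log_p = lambda z: -np.sum((z - target) ** 2)
print(reinforce_grad(np.zeros(3), log_p, baseline=-1.0))
\end{verbatim}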
|
|
|
In REINFORCE, $\frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x, \phi)} \left[ \log p(x|z, \theta) \right]$ is effectively estimated by something akin to a finite difference approximation to the derivative. The autoencoding term is a function of the conditional log-likelihood $\log p(x|z,\theta)$, composed with the approximating posterior $q(z|x,\phi)$, which determines the value of $z$ at which $p(x|z,\theta)$ is evaluated. However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the conditional log-likelihood is evaluated at many different points \mbox{$z \sim q(z | x, \phi)$,} and a weighted sum of these values is used to approximate the gradient, just like in the finite difference approximation. |
|
|
|
|
|
Equation~\ref{REINFORCE-equation} of REINFORCE captures much less information about $p(x | z, \theta)$ per sample than Equation~\ref{eq:vae-sampling} of the variational autoencoder, which actively makes use of the gradient. In particular, the change of $p(x | z, \theta)$ in some direction $\vec{d}$ can only affect the REINFORCE gradient estimate if a sample is taken with a component in direction $\vec{d}$. In a $D$-dimensional latent space, at least $D$ samples are required to capture the variation of $p(x | z, \theta)$ in all directions; fewer samples span a smaller subspace. Since the latent representation commonly consists of dozens of variables, the REINFORCE gradient estimate can be much less efficient than one that makes direct use of the gradient of $p(x | z, \theta)$. Moreover, we will show in Section~\ref{results-section} that, when the gradient is calculated efficiently, hundreds of latent variables can be used effectively. |
|
|
|
|
|
|
|
\section{Augmenting discrete latent variables with continuous latent variables} \label{augment-discrete-with-continuous-section} |
|
|
|
Intuitively, variational autoencoders break the encoder\footnote{Since the approximating posterior $q(z | x, \phi)$ maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood $p(x | z, \theta)$ maps each configuration of the latent variables to a distribution over the input space, it is called the decoder.} |
|
distribution into ``packets'' of probability of infinitesimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region $r_i < \rho_i < r_i + \delta$ for all~$i$ in Equation~\ref{discrete-VAE-derivation}, and the expectation is taken over these packets. There are more packets in regions of high probability, so high-probability values are more likely to be selected. More rigorously, $\mathbf{\G}_{q(z|x, \phi)}(\zeta)$ maps intervals of high probability to larger spans of $0 \leq \rho \leq 1$, so a randomly selected $\rho \sim U[0,1]$ is more likely to be mapped to a high-probability point by $\mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho)$.
|
|
|
|
|
|
|
As the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, $\zeta = \mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho)$ is a function of $\phi$, whereas the probability mass associated with a region of $\rho$-space is constant by definition. |
|
So long as $\mathbf{\G}_{q(z | x, \phi)}^{-1}$ exists and is differentiable, a small change in $\phi$ will correspond to a small change in the location of each packet. |
|
This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space. |
|
|
|
In contrast, REINFORCE (Equation~\ref{REINFORCE-equation}) breaks the latent representation into segments of infinitesimal but equal volume; e.g., $z_i \leq z_i' < z_i + \delta$ for all $i$~\citep{williams1992simple, mnih2014neural, bengio2013estimating}. The latent variables are also approximately constant within these segments, but the probability mass varies between them. Specifically, the probability mass of the segment $z \leq z' < z + \delta$ is proportional to $q(z | x, \phi)$.
|
|
|
Once a segment is selected in the latent space, its location is independent of the encoder and decoder. In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is fixed. Only the probability mass assigned to the segment is relevant. |
|
|
|
Although variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section~\ref{sec:spike-and-exponential}), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear). |
|
|
|
Nevertheless, Equation~\ref{discrete-VAE-approx} of the VAE can be understood in analogy to dropout~\citep{srivastava2014dropout} or standout~\citep{ba2013adaptive} regularization. |
|
Like dropout and standout, $\mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho)$ is an element-wise stochastic nonlinearity applied to a hidden layer. Since $\mathbf{\G}_{q(z | x, \phi)}^{-1}(\rho)$ selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and-Gaussian distribution of Section~\ref{sec:spike-and-gaussian} and let the standard deviation $\sigma$ go to zero. |
|
|
|
|
|
However, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation, the gradient of the decoder cannot be used to accurately estimate the change in the loss function, since the gradient only captures the effect of very small movements of the probability packet. |
|
|
|
To use discrete latent representations in the variational autoencoder framework, we must first transform to a continuous latent space, within which probability packets move smoothly. That is, we must compute Equation~\ref{discrete-VAE-approx} over a different distribution than the original posterior distribution. Surprisingly, we need not sacrifice the original discrete latent space, with its associated approximating posterior. Rather, we extend the encoder $q(z | x, \phi)$ and the prior $p(z | \theta)$ with |
|
a transformation to a continuous, auxiliary latent representation $\zeta$, and correspondingly make the decoder a function of this new continuous representation. By extending both the encoder and the prior in the same way, we avoid affecting the remaining KL~divergence in Equation~\ref{eq:original-VAE}.\footnote{Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous space to the decoder, since this does not change the space of the probability packets.}
|
|
|
The gradient is defined everywhere if we require that each point in the original latent space map to nonzero probability over the entire auxiliary continuous space. This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a function of their main argument, and thus are invertible. |
|
|
|
If we ignore the cases where some discrete latent variable has probability $0$ or $1$, we need only require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters $\phi$ of the encoder, $q(z | x, \phi)$, change, redistributing weight amongst the associated regions of the auxiliary continuous space. |
|
|
|
|
|
|
|
\section{Alternative transformations from discrete to continuous latent representations} |
|
|
|
The spike-and-exponential transformation from discrete latent variables $z$ to continuous latent variables $\zeta$ presented in Section~\ref{sec:spike-and-exponential} is by no means the only one possible. Here, we develop a collection of alternative transformations. |
|
|
|
\subsection{Mixture of ramps} \label{two-sided-mixture} |
|
|
|
As another concrete example, we consider a case where both $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$ are linear functions of $\zeta_i$: |
|
\begin{align*} |
|
r(\zeta_i | z_i = 0) &= |
|
\begin{cases} |
|
2 \cdot (1 - \zeta_i) , & \text {if } 0 \leq \zeta_i \leq 1 \\ |
|
0, & \text{otherwise} |
|
\end{cases} |
|
& F_{r(\zeta_i|z_i=0)}(\zeta') &= \left. 2 \zeta_i - \zeta_i^2 \right|_0^{\zeta'} = 2 \zeta' - {\zeta'}^2 \\ |
|
r(\zeta_i | z_i = 1) &= |
|
\begin{cases} |
|
2 \cdot \zeta_i, & \text {if } 0 \leq \zeta_i \leq 1 \\ |
|
0, & \text{otherwise} |
|
\end{cases} |
|
& F_{r(\zeta_i|z_i=1)}(\zeta') &= \left. \zeta_i^2 \right|_0^{\zeta'} = {\zeta'}^2 |
|
\end{align*} |
|
where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d \zeta$ is the CDF of probability distribution $p$ in the domain $\left[0,1\right]$. |
|
The CDF for $q(\zeta | x, \phi)$ as a function of $q(z = 1 | x, \phi)$ is: |
|
\begin{align} |
|
F_{q(\zeta | x, \phi)}(\zeta') &= (1 - q(z=1 | x, \phi)) \cdot \left( 2 \zeta' - {\zeta'}^2 \right) + q(z=1 | x, \phi) \cdot {\zeta'}^2 \nonumber \\ |
|
&= 2 \cdot q(z = 1| x, \phi) \cdot \left({\zeta'}^2 - \zeta' \right) + 2 \zeta' - {\zeta'}^2 . \label{cumulative-CDF} \end{align} |
|
|
|
We can calculate $\G^{-1}_{q(\zeta | x, \phi)}$ explicitly, |
|
using the substitutions $\G_{q(\zeta | x, \phi)} \rightarrow \rho$, $q(z = 1 | x, \phi) \rightarrow q$, and $\zeta' \rightarrow \zeta$ in Equation~\ref{cumulative-CDF} to simplify notation: |
|
\begin{align*} |
|
\rho &= 2 \cdot q \cdot (\zeta^2 - \zeta) + 2\zeta - \zeta^2 \\ |
|
0 &= (2q - 1) \cdot \zeta^2 + 2(1 - q) \cdot \zeta - \rho \\ |
|
\zeta &= \frac{2(q - 1) \pm \sqrt{4(1 - 2q + q^2) + 4 (2q - 1) \rho}}{2 (2q - 1)} \\ |
|
&= \frac{(q - 1) \pm \sqrt{q^2 + 2(\rho - 1)q + (1 - \rho)}}{2q - 1} |
|
\end{align*} |
|
if $q \not= \frac{1}{2}$; $\rho = \zeta$ otherwise. |
|
$\G^{-1}_{q(\zeta | x, \phi)}$ has the desired range $\left[0, 1 \right]$ if we choose |
|
\begin{align} |
|
\G^{-1}(\rho) &= \frac{(q - 1) + \sqrt{q^2 + 2(\rho - 1)q + (1 - \rho)}}{2q - 1} \nonumber \\ |
|
&= \frac{q-1 + \sqrt{(q-1)^2 + (2q-1) \cdot \rho}}{2q - 1} |
|
\label{probabilistic-encoder-two-sided} |
|
\end{align} |
|
if $q \neq \frac{1}{2}$, and $\G^{-1}(\rho) = \rho$ if $q = \frac{1}{2}$. |
|
We plot $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ as a function of $q$ for various values of $\rho$ in Figure~\ref{fig:inverse-cdf-mixture-of-ramps}. |
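A small Python sketch of Equation~\ref{probabilistic-encoder-two-sided} (our own check, with arbitrary example values) confirms that the chosen root keeps $\zeta$ in $\left[0, 1\right]$:
\begin{verbatim}
# Inverse CDF of the mixture-of-ramps transformation,
# Equation (probabilistic-encoder-two-sided). (Illustrative only.)
import numpy as np

def inverse_cdf_ramps(rho, q):
    if np.isclose(q, 0.5):
        return rho                                 # limiting case q = 1/2
    return (q - 1.0 + np.sqrt((q - 1.0) ** 2 + (2.0 * q - 1.0) * rho)) \
           / (2.0 * q - 1.0)

for q in (0.1, 0.5, 0.9):
    print(q, inverse_cdf_ramps(0.3, q))            # all outputs lie in [0, 1]
\end{verbatim}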
|
|
|
\begin{figure}[tbh] |
|
\centering |
|
\tikzsetnextfilename{fig-inverse-cdf-mixture-of-ramps} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
height=5cm, |
|
xlabel={$q(z = 1 | x, \phi)$}, |
|
ylabel={$\G_{q(\zeta | x, \phi)}^{-1}(\rho)$}, |
|
samples=200, |
|
xmin=0.0, |
|
xmax=1.0] |
|
\addplot[red,smooth,thick,domain=0:1.0] {(x - 1 + sqrt((x - 1)^2 + (2*x - 1)*0.2)) / (2*x - 1)} |
|
node[pos=0.9, xshift=-0.1cm, yshift=0.1cm](endof02){}; |
|
\node[left, red] at (endof02) {$\rho = 0.2$}; |
|
\addplot[green,smooth,thick,domain=0:1.0] {(x - 1 + sqrt((x - 1)^2 + (2*x - 1)*0.5)) / (2*x - 1)} |
|
node[pos=0.7, xshift=-0.1cm, yshift=0.1cm](endof05){}; |
|
\node[left, green] at (endof05) {$\rho = 0.5$}; |
|
\addplot[blue,smooth,thick,domain=0:1.0] {(x - 1 + sqrt((x - 1)^2 + (2*x - 1)*0.8)) / (2*x - 1)} |
|
node[pos=0.6, xshift=-0.1cm, yshift=0.1cm](endof08){}; |
|
\node[left, blue] at (endof08) {$\rho = 0.8$}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
\caption{Inverse CDF of the mixture of ramps transformation for $\rho \in \left\{0.2, 0.5, 0.8\right\}$} |
|
\label{fig:inverse-cdf-mixture-of-ramps} |
|
\end{figure} |
|
|
|
|
|
In Equation~\ref{probabilistic-encoder-two-sided}, $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ is quasi-sigmoidal as a function of $q(z = 1 | x, \phi)$. If $\rho \ll 0.5$, $\G^{-1}$ is concave-up; if $\rho \gg 0.5$, $\G^{-1}$ is concave-down; if $\rho \approx 0.5$, $\G^{-1}$ is sigmoid. In no case is $\G^{-1}$ extremely flat, so it does not kill gradients. In contrast, the sigmoid probability of $z$ inevitably flattens. |
|
|
|
|
|
|
|
\subsection{Spike-and-slab} \label{sec:spike-and-slab} |
|
|
|
We can also use the spike-and-slab transformation, which is consistent with sparse coding and has proven effective in other successful generative models \citep{courville2011unsupervised}:
|
\begin{align*} |
|
r(\zeta_i | z_i = 0) &= |
|
\begin{cases} |
|
\infty , & \text {if } \zeta_i = 0 \\ |
|
0, & \text{otherwise} |
|
\end{cases} |
|
& F_{r(\zeta_i|z_i=0)}(\zeta') &= 1 \\ |
|
r(\zeta_i | z_i = 1) &= |
|
\begin{cases} |
|
1, & \text {if } 0 \leq \zeta_i \leq 1 \\ |
|
0, & \text{otherwise} |
|
\end{cases} |
|
& F_{r(\zeta_i|z_i=1)}(\zeta') &= \left. \zeta_i \right|_0^{\zeta'} = {\zeta'} |
|
\end{align*} |
|
where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d \zeta$ is the cumulative distribution function (CDF) of probability distribution $p$ in the domain $\left[ 0, 1 \right]$. |
|
The CDF for $q(\zeta | x, \phi)$ as a function of $q(z = 1 | x, \phi)$ is: |
|
\begin{align*} |
|
F_{q(\zeta | x, \phi)}(\zeta') &= (1 - q(z=1 | x, \phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') |
|
+ q(z=1 | x, \phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') \\ |
|
&= q(z = 1| x, \phi) \cdot \left({\zeta'} - 1 \right) + 1 . |
|
\end{align*} |
|
|
|
We can calculate $\G^{-1}_{q(\zeta | x, \phi)}$ explicitly, using the substitution $q(z = 1 | x, \phi) \rightarrow q$ to simplify notation: |
|
\begin{align*} |
|
\G_{q(\zeta | x, \phi)}^{-1}(\rho) &= |
|
\begin{cases} |
|
\frac{\rho - 1}{q} + 1, & \text{if } \rho \geq 1-q \\ |
|
0, & \text{otherwise} |
|
\end{cases} \end{align*} |
|
We plot $\G_{q(\zeta | x, \phi)}^{-1}(\rho)$ as a function of $q$ for various values of $\rho$ in Figure~\ref{fig:inverse-cdf-spike-and-slab}. |
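The corresponding Python sketch for the spike-and-slab inverse CDF (again our own illustration, with an arbitrary value of $q$):
\begin{verbatim}
# Spike-and-slab inverse CDF: a spike at zero with mass 1 - q and a
# uniform slab on [0, 1] with mass q. (Illustrative only.)
import numpy as np

def inverse_cdf_spike_and_slab(rho, q):
    return np.where(rho >= 1.0 - q, (rho - 1.0) / q + 1.0, 0.0)

print(inverse_cdf_spike_and_slab(np.array([0.2, 0.5, 0.8]), q=0.4))
# -> [0.  0.  0.5]
\end{verbatim}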
|
|
|
\begin{figure}[tbh] |
|
\centering |
|
\tikzsetnextfilename{fig-inverse-cdf-spike-and-slab} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
height=5cm, |
|
xlabel={$q(z = 1 | x, \phi)$}, |
|
ylabel={$\G_{q(\zeta | x, \phi)}^{-1}(\rho)$}, |
|
samples=200, |
|
xmin=0.0, |
|
xmax=1.0] |
|
\addplot[red,smooth,thick,domain=0.0:1.0] {((0.2 - 1.0) / x + 1) * (0.2 > 1 - x)} |
|
node[pos=0.9, xshift=-0.1cm](endof02){}; |
|
\node[left, red] at (endof02) {$\rho = 0.2$}; |
|
\addplot[green,smooth,thick,domain=0.0:1.0] {((0.5 - 1.0) / x + 1) * (0.5 > 1 - x)} |
|
node[pos=0.7, xshift=-0.1cm](endof05){}; |
|
\node[left, green] at (endof05) {$\rho = 0.5$}; |
|
\addplot[blue,smooth,thick,domain=0.0:1.0] {((0.8 - 1.0) / x + 1) * (0.8 > 1 - x)} |
|
node[pos=0.6, xshift=-0.1cm](endof08){}; |
|
\node[left, blue] at (endof08) {$\rho = 0.8$}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
\caption{Inverse CDF of the spike-and-slab transformation for $\rho \in \left\{0.2, 0.5, 0.8\right\}$} |
|
\label{fig:inverse-cdf-spike-and-slab} |
|
\end{figure} |
|
|
|
\subsection{Engineering effective smoothing transformations} \label{sec:engineering-smoothing-transformations} |
|
|
|
If the smoothing transformation is not chosen appropriately, the contribution of low-probability regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we find: |
|
\begin{align} |
|
\frac{\partial}{\partial \theta} F( F^{-1} (\rho)) &= |
|
\left. \frac{\partial F}{\partial \theta} \right|_{F^{-1}(\rho)} + \left. \frac{\partial F}{\partial z} \right|_{F^{-1}(\rho)} \cdot \frac{\partial}{\partial \theta} F^{-1}(\rho) = 0 \nonumber \\ |
|
p(z) \cdot \frac{\partial}{\partial \theta} F^{-1}(\rho) &= \left. -\frac{\partial F}{\partial \theta} \right|_z , \label{eq:variance-of-gradients} |
|
\end{align} |
|
where $z = F^{-1}(\rho)$. |
|
Consider the case where $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$ are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard deviations apart. For values of $\zeta_i$ between the two modes, $F(\zeta_i) \approx q(z_i = 0 | x, \phi)$, assuming without loss of generality that the mode corresponding to $z_i = 0$ occurs at a smaller value of $\zeta_i$ than that corresponding to $z_i = 1$. As a result, $\frac{\partial F}{\partial q} \approx 1$ between the two modes, and $\frac{\partial F^{-1}}{\partial q} \approx \frac{1}{r(\zeta_i)}$ even if $r(\zeta_i) \approx 0$. In this case, the stochastic estimates of the gradient in Equation~\ref{discrete-VAE-approx-zeta}, which depend upon $\frac{\partial F^{-1}}{\partial q}$, have large variance.
|
|
|
These high-variance gradient estimates arise because $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$ are too well separated, and the resulting smoothing transformation is too sharp. Such disjoint smoothing transformations are analogous to a sigmoid transfer function $\sigma(c \cdot x)$, where $\sigma$ is the logistic function and $c \rightarrow \infty$. |
|
The smoothing provided by the continuous random variables $\zeta$ is only effective if there is a region of meaningful overlap between $r(\zeta | z=0)$ and $r(\zeta | z=1)$. In particular, |
|
$r(\zeta_i | z_i=0) + r(\zeta_i | z_i = 1) \gg 0$ for all $\zeta_i$ between the modes of $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$, so that $p(z)$ remains moderate in Equation~\ref{eq:variance-of-gradients}.
|
In the spike-and-exponential distribution described in Section~\ref{sec:spike-and-exponential}, this overlap can be ensured by fixing or bounding $\beta$. |
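The following Python sketch (our own numerical illustration, not an experiment from the paper) makes the effect concrete: for a mixture of two unit-variance Gaussians, a small change in the mixture weight $q$ barely moves the inverse CDF when the modes overlap, but moves it enormously when they are far apart.
\begin{verbatim}
# Numerically compare d F^{-1} / d q for overlapping vs. well-separated
# mixture components. (Illustrative only.)
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def mixture_cdf(zeta, q, sep):
    return (1.0 - q) * norm.cdf(zeta, loc=0.0) + q * norm.cdf(zeta, loc=sep)

def inverse_cdf(rho, q, sep):
    return brentq(lambda z: mixture_cdf(z, q, sep) - rho, -50.0, 50.0 + sep)

dq = 1e-3
for sep in (2.0, 20.0):        # overlapping vs. far-separated modes
    jump = inverse_cdf(0.5, 0.5 + dq, sep) - inverse_cdf(0.5, 0.5, sep)
    print(sep, jump / dq)      # the separated case is orders of magnitude larger
\end{verbatim}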
|
|
|
|
|
|
|
|
|
|
|
|
|
\section{Transformations from discrete to continuous latent representations that depend upon the input} \label{input-dependent-tranformation-section} |
|
|
|
It is not necessary to define the transformation from discrete to continuous latent variables in the approximating posterior, $r(\zeta | z)$, to be independent of the input $x$. In the true posterior distribution, |
|
$p(\zeta | z, x) \approx p(\zeta | z)$ only if $z$ already captures most of the information about $x$ and $p(\zeta | z, x)$ changes little as a function of $x$, since |
|
\begin{equation*} |
|
p(\zeta | z) = \int_x p(\zeta, x | z) = \int_x p(\zeta | z, x) \cdot p(x | z) . |
|
\end{equation*} |
|
This is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution. |
|
To address this, we can define: |
|
\begin{align*} |
|
q(\zeta, z | x, \phi) &= q(z | x, \phi) \cdot q(\zeta | z, x, \phi) \\ |
|
p(\zeta, z | \theta) &= p(\zeta | z) \cdot p(z | \theta) |
|
\end{align*} |
|
This leads to an evidence lower bound that resembles that of Equation~\ref{eq:original-VAE}, but adds an extra term: |
|
\begin{align} |
|
\mathcal{L}_{VAE}(x, \theta, \phi) &= \log p(x | \theta) - \KL\left[q(z, \zeta | x, \phi) || p(z, \zeta | x, \theta) \right] \nonumber \\ |
|
&= \log p(x | \theta) - \KL\left[q(\zeta | z, x, \phi) \cdot q(z | x, \phi) || p(\zeta | z, x, \theta) \cdot p(z | x, \theta) \right] \nonumber \\ |
|
&= \sum_z \int_\zeta q(\zeta | z, x, \phi) \cdot q(z | x, \phi) \cdot |
|
\log \left[ \frac{p(x | \zeta, \theta) \cdot p(\zeta | z, \theta) \cdot p(z | \theta)}{q(\zeta | z, x, \phi) \cdot q(z | x, \phi)} \right] \nonumber \\ |
|
&= \mathbb{E}_{q(\zeta | z, x, \phi) \cdot q(z | x, \phi)} \left[ \log p(x | \zeta, \theta) \right] - \KL \left[ q(z | x, \phi) || p(z | \theta) \right] \nonumber \\ |
|
&\qquad - \sum_z q(z | x, \phi) \cdot \KL \left[ q(\zeta | z, x, \phi) || p(\zeta | z) \right] . \label{vae-loss-input-dependent-zeta} |
|
\end{align} |
|
The extension to hierarchical approximating posteriors proceeds as in sections~\ref{sec:gradient-of-KL-divergence} and~\ref{sec:continuous-hierarchy}. |
|
|
|
If both $q(\zeta | z, x, \phi)$ and $p(\zeta | z)$ are Gaussian, then their KL~divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. However, while the gradients of this KL~divergence are easy to calculate when conditioned on $z$, the gradients with respect to $q(z | x, \phi)$ in the new term seem to force us into a REINFORCE-like approach (c.f. Equation~\ref{REINFORCE-equation}):
|
\begin{equation} |
|
\sum_z \frac{\partial q(z|x,\phi)}{\partial \phi} \cdot \KL \left[ q(\zeta | z, x, \phi) || p(\zeta | z) \right] = |
|
\mathbb{E}_{q(z|x, \phi)} \left[ \KL\left[ q(\zeta | z, x, \phi) || p(\zeta | z) \right] \cdot \frac{\partial \log q(z | x, \phi)}{\partial \phi} \right] . \label{new-term} |
|
\end{equation} |
|
The reward signal is now $\KL \left[ q(\zeta | z, x, \phi) || p(\zeta | z) \right]$ rather than $\log p(x | z, \theta)$, but the effect on the variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function. |
|
|
|
However, whereas REINFORCE is high-variance because it approximates the expectation with samples, we can perform the expectation in Equation~\ref{new-term} analytically, without injecting any additional variance.
|
Specifically, if $q(z | x, \phi)$ and $q(\zeta | z, x, \phi)$ are factorial, with $q(\zeta_i | z_i, x, \phi)$ only dependent on $z_i$, then $\KL \left[q(\zeta | z, x, \phi) || p(\zeta | z) \right]$ decomposes into a sum of the KL~divergences over each variable, as does $\frac{\partial \log q(z | x, \phi)}{\partial \phi}$. The expectation of all terms in the resulting product of sums is zero except those of the form $\mathbb{E} \left[ \KL \left[ q_i || p_i \right] \cdot \frac{\partial \log q_i}{\partial \phi} \right]$, due to the identity explained in Equation~\ref{expected-grad-log-simplification}. We then use the reparameterization trick to eliminate all hierarchical layers before the current one, and marginalize over each $z_i$. As a result, we can compute the term of Equation~\ref{new-term} by backpropagating |
|
\begin{equation*} |
|
\KL\left[ q(\zeta | z=1, x, \phi) || p(\zeta | z=1) \right] - \KL\left[ q(\zeta | z=0, x, \phi) || p(\zeta | z=0) \right]
|
\end{equation*} |
|
into $q(z | x, \phi)$. |
|
This is especially simple if $q(\zeta_i | z_i, x, \phi) = p(\zeta_i | z_i)$ when $z_i = 0$, since then |
|
\mbox{$\KL\left[ q(\zeta | z=0, x, \phi) || p(\zeta | z=0) \right] = 0$}. |
|
|
|
|
|
\subsection{Spike-and-Gaussian} \label{sec:spike-and-gaussian} |
|
|
|
We might wish $q(\zeta_i | z_i, x, \phi)$ to be a separate Gaussian for both values of the binary $z_i$. However, it is difficult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can be inverted piecewise:
|
\begin{align*} |
|
q(\zeta_i | z_i=0, x, \phi) &= \delta(\zeta_i) |
|
& F_{q(\zeta_i|z_i=0, x, \phi)}(\zeta_i) &= H(\zeta_i) = |
|
\begin{cases} |
|
0 , & \text{if } \zeta_i < 0 \\ |
|
1, & \text{otherwise} |
|
\end{cases} \\ |
|
q(\zeta_i | z_i=1, x, \phi) &= \mathcal{N}\left( \mu_{q,i}(x, \phi), \sigma_{q,i}^2(x, \phi) \right) |
|
& F_{q(\zeta_i|z_i=1, x, \phi)}(\zeta_i) &= \frac{1}{2} \left[ 1 + \erf\left( \frac{\zeta_i - \mu_{q,i}(x, \phi)}{\sqrt{2} \sigma_{q,i}(x, \phi)} \right) \right] |
|
\end{align*} |
|
where $\mu_q(x, \phi)$ and $\sigma_q(x, \phi)$ are functions of $x$ and $\phi$. We use the substitutions $q(z_i=1 | x, \phi) \rightarrow q_i$, $\mu_{q,i}(x, \phi) \rightarrow \mu_{q,i}$, and $\sigma_{q,i}(x, \phi) \rightarrow \sigma_{q,i}$ in the sequel to simplify notation.
|
The prior distribution $p$ is similarly parameterized. |
|
|
|
We can now find the CDF for $q(\zeta | x, \phi)$ as a function of $q(z_i = 1 | x, \phi) \rightarrow q_i$:
|
\begin{align*} |
|
F_{q(\zeta | x, \phi)}(\zeta_i) &= \left( 1 - q_i \right) \cdot H(\zeta_i) \\ |
|
&\qquad + \frac{q_i}{2} \cdot \left[ 1 + \erf\left( \frac{\zeta_i - \mu_{q,i}}{\sqrt{2} \sigma_{q,i}} \right) \right] |
|
\end{align*} |
|
Since $z_i = 0$ makes no contribution to the CDF until $\zeta_i = 0$, the value of $\rho$ at which $\zeta_i = 0$ is |
|
\begin{equation*} |
|
\rho_i^{step} = \frac{q_i}{2} \left[ 1 + \erf \left( \frac{ -\mu_{q,i}}{\sqrt{2} \sigma_{q,i}} \right) \right] |
|
\end{equation*} |
|
so: |
|
\begin{equation*} |
|
\zeta_i = |
|
\begin{cases} |
|
\mu_{q,i} + \sqrt{2} \sigma_{q,i} \cdot \erf^{-1} \left( \frac{2 \rho_i}{q_i} - 1 \right) , & \text{if } \rho_i < \rho_i^{step} \\ |
|
0 , & \text{if } \rho_i^{step} \leq \rho_i \leq \rho_i^{step} + (1 - q_i) \\ |
|
\mu_{q,i} + \sqrt{2} \sigma_{q,i} \cdot \erf^{-1} \left( \frac{2 (\rho_i - 1)}{q_i} + 1 \right) , & \text{otherwise} |
|
\end{cases} |
|
\end{equation*} |
|
|
|
Gradients are always evaluated for fixed choices of $\rho$, and gradients are never taken with respect to~$\rho$. As a result, expectations with respect to $\rho$ are invariant to permutations of $\rho$. Furthermore, |
|
\begin{equation*} |
|
\frac{2 \rho_i}{q_i} - 1 = \frac{2 (\rho_i' - 1)}{q_i} + 1 |
|
\end{equation*} |
|
where $\rho_i' = \rho_i + (1 - q_i)$. We can thus shift the delta spike to the beginning of the range of $\rho_i$, and use |
|
\begin{equation*} |
|
\zeta_i = |
|
\begin{cases} |
|
0 , & \text{if } \rho_i \leq 1 - q_i \\ |
|
\mu_{q,i} + \sqrt{2} \sigma_{q,i} \cdot \erf^{-1} \left( \frac{2 (\rho_i - 1)}{q_i} + 1 \right) , & \text{otherwise} |
|
\end{cases} |
|
\end{equation*} |
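A Python sketch (ours) of the shifted piecewise inverse CDF above; \texttt{erfinv} is evaluated only on the Gaussian branch to avoid spurious values where the spike applies, and the parameter values are arbitrary.
\begin{verbatim}
# Shifted spike-and-Gaussian inverse CDF: zeta_i = 0 with probability
# 1 - q_i, otherwise the Gaussian branch. (Illustrative only.)
import numpy as np
from scipy.special import erfinv

def inverse_cdf_spike_and_gaussian(rho, q, mu, sigma):
    zeta = np.zeros_like(rho)
    on = rho > 1.0 - q
    zeta[on] = mu + np.sqrt(2.0) * sigma * erfinv(2.0 * (rho[on] - 1.0) / q + 1.0)
    return zeta

rho = np.random.uniform(size=5)
print(inverse_cdf_spike_and_gaussian(rho, q=0.7, mu=0.0, sigma=1.0))
\end{verbatim}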
|
|
|
All parameters of the multivariate Gaussians should be trainable functions of $x$, and independent of $q$. The new term in Equation~\ref{vae-loss-input-dependent-zeta} is: |
|
\begin{align*} |
|
&\sum_z q(z | x, \phi) \cdot \KL \left[ q(\zeta | z, x, \phi) || p(\zeta | z) \right] = \\ |
|
&\hspace{1cm}\sum_{i} \Big( q(z_i=1 | x, \phi) \cdot \KL \left[ q(\zeta_i | z_i=1, x, \phi) || p(\zeta_i | z_i=1) \right] \\

&\hspace{1.2cm}+ \left(1 - q(z_i=1 | x, \phi) \right) \cdot \KL \left[ q(\zeta_i | z_i=0, x, \phi) || p(\zeta_i | z_i=0) \right] \Big)
|
\end{align*} |
|
If $z_i=0$, then $q(\zeta_i | z_i=0, x, \phi) = p(\zeta_i | z_i=0, \theta)$, and $\KL\left[q(\zeta_i | z_i=0, x, \phi) || p(\zeta_i | z_i = 0, \theta)\right] = 0$ as in Section~\ref{sec:continuous-variable}. |
|
The KL~divergence between two multivariate Gaussians with diagonal covariance matrices, with means $\mu_{p,i}$, $\mu_{q,i}$, and covariances $\sigma_{p,i}^2$ and $\sigma_{q,i}^2$, is |
|
\begin{equation*} |
|
\KL\left[q || p \right] = \sum_i \left( \log \sigma_{p,i} - \log \sigma_{q,i} + \frac{\sigma_{q,i}^2 + \left(\mu_{q,i} - \mu_{p,i} \right)^2}{2 \cdot \sigma_{p,i}^2} - \frac{1}{2} \right) |
|
\end{equation*} |
|
To train $q(z_i = 1 | x, \phi)$, we thus need to backpropagate |
|
$\KL \left[ q(\zeta_i | z_i=1, x, \phi) || p(\zeta_i | z_i=1) \right]$ into it. |
|
|
|
Finally, |
|
\begin{align*} |
|
\frac{\partial \KL[q||p]}{\partial \mu_{q,i}} &= \frac{\mu_{q,i} - \mu_{p,i}}{\sigma_{p,i}^2} \\ |
|
\frac{\partial \KL[q||p]}{\partial \sigma_{q,i}} &= -\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma_{p,i}^2}
|
\end{align*} |
|
so |
|
\begin{align*} |
|
\sum_z q(z|x, \phi) \cdot \frac{\partial}{\partial \mu_{q, i}} \KL\left[q || p \right] &= q(z_i = 1 | x, \phi) \cdot |
|
\frac{\mu_{q,i} - \mu_{p,i}}{\sigma_{p,i}^2} \\ |
|
\sum_z q(z|x, \phi) \cdot \frac{\partial}{\partial \sigma_{q, i}} \KL\left[q || p \right] &= q(z_i = 1 | x, \phi) \cdot |
|
\left( -\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma_{p,i}^2} \right) |
|
\end{align*} |
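As a sanity check (ours, with arbitrary parameter values), the closed-form KL and its gradient with respect to $\mu_{q,i}$ can be verified against a finite difference:
\begin{verbatim}
# Check the diagonal-Gaussian KL divergence and its mean-gradient from
# the displays above against a central finite difference. (Illustrative only.)
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    return np.sum(np.log(sigma_p) - np.log(sigma_q)
                  + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
                  - 0.5)

mu_q, sigma_q = np.array([0.3]), np.array([0.8])
mu_p, sigma_p = np.array([0.0]), np.array([1.0])

analytic = (mu_q - mu_p) / sigma_p ** 2
eps = 1e-6
numeric = (kl_diag_gaussians(mu_q + eps, sigma_q, mu_p, sigma_p)
           - kl_diag_gaussians(mu_q - eps, sigma_q, mu_p, sigma_p)) / (2 * eps)
print(analytic, numeric)       # agree to roughly 1e-9
\end{verbatim}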
|
|
|
|
|
|
|
|
|
For $p$, it is not useful to make the mean values of $\zeta$ adjustable for each value of $z$, since this is redundant with the parameterization of the decoder. With fixed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be one. |
|
|
|
|
|
|
|
\section{Computing the gradient of $\KL \left[ q(\zeta, z | x, \phi) || p(\zeta, z | \theta) \right]$} \label{sec:kl-gradient} |
|
|
|
|
|
The KL term of the ELBO (Equation~\ref{eq:original-VAE}) is not significantly affected by the introduction of additional continuous latent variables $\zeta$, so long as we use the same expansion $r(\zeta | z)$ for both the approximating posterior and the prior: |
|
\begin{align} |
|
\KL\left[q || p \right] &= |
|
\sum_z \int_{\zeta} \left(\prod_{1 \leq j \leq k} r(\zeta_j | z_j ) \cdot q(z_j | \zeta_{i<j}, x) \right) \cdot |
|
\log \left[ \frac{\prod_{1 \leq j \leq k} r(\zeta_j | z_j) \cdot q(z_j | \zeta_{i<j}, x)}{p(z) \cdot \prod_{1 \leq j \leq k} r(\zeta_j | z_j)} \right] \nonumber \\ |
|
&= \sum_z \int_{\zeta} \left( \prod_{1 \leq j \leq k} r(\zeta_j | z_j) \cdot q(z_j | \zeta_{i<j}, x) \right) \cdot \log \left[ \frac{\prod_{1 \leq j \leq k} q(z_j | \zeta_{i<j}, x)}{p(z)} \right] . |
|
\label{KL-component-ELBO} |
|
\end{align} |
|
|
|
|
|
The gradient of Equation~\ref{KL-component-ELBO} with respect to the parameters $\theta$ of the prior, $p(z|\theta)$, can be estimated stochastically using samples from the approximating posterior, $q(\zeta,z|x,\phi)$, and the true prior,~$p(z | \theta)$. |
|
When the prior is an RBM, defined by Equation~\ref{eq:RBM-distribution}, we find: |
|
\begin{align} |
|
-\frac{\partial}{\partial \theta} \KL\left[q || p \right] &= |
|
-\sum_{\zeta, z} q(\zeta, z | x, \phi) \cdot \frac{\partial E_p(z, \theta)}{\partial \theta} |
|
+ \sum_{z} p(z | \theta) \cdot \frac{\partial E_p(z, \theta)}{\partial \theta} \nonumber \\ |
|
&= -\mathbb{E}_{q(z_1|x, \phi)} \left[ \cdots \left[ \mathbb{E}_{q(z_k | \zeta_{i<k}, x, \phi)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \right] \right] |
|
+ \mathbb{E}_{p(z | \theta)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \label{KL-gradient-prior} |
|
\end{align} |
|
The final expectation with respect to $q(z_k | \zeta_{i<k}, x, \phi)$ can be performed analytically; all other expectations require samples from the approximating posterior. Similarly, for the prior, we must sample from the RBM, although Rao-Blackwellization can be used to marginalize half of the units. |
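For illustration, a Python sketch (ours) of the resulting stochastic estimate for an RBM energy $E_p(z) = -z^{\top} \J z - \h^{\top} z$; it ignores the bipartite structure of $\J$ and the analytic treatment of the final hierarchical expectation, and the array shapes are assumptions.
\begin{verbatim}
# Positive (posterior) phase minus negative (prior) phase estimate of the
# gradient of -KL with respect to the RBM parameters. (Illustrative only.)
import numpy as np

def rbm_prior_grads(z_post, z_prior):
    dW = z_post.T @ z_post / len(z_post) - z_prior.T @ z_prior / len(z_prior)
    db = z_post.mean(axis=0) - z_prior.mean(axis=0)
    return dW, db

# Hypothetical binary samples, shape (n_samples, n_units).
z_post = (np.random.uniform(size=(32, 8)) < 0.5).astype(float)
z_prior = (np.random.uniform(size=(32, 8)) < 0.5).astype(float)
dW, db = rbm_prior_grads(z_post, z_prior)
\end{verbatim}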
|
|
|
|
|
\subsection{Gradient of the entropy with respect to $\phi$} |
|
|
|
In contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. We break $\KL \left[q || p \right]$ into two terms, the negative entropy~$\sum_{z, \zeta} q \log q$, and the cross-entropy~$-\sum_{z, \zeta} q \log p$, and compute their gradients separately. |
|
|
|
We can regroup the negative entropy term of the KL~divergence so as to use the reparameterization trick to backpropagate through $\prod_{i < j} q(z_j | \zeta_{i<j}, x)$: |
|
\begin{align} |
|
-H(q) &= \sum_z \int_{\zeta} \left( \prod_{1 \leq j \leq k} r(\zeta_j | z_j) \cdot q(z_j | \zeta_{i<j}, x) \right) \cdot \log \left[ \prod_{1 \leq j \leq k} q(z_j|\zeta_{i<j}, x) \right] \nonumber \\ |
|
&= \sum_z \int_{\zeta} \left( \prod_j r(\zeta_j | z_j) \cdot q(z_j | \zeta_{i<j}, x) \right) \cdot \left( \sum_j \log q(z_j | \zeta_{i<j}, x) \right) \nonumber \\ |
|
&= \sum_j \sum_z \int_{\zeta} \left( \prod_{i \leq j} r(\zeta_i | z_i) \cdot q(z_i | \zeta_{h<i}, x) \right) \cdot \log q(z_j | \zeta_{i<j}, x) \nonumber \\ |
|
&= \sum_j \mathbb{E}_{q(\zeta_{i<j}, z_{i<j} | x, \phi)} \left[ \sum_{z_j} q(z_j | \zeta_{i<j}, x) \cdot \log q(z_j | \zeta_{i<j}, x) \right] \nonumber \\ |
|
&= \sum_j \mathbb{E}_{\rho_{i < j}} \left[ \sum_{z_j} q(z_j | \rho_{i<j}, x) \cdot \log q(z_j | \rho_{i<j}, x) \right] \label{entropy-term} |
|
\end{align} |
|
where indices $i$ and $j$ denote hierarchical groups of variables. The probability $q(z_j | \rho_{i<j}, x)$ is evaluated analytically, whereas all variables $z_{i < j}$ and $\zeta_{i<j}$ are implicitly sampled stochastically via~$\rho_{i < j}$. |
|
|
|
We wish to take the gradient of $-H(q)$ in Equation~\ref{entropy-term}. |
|
Using the identity: |
|
\begin{align} |
|
\mathbb{E}_q \left[ c \cdot \frac{\partial}{\partial \phi} \log q \right] &= c \cdot \sum_z q \cdot \left( \frac{\partial q}{\partial \phi} / q \right) |
|
= c \cdot \frac{\partial}{\partial \phi} \left(\sum_z q \right) = 0 \label{expected-grad-log-simplification} |
|
\end{align} |
|
for any constant $c$, |
|
we can eliminate the gradient of $\log q_{j | \rho_{i < j}}$ in $-\frac{\partial H(q)}{\partial \phi}$, and obtain: |
|
\begin{align*} |
|
-\frac{\partial}{\partial \phi} H(q) &= \sum_j \mathbb{E}_{\rho_{i < j}} \left[ \sum_{z_j} \left( \frac{\partial}{\partial \phi} q(z_j | \rho_{i<j}, x) \right) \cdot \log q(z_j | \rho_{i<j}, x) \right] . |
|
\end{align*} |
|
Moreover, we can eliminate any log-partition function in $\log q(z_j | \rho_{i<j}, x)$ by an argument analogous to Equation~\ref{expected-grad-log-simplification}.\footnote{ |
|
$\sum_{z} c \cdot \frac{\partial q}{\partial \phi} = c \cdot \frac{\partial}{\partial \phi} \sum_{z} q = 0$, where $c$ is the log partition function of $q(z_j | \rho_{i<j}, x)$.} |
|
By repeating this argument one more time, we can break $\frac{\partial}{\partial \phi} q(z_j | \rho_{i<j}, x)$ into its factorial components.\footnote{ |
|
$\frac{\partial}{\partial \phi} \prod_i q_i = \sum_i \frac{\partial q_i}{\partial \phi} \cdot \prod_{j \not= i} q_j$, so the $q_{j \not= i}$ marginalize out of $\frac{\partial q_i}{\partial \phi} \cdot \prod_{j \not=i} q_j$ when multiplied by $\log q_i$. When $\frac{\partial q_i}{\partial \phi} \cdot \prod_{j \not= i} q_j$ is multiplied by one of the $\log q_{j \not= i}$, the sum over $z_i$ can be taken inside the $\frac{\partial}{\partial \phi}$, and again $\frac{\partial}{\partial \phi} \sum_{z_i} q_i = 0$.} |
|
If $z_i \in \left\{0, 1 \right\}$, then using Equation~\ref{eq:approx-post-dist}, the gradient of the negative entropy reduces to:
|
\begin{align*} |
|
-\frac{\partial}{\partial \phi} H(q) &= |
|
\sum_j \mathbb{E}_{\rho_{i < j}} \left[ \sum_{\iota \in j} \sum_{z_{\iota}} q_{\iota}(z_{\iota}) \cdot \left(z_{\iota} \cdot \frac{\partial g_{\iota}}{\partial \phi} |
|
- \sum_{z_{\iota}} \left( q_{\iota}(z_{\iota}) \cdot z_{\iota} \cdot \frac{\partial g_{\iota}}{\partial \phi} \right) \right) \cdot \left( g_{\iota} \cdot z_{\iota} \right)\right] \\ |
|
&= \sum_j \mathbb{E}_{\rho_{i < j}} \left[ \frac{\partial g_j^{\top}}{\partial \phi} \cdot \left( g_j \odot \left[ q_j(z_j = 1) - q_j^2(z_j = 1) \right] \right) \right] |
|
\end{align*} |
|
where $\iota$ and $z_{\iota}$ correspond to single variables within the hierarchical groups denoted by $j$. In TensorFlow, it might be simpler to write: |
|
\begin{equation*} |
|
-\frac{\partial}{\partial \phi} H(q) = |
|
\mathbb{E}_{\rho_{i < j}} \left[ \frac{\partial q_j^{\top}(z_j = 1)}{\partial \phi} \cdot g_j \right] . |
|
\end{equation*} |
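As a single-unit sanity check (ours), with $q = \sigma(g)$ the expression above reduces to $\frac{\partial q}{\partial g} \cdot g$, which matches a finite difference of the negative entropy:
\begin{verbatim}
# For one Bernoulli unit with q = sigmoid(g), check the negative-entropy
# gradient q * (1 - q) * g against a finite difference. (Illustrative only.)
import numpy as np

def sigmoid(g):
    return 1.0 / (1.0 + np.exp(-g))

def neg_entropy(g):
    q = sigmoid(g)
    return q * np.log(q) + (1.0 - q) * np.log(1.0 - q)

g = 0.7
q = sigmoid(g)
analytic = q * (1.0 - q) * g
eps = 1e-6
numeric = (neg_entropy(g + eps) - neg_entropy(g - eps)) / (2.0 * eps)
print(analytic, numeric)       # agree to roughly 1e-10
\end{verbatim}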
|
|
|
|
|
\subsection{Gradient of the cross-entropy} |
|
|
|
The gradient of the cross-entropy with respect to the parameters $\phi$ of the approximating posterior does not depend on the partition function of the prior $\mathcal{Z}_p$, since: |
|
\begin{equation*} |
|
-\frac{\partial}{\partial \phi} \sum_z q \log p = |
|
\sum_z \frac{\partial q}{\partial \phi} \cdot E_p + \sum_z \frac{\partial q}{\partial \phi} \cdot \log \mathcal{Z}_p

= \sum_z \frac{\partial q}{\partial \phi} \cdot E_p
|
\end{equation*} |
|
by Equations~\ref{eq:RBM-distribution} and~\ref{expected-grad-log-simplification}, so we are left with the gradient of the average energy $E_p$. |
|
|
|
|
|
The remaining cross-entropy term is |
|
\begin{equation*} |
|
\sum_z q \cdot E_p = -\mathbb{E}_\rho \left[ z^{\top} \cdot \J \cdot z + \h^{\top} \cdot z \right] . |
|
\end{equation*} |
|
We can handle the term $\h^{\top} \cdot z$ analytically, since $z_i \in \left\{0, 1 \right\}$, and |
|
\begin{equation*} |
|
\mathbb{E}_\rho \left[ \h^{\top} \cdot z \right] = \h^{\top} \cdot \mathbb{E}_{\rho} \left[ q(z = 1) \right] . |
|
\end{equation*} |
|
The probability $q(z = 1)$ is a continuous function of the parameters $\phi$, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients:
|
\begin{equation*} |
|
\frac{\partial}{\partial \phi} \mathbb{E}_\rho \left[ \h^{\top} \cdot z \right] = \h^{\top} \cdot \mathbb{E}_{\rho} \left[ \frac{\partial}{\partial \phi} q(z = 1) \right] . |
|
\end{equation*} |
|
|
|
In contrast, each element of the sum |
|
\begin{equation*} |
|
z^{\top} \cdot \J \cdot z = \sum_{i,j} \J_{ij} \cdot z_i \cdot z_j |
|
\end{equation*} |
|
depends upon variables that are not usually in the same hierarchical level, so in general |
|
\begin{equation*} |
|
\mathbb{E}_\rho \left[ \J_{ij} z_i z_j \right] \neq \J_{ij} \mathbb{E}_{\rho} \left[ z_i \right] \cdot \mathbb{E}_{\rho} \left[ z_j \right]. |
|
\end{equation*} |
|
We might decompose this term into |
|
\begin{equation*} |
|
\mathbb{E}_\rho \left[ \J_{ij} z_i z_j \right] = \J_{ij} \cdot \mathbb{E}_{\rho_{k \leq i}} \left[ z_i \cdot \mathbb{E}_{\rho_{k > i}} \left[ z_j \right] \right] , |
|
\end{equation*} |
|
where without loss of generality $z_i$ is in an earlier hierarchical layer than $z_j$; however, it is not clear how to take the derivative of $z_i$, since it is a discontinuous function of $\rho_{k \leq i}$. |
|
|
|
\subsection{Naive approach} \label{naive-approach-section} |
|
|
|
The naive approach would be to take the gradient of the expectation using the gradient of log-probabilities over all variables: |
|
\begin{align} |
|
\frac{\partial}{\partial \phi} \mathbb{E} \left[ \J_{ij} z_i z_j \right] &= \mathbb{E}_q \left[ \J_{ij} z_i z_j \cdot \frac{\partial}{\partial \phi} \log q \right] \nonumber \\ |
|
&= \mathbb{E}_{q_1, q_{2|1}, \dots} \left[ \J_{ij} z_i z_j \cdot \sum_k \frac{\partial}{\partial \phi} \log q_{k | l < k} \right] \label{naive-kl-grad-reinforce} \\ |
|
&= \mathbb{E}_{q_1, q_{2|1}, \dots} \left[ \J_{ij} z_i z_j \cdot \sum_k \frac{1}{q_{k | l < k}} \cdot \frac{\partial q_{k | l < k}}{\partial \phi} \right] . \nonumber |
|
\end{align} |
|
For $\frac{\partial q_{k | l < k}}{\partial \phi}$, we can drop out terms involving only $z_{i < k}$ and $z_{j < k}$ that occur hierarchically before~$k$, since those terms can be pulled out of the expectation over $q_k$, and we can apply Equation~\ref{expected-grad-log-simplification}. However, for terms involving $z_{i > k}$ or $z_{j > k}$ that occur hierarchically after $k$, the expected value of $z_{i}$ or $z_j$ depends upon the chosen value of $z_k$. |
|
|
|
The gradient calculation in Equation~\ref{naive-kl-grad-reinforce} is an instance of the REINFORCE algorithm (Equation~\ref{REINFORCE-equation}). |
|
Moreover, the variance of the estimate is proportional to the number of terms (to the extent that the terms are independent). The number of terms contributing to each gradient $\frac{\partial q_{k | l < k}}{\partial \phi}$ grows quadratically with the number of units in the RBM.
|
We can introduce a baseline, as in NVIL~\citep{mnih2014neural}: |
|
\begin{equation*} |
|
\mathbb{E}_q \left[ \left(\J_{ij} z_i z_j - c(x) \right) \cdot \frac{\partial}{\partial \phi} \log q \right] , |
|
\end{equation*} |
|
but this approximation is still high-variance. |
|
|
|
|
|
|
|
|
|
|
|
\subsection{Decomposition of $\frac{\partial}{\partial \phi} \J_{ij} z_i z_j$ via the chain rule} \label{chain-rule-decomposition} |
|
|
|
When using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of sections~\ref{sec:spike-and-exponential}, \ref{sec:spike-and-slab}, and~\ref{sec:spike-and-gaussian}, we can decompose the gradient of $\mathbb{E}\left[ \J_{ij} z_i z_j \right]$ using the chain rule. Previously, we have considered $z$ to be a function of $\rho$ and $\phi$. We can instead formulate $z$ as a function of $q(z = 1)$ and $\rho$, where $q(z = 1)$ is itself a function of $\rho$ and $\phi$. Specifically,
|
\begin{equation} \label{z-as-function-of-q} |
|
z_i(q_i(z_i = 1), \rho_i) = |
|
\begin{cases} |
|
0 & \text{if } \rho_i < 1 - q_i(z_i = 1) = q_i(z_i = 0) \\ |
|
1 & \text{otherwise} . |
|
\end{cases} |
|
\end{equation} |
|
Using the chain rule, $\frac{\partial}{\partial \phi} z_i = \sum_j \frac{\partial z_i}{\partial q_j(z_j = 1)} \cdot \frac{\partial q_j(z_j = 1)}{\partial \phi}$, where $\frac{\partial z_i}{\partial q_j(z_j = 1)}$ holds all $q_{k \not= j}$ fixed, even though they all depend on the common variables $\rho$ and parameters $\phi$. |
|
We use the chain rule to differentiate with respect to $q(z = 1)$ since it allows us to pull part of the integral over $\rho$ inside the derivative with respect to $\phi$. |
|
In the sequel, we sometimes write $q$ in place of $q(z = 1)$ to minimize notational clutter. |
|
|
|
|
|
Expanding the desired gradient using the reparameterization trick and the chain rule, we find: |
|
\begin{align} |
|
\frac{\partial}{\partial \phi} \mathbb{E}_q \left[ \J_{ij} z_i z_j \right] &= \frac{\partial}{\partial \phi} \mathbb{E}_{\rho} \left[ \J_{ij} z_i z_j \right] \nonumber \\ |
|
&= \mathbb{E}_{\rho} \left[ \sum_k \frac{\partial \J_{ij} z_i z_j }{\partial q_k(z_k = 1)} |
|
\cdot \frac{\partial q_k(z_k = 1)}{\partial \phi} \right] . \label{J-term-gradient} |
|
\end{align} |
|
We can change the order of integration (via the expectation) and differentiation since |
|
\begin{equation*} |
|
\left| \J_{ij} z_i z_j \right| \leq \left| \J_{ij} \right| < \infty
|
\end{equation*} |
|
for all $\rho$ and bounded $\phi$ \citep{cheng2006differentiation}. |
|
Although $z(q, \rho)$ is a step function, and its derivative is a delta function, the integral (corresponding to the expectation with respect to $\rho$) of its derivative is finite. Rather than dealing with generalized functions directly, we apply the definition of the derivative, and push through the matching integral to recover a finite quantity. |
|
|
|
For simplicity, we pull the sum over $k$ out of the expectation in Equation~\ref{J-term-gradient}, and consider each summand independently. |
|
From Equation~\ref{z-as-function-of-q}, we see that $z_i$ is only a function of $q_i$, so all terms in the sum over $k$ in Equation~\ref{J-term-gradient} vanish except $k=i$ and $k=j$. |
|
Without loss of generality, we consider the term $k=i$; the term $k=j$ is symmetric. |
|
Applying the definition of the gradient to one of the summands, and then analytically taking the expectation with respect to $\rho_i$, we obtain: |
|
\begin{align*} |
|
&\mathbb{E}_{\rho} \left[ \frac{\partial \J_{ij} \cdot z_i(q, \rho) \cdot z_j(q, \rho) }{\partial q_i(z_i = 1)} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right] \\ |
|
&\hspace{0.2cm} = \mathbb{E}_{\rho} \left[ |
|
\lim_{\delta q_i(z_i = 1) \rightarrow 0} \frac{ \J_{ij} \cdot z_i(q + \delta q_i, \rho) \cdot z_j(q + \delta q_i, \rho) - \J_{ij} \cdot z_i(q, \rho) \cdot z_j(q, \rho) }{\delta q_i} |
|
\cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right] \\ |
|
&\hspace{0.2cm} = \mathbb{E}_{\rho_{k \not= i}} \left[ \left. \lim_{\delta q_i(z_i = 1) \rightarrow 0} \delta q_i \cdot \frac{ \J_{ij} \cdot 1 \cdot z_j(q, \rho) - \J_{ij} \cdot 0 \cdot z_j(q, \rho) }{\delta q_i} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right|_{\rho_i = q_i(z_i = 0)} \right] \\ |
|
&\hspace{0.2cm} = \mathbb{E}_{\rho_{k \not= i}} \left[ \left. \J_{ij} \cdot z_j(q, \rho) \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right|_{\rho_i = q_i(z_i = 0)} \right] . |
|
\end{align*} |
|
The third line follows from Equation~\ref{z-as-function-of-q}, since $z_i(q + \delta q_i, \rho)$ differs from $z_i(q, \rho)$ only in the region of $\rho$ of size $\delta q_i$ around $q_i(z_i = 0) = 1 - q_i(z_i = 1)$ where $z_i(q + \delta q_i, \rho) \not= z_i(q, \rho)$. Regardless of the choice of $\rho$, $z_j(q + \delta q_i, \rho) = z_j(q, \rho)$. |
|
|
|
The third line fixes $\rho_i$ to the transition between $z_i = 0$ and $z_i = 1$ at $q_i(z_i = 0)$. Since $z_i = 0$ implies $\zeta_i = 0$,\footnote{We chose the conditional distribution $r(\zeta_i | z_i = 0)$ to be a delta spike at zero.} and $\zeta$ is a continuous function of $\rho$, the third line implies that $\zeta_i = 0$. |
|
At the same time, since $q_i$ is only a function of $\rho_{k < i}$ from earlier in the hierarchy, the term $\frac{\partial q_i}{\partial \phi}$ is not affected by the choice of $\rho_i$.\footnote{In contrast, $z_i$ \emph{is} a function of $\rho_i$.} |
|
As noted above, due to the chain rule, the perturbation $\delta q_i$ has no effect on other $q_j$ by definition; the gradient is evaluated with those values held constant. |
|
On the other hand, $\frac{\partial q_i}{\partial \phi}$ is generally nonzero for all parameters governing hierarchical levels $k < i$. |
|
|
|
Since $\rho_i$ is fixed such that $\zeta_i = 0$, all units further down the hierarchy must be sampled consistent with this restriction. A sample from $\rho$ has $\zeta_i = 0$ if $z_i = 0$, which occurs with probability \mbox{$q_i(z_i = 0)$}.\footnote{It might also be the case that $\zeta_i = 0$ when $z_i = 1$, but with our choice of $r(\zeta | z)$, this has vanishingly small probability.} We can compute the gradient with a stochastic approximation by multiplying each sample by $1 - z_i$, so that terms with $\zeta_i \not= 0$ are ignored,\footnote{This takes advantage of the fact that $z_i \in \left\{0, 1 \right\}$.} and scaling up the gradient when $z_i = 0$ by $\frac{1}{q_i(z_i = 0)}$: |
|
\begin{equation} \label{J-gradient-z-0} |
|
\frac{\partial}{\partial \phi} \mathbb{E} \left[ \J_{ij} z_i z_j \right] = \mathbb{E}_{\rho} \left[ \J_{ij} \cdot \frac{1 - z_i}{1 - q_i(z_i = 1)} \cdot z_j \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right] . |
|
\end{equation} |
|
The term $\frac{1 - z_i}{1 - q_i}$ is not necessary if $j$ comes before $i$ in the hierarchy. |
|
|
|
While Equation~\ref{J-gradient-z-0} appears similar to REINFORCE, it is better understood as an importance-weighted estimate of an efficient gradient calculation. Just as a ReLU only has a nonzero gradient in the linear regime, $z_i$ effectively has a nonzero gradient only when $z_i = 0$, in which case $\frac{\partial z_i}{\partial \phi} \sim \frac{\partial q_i(z_i=1)}{\partial \phi}$. Unlike in REINFORCE, we do effectively differentiate the reward, $\J_{ij} z_i z_j$.
|
Moreover, the number of terms contributing to each gradient $\frac{\partial q_i(z_i = 1)}{\partial \phi}$ grows linearly with the number of units in an RBM, whereas it grows quadratically in the method of Section~\ref{naive-approach-section}. |
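A Monte Carlo check (ours, on a toy two-unit problem where the exact answer is available) confirms that the estimator of Equation~\ref{J-gradient-z-0} is unbiased when $z_i$ and $z_j$ are independent Bernoulli variables with logistic parameterization:
\begin{verbatim}
# Check Equation (J-gradient-z-0) against the exact gradient of
# E[J_ij z_i z_j] = J q_i q_j for two independent units. (Illustrative only.)
import numpy as np

def sigmoid(g):
    return 1.0 / (1.0 + np.exp(-g))

rng = np.random.default_rng(0)
J, g_i, g_j = 1.5, 0.2, -0.4
q_i, q_j = sigmoid(g_i), sigmoid(g_j)

n = 200000
z_i = (rng.uniform(size=n) >= 1.0 - q_i).astype(float)
z_j = (rng.uniform(size=n) >= 1.0 - q_j).astype(float)

dq_i = q_i * (1.0 - q_i)                          # d q_i / d g_i
estimate = np.mean(J * (1.0 - z_i) / (1.0 - q_i) * z_j * dq_i)
exact = J * dq_i * q_j                            # d/dg_i of J * q_i * q_j
print(estimate, exact)
\end{verbatim}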
|
|
|
|
|
|
|
|
|
|
|
\section{Motivation for building approximating posterior and prior hierarchies in the same order} \label{hierarchy-discussion-section} |
|
|
|
Intuition regarding the difficulty of approximating the posterior distribution over the latent variables given the data can be developed by considering sparse coding, an approach that uses a basis set of spatially localized filters~\citep{olshausen1996emergence}. The basis set is overcomplete, and there are generally many basis elements similar to any selected basis element. However, the sparsity prior pushes the posterior distribution to use only one element from each set of similar basis elements.
|
|
|
As a result, there is a large set of sparse representations of roughly equivalent quality for any single input: each basis element can individually be replaced with a similar one. However, once one basis element is changed, the optimal choice for the adjacent elements changes as well, so that the filters mesh properly without redundancy or gaps. The true posterior is thus highly correlated: even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements.
|
|
|
These equivalent representations can easily be disambiguated by the successive layers of the representation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by inferring the approximating posterior over the top-most latent layer first. Only then do we compute the conditional approximating posteriors of lower layers given a sample from the approximating posterior of the higher layers, breaking the symmetry between representations of similar quality. |
|
|
|
\section{Architecture} \label{sec:architecture} |
|
|
|
The stochastic approximation to the ELBO is computed via one pass down the approximating posterior (Figure~\ref{fig:full-graphical-model-approx-post}), sampling from each continuous latent layer $\zeta_i$ and $\hz_{m \geq 1}$ in turn; and another pass down the prior (Figure~\ref{fig:full-graphical-model-prior}), conditioned on the sample from the approximating posterior.
|
In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation~\ref{vae-loss-input-dependent-zeta-hierarchical}. |
|
The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL~divergence between the approximating posterior and true prior at each layer, through this differentiable structure. |
|
|
|
|
|
All hyperparameters were tuned via manual experimentation. |
|
Except in Figure~\ref{fig:ll-vs-rbm-params}, RBMs have $128$ units ($64$ units per side, with full bipartite connections between the two sides), with $4$ layers of hierarchy in the approximating posterior. We use $100$ iterations of block Gibbs sampling, with $20$ persistent chains per element of the minibatch, to sample from the prior in the stochastic approximation to Equation~\ref{eq:RBM-grad}. |
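The block Gibbs sampler exploits the bipartite structure of the RBM: each side is conditionally factorial given the other side. A minimal NumPy sketch of the persistent chains follows; the function and argument names are illustrative assumptions, and the standard RBM energy convention (negative couplings and biases in the exponent) is assumed.

\begin{verbatim}
import numpy as np

def persistent_block_gibbs(W, b_left, b_right, left, n_steps=100, rng=None):
    # W:       (n_left, n_right) couplings between the two sides of the RBM
    # b_left:  (n_left,)  biases of the left side
    # b_right: (n_right,) biases of the right side
    # left:    (n_chains, n_left) persistent states of the left side
    rng = np.random.default_rng() if rng is None else rng
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    for _ in range(n_steps):
        # Sample one side given the other, alternately; each side is
        # conditionally independent given the opposite side.
        right = (rng.random((left.shape[0], W.shape[1]))
                 < sigmoid(left @ W + b_right)).astype(float)
        left = (rng.random(left.shape)
                < sigmoid(right @ W.T + b_left)).astype(float)
    return left, right
\end{verbatim}

The returned states are stored and passed back in at the next parameter update, which is what makes the chains persistent.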
|
|
|
When using the hierarchy of continuous latent variables described in Section~\ref{sec:continuous-hierarchy}, discrete VAEs overfit if any component of the prior is overparameterized, as shown in Figure~\ref{fig:ll-overfitting-prior}. In contrast, a larger and more powerful approximating posterior generally did not reduce performance within the range examined, as in Figure~\ref{fig:ll-overfitting-approx-post}. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. We list the selected values in Table~\ref{tab:architectural-hyperparameters}.
|
All neural networks implementing components of the approximating posterior contain two hidden layers of $2000$ units. |
|
|
|
\begin{figure}[tbh] |
|
\centering |
|
\subfloat[Prior]{ |
|
\tikzsetnextfilename{fig-ll-overfitting-prior} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
width=0.5\textwidth, |
|
xlabel={Num hidden units per decoder layer}, |
|
ylabel={Log likelihood}, |
|
log ticks with fixed point, |
|
ymin=-88, ymax=-81.5] |
|
\addplot[color=blue,thick,mark=*] coordinates { |
|
(100, -82.24) |
|
(200, -82.03) |
|
(300, -82.22) |
|
(400, -82.22) |
|
(500, -82.56) |
|
}; |
|
\addplot[color=blue,dashed,thick,mark=*] coordinates { |
|
(100, -82.30) |
|
(200, -81.92) |
|
(300, -82.03) |
|
(400, -82.20) |
|
(500, -82.63) |
|
}; |
|
\addplot[color=red,thick,mark=*] coordinates { |
|
(100, -81.88) |
|
(200, -82.30) |
|
(300, -83.94) |
|
(400, -86.05) |
|
(500, -87.59) |
|
}; |
|
\addplot[color=red,dashed,thick,mark=*] coordinates { |
|
(100, -81.86) |
|
(200, -81.86) |
|
(300, -82.31) |
|
(400, -83.48) |
|
(500, -85.25) |
|
}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
\label{fig:ll-overfitting-prior} |
|
} |
|
\subfloat[Approximating posterior]{ |
|
\tikzsetnextfilename{fig-ll-overfitting-approx-post} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
width=0.5\textwidth, |
|
xlabel={Num hidden units per encoder layer}, |
|
log ticks with fixed point, |
|
yticklabels={,,}, |
|
ymin=-88, ymax=-81.5] |
|
\addplot[color=blue,thick,mark=*] coordinates { |
|
(500, -82.54) |
|
(1000, -82.64) |
|
(1500, -82.69) |
|
(2000, -82.68) |
|
}; |
|
\addplot[color=red,thick,mark=*] coordinates { |
|
(500, -82.14) |
|
(1000, -81.96) |
|
(1500, -81.95) |
|
(2000, -82.00) |
|
}; |
|
\addplot[color=green,thick,mark=*] coordinates { |
|
(500, -82.31) |
|
(1000, -82.14) |
|
(1500, -82.30) |
|
(2000, -82.20) |
|
}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
\label{fig:ll-overfitting-approx-post} |
|
} |
|
\caption{Log-likelihood on statically binarized MNIST versus the number of hidden units per neural network layer, in the prior (a) and the approximating posterior (b). The number of deterministic hidden layers in the networks parameterizing the prior~(a) or the approximating posterior~(b) is 1~(blue), 2~(red), or 3~(green). The number of deterministic hidden layers in the final network parameterizing $p(x|\hz)$ is 0~(solid) or 1~(dashed). All models use only $10$ layers of continuous latent variables, with no parameter sharing.}
|
\label{fig:ll-overfitting} |
|
\end{figure} |
|
|
|
|
|
\newcolumntype{P}[1]{>{\raggedleft\arraybackslash}p{#1}} |
|
|
|
\begin{table}[tbh] |
|
\centering |
|
\begin{tabular}{lP{1.5cm}P{1.5cm}P{1.5cm}P{1.5cm}} |
|
\toprule |
|
& Num layers & Vars per layer & Hids per prior layer & Param sharing \\ \cmidrule{2-5} |
|
MNIST (dyn bin) & 18 & 64 & 1000 & none \\ |
|
MNIST (static bin) & 20 & 256 & 2000 & 2 groups \\ |
|
Omniglot & 16 & 256 & 800 & 2 groups \\ |
|
Caltech-101 Sil & 12 & 80 & 100 & complete \\ |
|
\bottomrule |
|
\end{tabular} |
|
\caption{Architectural hyperparameters used for each dataset. Successive columns list the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. Smaller datasets require more regularization, and achieve optimal performance with a smaller prior.} |
|
\label{tab:architectural-hyperparameters} |
|
\end{table} |
|
|
|
On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each $p \left( \hz_m | \hz_{l < m}, \theta \right)$ and $p \left( x | \hz, \theta \right)$ is a function of $\sum_{l < m} \hz_l$, rather than of the concatenation $\left[\hz_0, \hz_1, \dots, \hz_{m-1} \right]$. Moreover, all $p \left( \hz_{m \geq 1} | \hz_{l < m}, \theta \right)$ share parameters. The RBM layer $\hz_0$ is rendered compatible with this parameterization via a trainable linear transformation $M \cdot \zeta$ of $\zeta$, where the number of rows of $M$ equals the number of variables in each $\hz_{m > 0}$. We refer to this architecture as complete recurrent parameter sharing.
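A sketch of complete recurrent parameter sharing in the prior follows; \texttt{shared\_layer} stands in for the single, reused network computing the parameters of each $p \left( \hz_m | \hz_{l < m}, \theta \right)$, and a Gaussian conditional is assumed purely for illustration.

\begin{verbatim}
import numpy as np

def shared_prior_pass(zeta, M, shared_layer, n_layers, rng=None):
    # zeta:         (d_rbm,) smoothed sample associated with the RBM layer
    # M:            (d, d_rbm) trainable linear map; M @ zeta has the same
    #               dimension d as each continuous layer hz_m
    # shared_layer: one network, reused at every level m >= 1, mapping the
    #               running sum to the parameters of p(hz_m | hz_{l<m})
    rng = np.random.default_rng() if rng is None else rng
    running_sum = M @ zeta            # the RBM layer enters only via M . zeta
    samples = []
    for _ in range(n_layers):
        mu, log_sigma = shared_layer(running_sum)
        hz_m = mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)
        samples.append(hz_m)
        running_sum = running_sum + hz_m   # condition on the sum, not the
                                           # concatenation, of earlier layers
    return samples
\end{verbatim}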
|
|
|
On datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the $n$-group architecture by dividing the continuous latent layers $\hz_{m \geq 1}$ into $n$ equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture, and the RBM layer $\hz_0$ is independently parameterized.
|
|
|
We use the spike-and-exponential transformation described in Section~\ref{sec:spike-and-exponential}. The exponent is a trainable parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength $20$ for $5$ epochs, and additional warm-up of strength $2$ on the RBM alone for $20$ epochs~\citep{raiko2007building, bowman2016generating, sonderby2016ladder}. |
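The warm-up can be written as a per-epoch weight on the corresponding KL term. The linear ramp below is an assumption made for illustration, since only the strengths and durations are specified above.

\begin{verbatim}
def warmup_weight(epoch, strength, duration):
    # Weight multiplying a KL term: ramps linearly (an assumed schedule
    # shape) from 1/strength up to 1 over `duration` epochs, then stays at 1.
    if epoch >= duration:
        return 1.0
    return 1.0 / strength + (1.0 - 1.0 / strength) * epoch / duration

# Corresponding to the text: warmup_weight(epoch, 20, 5) on the full KL
# term, and additionally warmup_weight(epoch, 2, 20) on the RBM KL term.
\end{verbatim}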
|
|
|
|
|
|
|
When $p(x | \hz)$ is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast, it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder $p(x|\hz)$, as in traditional VAEs. The former case can be reduced to something analogous to the latter case using the reparameterization trick. |
|
|
|
However, a VAE with a completely independent prior does not regularize the nonlinearity of the prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on the true posterior) be well-represented by the approximating posterior. Viewed another way, a completely independent prior requires the model to consist of many independent sources of variance, so the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows the data manifold to remain curled within a higher-dimensional ambient space, with the approximating posterior merely tracking its contortions. A higher-dimensional ambient space makes sense when modeling multiple classes of objects. For instance, the parameters characterizing limb positions and orientations for people have no analog for houses. |
|
|
|
|
|
|
|
\subsection{Estimating the log partition function} \label{sec:log-partition-function} |
|
|
|
\begin{figure}[tbh] |
|
\centering |
|
\subfloat[MNIST (dyn bin)]{ |
|
\tikzsetnextfilename{fig-log-partition-function-hist-mnist-dyn-bin} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
width=6.0cm, |
|
ylabel={Fraction of estimates} |
|
] |
|
\addplot[ |
|
hist={density, bins=40}, |
|
fill=blue!50] |
|
table [y index=0] {log_z_hist_mnist_dyn_bin.dat}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
} \qquad |
|
\subfloat[MNIST (static bin)]{ |
|
\tikzsetnextfilename{fig-log-partition-function-hist-mnist-static-bin} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
width=6.0cm |
|
] |
|
\addplot[ |
|
hist={density, bins=40}, |
|
fill=blue!50] |
|
table [y index=0] {log_z_hist_mnist_static_bin.dat}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
} |
|
|
|
\subfloat[Omniglot]{ |
|
\tikzsetnextfilename{fig-log-partition-function-hist-omniglot} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
width=6.0cm, |
|
xlabel={Log partition function estimate}, |
|
ylabel={Fraction of estimates} |
|
] |
|
\addplot[ |
|
hist={density, bins=40}, |
|
fill=blue!50] |
|
table [y index=0] {log_z_hist_omniglot.dat}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
} \qquad |
|
\subfloat[Caltech-101 Silhouettes]{ |
|
\tikzsetnextfilename{fig-log-partition-function-hist-caltech-101} |
|
\begin{tikzpicture} |
|
\begin{axis}[ |
|
width=6.0cm, |
|
xlabel={Log partition function estimate} |
|
] |
|
\addplot[ |
|
hist={density, bins=40}, |
|
fill=blue!50] |
|
table [y index=0] {log_z_hist_caltech_101.dat}; |
|
\end{axis} |
|
\end{tikzpicture} |
|
} |
|
\caption{Distribution of estimates of the log-partition function, using Bennett's acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST~(a), statically binarized MNIST~(b), Omniglot~(c), and Caltech-101 Silhouettes~(d).}
|
\label{fig:log-partition-function-hist} |
|
\end{figure} |
|
|
|
We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM ($\log \mathcal{Z}_p$ from Equation~\ref{eq:RBM-distribution}) from an importance-weighted computation analogous to that of~\citet{burda2016importance}. For this purpose, we estimate the log partition function using bridge sampling, a variant of Bennett's acceptance ratio method~\citep{bennett1976efficient, shirts2008statistically}, which produces unbiased estimates of the partition function. Interpolating distributions were of the form~$p(x)^\beta$, and were sampled with a parallel tempering routine~\citep{swendsen1986replica}. The set of smoothing parameters $\beta$ in $[0,1]$ was chosen to approximately equalize replica exchange rates at $0.5$. This standard criterion simultaneously keeps mixing times small and allows for robust inference. We make a conservative estimate of the burn-in period ($0.5$ of the total run time), and choose the total run length and the number of repeated experiments to achieve sufficient statistical accuracy in the log partition function. In Figure~\ref{fig:log-partition-function-hist}, we plot the distribution of independent estimates of the log partition function for a single model of each dataset. These estimates differ by no more than about $0.1$, indicating that the estimate of the log-likelihood should be accurate to within about $0.05$ nats.
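The replica-exchange step of the parallel tempering routine can be sketched as follows, writing the interpolating distributions as $p(x)^\beta \propto e^{-\beta E(x)}$ with $E$ the RBM energy. The function and argument names are illustrative, not the exact routine used.

\begin{verbatim}
import numpy as np

def replica_exchange_step(states, betas, energy, rng=None):
    # states: list of RBM configurations, one per replica
    # betas:  matching smoothing parameters in [0, 1], sorted
    # energy: RBM energy function, so that p(x)^beta ~ exp(-beta * energy(x))
    rng = np.random.default_rng() if rng is None else rng
    for k in range(len(betas) - 1):
        # Metropolis acceptance for swapping configurations k and k+1.
        delta = (betas[k] - betas[k + 1]) * (energy(states[k])
                                             - energy(states[k + 1]))
        if rng.random() < np.exp(min(0.0, delta)):
            states[k], states[k + 1] = states[k + 1], states[k]
    return states
\end{verbatim}

Between exchange attempts, each replica is updated by block Gibbs sampling at its own $\beta$, and the spacing of the $\beta$ values is adjusted so that the acceptance rate of these swaps is near $0.5$.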
|
|
|
|
|
\subsection{Constrained Laplacian batch normalization} \label{sec:laplacian-batch-normalization} |
|
|
|
Rather than traditional batch normalization~\citep{ioffe2015batch}, we base our batch normalization on the L1 norm. Specifically, we use: |
|
\begin{align*} |
|
\mathbf{y} &= \mathbf{x} - \overline{\mathbf{x}} \\ |
|
\mathbf{x}_{bn} &= \mathbf{y} / \left( \overline{\left| \mathbf{y} \right|} + \epsilon \right) \odot \mathbf{s} + \mathbf{o}, |
|
\end{align*} |
|
where $\mathbf{x}$ is a minibatch of scalar values, $\overline{\mathbf{x}}$ denotes the mean of $\mathbf{x}$ over the minibatch, $\odot$ indicates element-wise multiplication, $\epsilon$ is a small positive constant, $\mathbf{s}$ is a learned scale, and $\mathbf{o}$ is a learned offset. For the approximating posterior over the RBM units, we bound $2 \leq \mathbf{s} \leq 3$ and $-\mathbf{s} \leq \mathbf{o} \leq \mathbf{s}$. This helps ensure that each unit is active for some elements of every minibatch and inactive for others, so that all units are used.
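A direct NumPy transcription of these equations is given below; the value of $\epsilon$ and the use of clipping to enforce the bounds on $\mathbf{s}$ and $\mathbf{o}$ are assumptions, as the text does not specify them.

\begin{verbatim}
import numpy as np

def l1_batch_norm(x, s, o, eps=1e-4):
    # x: (batch, units) minibatch of pre-activations
    # s: (units,) learned scale; o: (units,) learned offset
    y = x - x.mean(axis=0, keepdims=True)
    x_bn = y / (np.abs(y).mean(axis=0, keepdims=True) + eps)
    return x_bn * s + o

# One possible way to enforce the stated bounds for the approximating
# posterior over the RBM units:
# s = np.clip(s_raw, 2.0, 3.0); o = np.clip(o_raw, -s, s)
\end{verbatim}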
|
|
|
|
|
\section{Comparison models} \label{sec:comparison-models} |
|
|
|
In Table~\ref{tab:mnist-omniglot-caltech}, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks \citep[DBN;][]{hinton2006fast}, reporting the results of~\citet{murray2009evaluating}; importance-weighted autoencoders~\citep[IWAE;][]{burda2016importance}; and ladder variational autoencoders~\citep[Ladder VAE;][]{sonderby2016ladder}. |
|
|
|
For the static MNIST binarization of~\citet{salakhutdinov2008quantitative}, we compare to Hamiltonian variational inference~\citep[HVI;][]{salimans2015markov}; the deep recurrent attentive writer~\citep[DRAW;][]{gregor2015draw}; the neural adaptive importance sampler with neural autoregressive distribution estimator~\citep[NAIS NADE;][]{du2015learning}; deep latent Gaussian models with normalizing flows~\citep[Normalizing flows;][]{rezende2015variational}; and the variational Gaussian process~\citep{tran2016variational}.
|
|
|
On Omniglot, we compare to the importance-weighted autoencoder~\citep[IWAE;][]{burda2016importance}; ladder variational autoencoder~\citep[Ladder VAE;][]{sonderby2016ladder}; and the restricted Boltzmann machine~\citep[RBM;][]{smolensky1986parallel} and deep belief network~\citep[DBN;][]{hinton2006fast}, reporting the results of~\citet{burda2015accurate}. |
|
|
|
Finally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder~\citep[IWAE;][]{burda2016importance}, reporting the results of~\citet{li2016variational}; reweighted wake-sleep with a deep sigmoid belief network~\citep[RWS SBN;][]{bornschein2015reweighted}; the restricted Boltzmann machine~\citep[RBM;][]{smolensky1986parallel}, reporting the results of~\citet{cho2013enhanced}; and the neural adaptive importance sampler with neural autoregressive distribution estimator~\citep[NAIS NADE;][]{du2015learning}. |
|
|
|
|
|
\section{Supplementary results} \label{sec:supplementary-results} |
|
|
|
To highlight the contribution of the various components of our generative model, we investigate performance on a selection of simplified models.\footnote{In all cases, we report the log-likelihood on statically binarized MNIST~\citep{salakhutdinov2008quantitative}, estimated with $10^4$ importance-weighted samples~\citep{burda2016importance}.} First, we remove the continuous latent layers. The resulting prior, depicted in Figure~\ref{fig:basic-graphical-model-prior}, consists of the bipartite Boltzmann machine (RBM), the smoothing variables $\zeta$, and a factorial Bernoulli distribution over the observed variables $x$ defined via a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of $-86.9$ with 128 RBM units and $-85.2$ with 200 RBM units.
|
|
|
Next, we further restrict the neural network defining the distribution over the observed variables $x$ given the smoothing variables $\zeta$ to consist of a linear transformation followed by a pointwise logistic nonlinearity, analogous to a sigmoid belief network \citep[SBN;][]{spiegelhalter1990sequential, neal1992connectionist}. This decreases the log-likelihood to $-92.7$ with 128 RBM units and $-88.8$ with 200 RBM units.
|
|
|
We then remove the lateral connections in the RBM, reducing it to a set of independent binary random variables. The resulting network is a noisy sigmoid belief network: samples are produced by drawing values for the independent binary random variables, multiplying them by an independent noise source, and then sampling the observed variables as in a standard SBN. With this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood of $-97.0$ with 200 binary latent variables.
|
|
|
Finally, we replace the hierarchical approximating posterior of Figure~\ref{fig:hierarchical-approx-post} with the factorial approximating posterior of Figure~\ref{fig:basic-graphical-model-approx-post}. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood to $-102.9$ with 200 binary latent variables. |
|
|
|
|
|
|
|
|
|
\begin{figure}[tbp] |
|
\centering |
|
\includegraphics[width=\textwidth]{evolution_bin_mnist} |
|
\caption{Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained in a wholly unsupervised manner.} |
|
\label{fig:bin-mnist-evolution} |
|
\end{figure} |
|
|
|
\begin{figure}[tbp] |
|
\centering |
|
\includegraphics[width=\textwidth]{evolution_omniglot} |
|
\caption{Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM.} |
|
\label{fig:omniglot-evolution} |
|
\end{figure} |
|
|
|
\begin{figure}[tbp] |
|
\centering |
|
\includegraphics[width=\textwidth]{evolution_caltech} |
|
\caption{Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type, despite being trained in a wholly unsupervised manner.} |
|
\label{fig:caltech-evolution} |
|
\end{figure} |
|
|
|
Figures~\ref{fig:bin-mnist-evolution}, \ref{fig:omniglot-evolution}, and~\ref{fig:caltech-evolution} repeat the analysis of Figure~\ref{fig:mnist-evolution} for statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. |
|
Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the object class over many successive rows in Figures~\ref{fig:bin-mnist-evolution}, \ref{fig:omniglot-evolution}, and~\ref{fig:caltech-evolution} indicates that the RBM prior has well-separated modes. |
|
|
|
On statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure~\ref{fig:bin-mnist-evolution}. There are not obvious modes for Omniglot in Figure~\ref{fig:omniglot-evolution}; it is plausible that an RBM with $128$ units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs. |
|
|
|
|
|
|
|
|
|
\end{appendix} |
|
|
|
\end{document} |
|
|