Dataset columns:
- query: string (length 273 to 149k)
- pos: string (length 18 to 667)
- idx: int64 (0 to 1.99k)
- task_name: string (1 distinct value)
This paper explores the scenarios under which an attacker can claim that 'Noise and access to the softmax layer of the model is all you need' to steal the weights of a convolutional neural network whose architecture is already known. We were able to achieve 96% test accuracy using the stolen MNIST model and 82% accuracy using the stolen KMNIST model, learned using only i.i.d. Bernoulli noise inputs. We posit that this theft-susceptibility of the weights is indicative of the complexity of the dataset and propose a new metric that captures the same. The goal of this dissemination is not just to showcase how far knowing the architecture can take you in terms of model stealing, but also to draw attention to these rather idiosyncratic weight-learnability aspects of CNNs spurred by i.i.d. noise input. We also disseminate some initial results obtained using the Ising probability distribution in lieu of the i.i.d. Bernoulli distribution.
In this paper, we consider the fate of an attacker who is adamant about only using noise as input to a convolutional neural network (CNN) whose architecture is known and whose weights are the target of theft. We assume that the attacker has earned access to the softmax layer and is not restricted in terms of the number of inputs to be used to carry out the attack. At the outset, we'd like to emphasize that our goal in disseminating these results is not to convince the reader of the real-world validity of the attacker scenario described above or to showcase a novel attack. This paper contains our initial explorations after a chance discovery that we could populate the weights of an MNIST-trained CNN model by just using noise as input in the framework described below. Through a set of empirical experiments, which we are duly open sourcing to aid reproducibility, we seek to draw the attention of the community to the following two issues: 1. The risk of model weight theft clearly entails an interplay between the dataset as well as the architecture. Given a fixed architecture, can we use the level of susceptibility as a novel metric of complexity of the dataset? 2. Given the wide variations in success attained by varying the noise distribution, how do we formally characterize the relationship between the input noise distribution being used by the attacker and the true distribution of the data, while considering a specific CNN architecture? What aspects of the true data distribution are actually important for model extraction? The rest of the paper is structured as follows: In Section 2, we provide a brief literature survey of the related work. In Section 3, we describe the methodology used to carry out the attack. In Section 4, we cover the main results obtained, and we conclude the paper in Section 5.
The art of stealing machine learning models has received a lot of attention in recent years. In BID9, the authors specifically targeted real-world ML-as-a-service BID7 platforms such as BigML and Amazon Machine Learning and demonstrated effective attacks that resulted in the extraction of machine learning models with near-perfect fidelity for several popular model classes. In BID2, the authors trained what they termed a copycat network using Non-Problem Domain images and stolen labels to achieve impressive results in the three problems of facial expression, object, and crosswalk classification.
This was followed by work on Knockoff Nets BID6, where the authors demonstrated that by merely querying with random images sourced from an entirely different distribution than that of the black-box target's training data, one could not just train a well-performing knockoff, but it was possible to achieve high accuracy even when the knockoff was constructed using a completely different architecture. This work differs from the above works in that the attacker is adamant about only using noise images as querying inputs. Intriguingly enough, state-of-the-art CNNs are not robust enough to provide a flat (uniform) softmax output (with weight 1/number-of-classes) when we input non-input-domain noise at the input layer. This has been studied under two contexts. The first context was within the framework of fooling images. In BID5, the authors showcased how to generate synthetic images that were noise-like and completely unrecognizable to the human eye, but ones that state-of-the-art CNNs classified as one of the training classes with 99.99% confidence. The second context was with regards to what the authors in BID3 stated to be rubbish-class examples. Here, they showcased the high levels of confident mis-predictions exuded by state-of-the-art models trained on the MNIST and CIFAR-10 datasets in response to isotropic Gaussian noise inputs. In this work, we focus on using Bernoulli noise samples as inputs and using the softmax responses of the target model to siphon away the weights.
We propose a framework for model extraction without possession of samples from the true dataset the model has been trained on, and without knowledge of the purpose of the model, beyond the dimensionality of the input tensors and the ability to access the resulting class distribution from what is assumed to be a softmax activation given an input. We make the additional assumption that the architecture of the model to be extracted is known by the adversary. In our experiments, we assume that the input tensor is of dimension 28 by 28 and each pixel has values on the interval [0, 1]. The black-box model which we attempt to extract, F(·), whose architecture is described in TAB1, is trained to convergence on a standard dataset for 12 epochs using the Adadelta optimizer with an initial learning rate of 1.0 and a minibatch size of 128. From this point onward, this model is assumed to be a black box in which we have no access to the parameters of each layer. We procedurally generate a dataset of 'stimuli' comprised of 600000 28-by-28 binary tensors where each pixel is sampled from a Bernoulli distribution with a success probability parameter p. In other words, let each image x^i_rand ∈ {0, 1}^{28×28}, where x^i_{rand,j,k} ∼ Bern(p) for i ∈ {1, ..., 600000}. We sample these tensors with probability parameters p ∈ {0.01, 0.11, ..., 0.91}, where each p is used to generate an equal share (one tenth) of the stimuli. We train a new model, F_extract(·), on the stimulus-response pairs {(x^i_rand, y^i_rand)} with no regularization and evaluate on the dataset originally used to train F(·). The architecture for this model is the same as F(·), except we remove the dropout layers to encourage overfitting. We train for 50 epochs using the Adadelta optimizer with an initial learning rate of 1.0 and a minibatch size of 128. Additionally, we acknowledge a significant class imbalance in the highest-probability classes in the softmax vectors y_rand, so we remedy this by computing class weights according to the argmax of each softmax vector, and applying this re-weighting during the training of F_extract(·).
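For illustration, the stimulus-generation and query step described above could look like the following minimal sketch. It assumes a PyTorch-style black-box model `victim` that exposes only its outputs; the names `victim`, `make_bernoulli_stimuli`, and `n_per_p` are illustrative and not taken from the paper.

```python
import numpy as np
import torch

def make_bernoulli_stimuli(n_per_p, ps=np.arange(0.01, 1.0, 0.1), dim=28, seed=0):
    """Sample binary 'stimulus' images, one block of n_per_p images per success probability p."""
    rng = np.random.default_rng(seed)
    blocks = [rng.binomial(1, p, size=(n_per_p, 1, dim, dim)).astype(np.float32) for p in ps]
    return np.concatenate(blocks, axis=0)

@torch.no_grad()
def query_softmax(victim, stimuli, batch_size=128):
    """Collect the victim's softmax responses; only its outputs are observed (black-box access)."""
    outputs = []
    for i in range(0, len(stimuli), batch_size):
        x = torch.from_numpy(stimuli[i:i + batch_size])
        # Assumes the victim returns logits; if it already returns softmax probabilities,
        # the extra softmax is unnecessary.
        outputs.append(torch.softmax(victim(x), dim=1))
    return torch.cat(outputs).numpy()

# X_rand, Y_rand then form the stimulus-response training set for F_extract:
# X_rand = make_bernoulli_stimuli(n_per_p=60000)
# Y_rand = query_softmax(victim, X_rand)
```

F_extract would then be trained on (X_rand, Y_rand) with the class re-weighting on argmax(Y_rand) described above.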
We show the full extraction algorithm in Algorithm 1 and summarize it in FIG0. We evaluate our proposed framework on four datasets from the MNIST family of datasets with identical dimensions: MNIST, KMNIST, Fashion MNIST, and notMNIST BID4 BID1 BID10. We evaluated the effect of sampling random data x^i_rand from different distributions on the performance of F_extract(·) on the MNIST validation set. We used the same training procedure as in the previously described experiments with two exceptions: we sample only 60000 procedurally generated examples and we train F_extract(·) for only 10 epochs. We evaluated the use of the uniform distribution on a bounded interval, the standard normal distribution, the standard Gumbel distribution, the Bernoulli distribution with success parameter p = 0.5, and samples from an Ising model simulation with inverse temperature parameter β ∈ {0.0, 0.1, ..., 0.9} and the resulting values scaled to {0, 1}. The Ising prior is defined by the standard Ising density with inverse temperature β (BID8), p(x) ∝ exp(β Σ_{⟨j,k⟩} x_j x_k), where the sum runs over neighbouring pixel pairs. Examples of images sampled from the Ising model can be found in Figure 6. For this experiment, we evaluated the role of the inverse temperature parameter β of the Ising sampler in training F_extract(·). We first partition the stimulus-response pairs (X_Ising, Y_Ising) into 10 subsets with 7000 examples each, corresponding to the different β parameters used to generate the samples, where (X_Ising, Y_Ising) = ⋃_{β ∈ {0.0, 0.1, ..., 0.9}} {(X_{Ising,β}, Y_{Ising,β})}. We train F_extract(·) for 10 epochs for each β and validate on the original dataset. We performed this experiment for MNIST, KMNIST, Fashion MNIST, and notMNIST and report the variation in performance over different values of β.
We evaluate the efficacy of our framework by training F(·) on MNIST and going on to evaluate the performance of F_extract(·) on MNIST after extraction. We found that F(·) achieved a validation accuracy of 99.03% and F_extract(·) achieved a validation accuracy of 95.93%. The distribution of the argmax of Y_rand can be found in Figure 2. The most underrepresented class according to the argmax of Y_rand was class 6, represented by 198 out of 600000 random examples.
Algorithm 1 (model extraction):
  Input: black-box model F(·), input dimension dim, numRandomExamples
  for p in {0.01, 0.11, ..., 0.91} do
    for q in {0, 1, ..., numRandomExamples/10} do
      for j in {0, 1, ..., dim-1} do
        for k in {0, 1, ..., dim-1} do
          x_{sample,j,k} ∼ Bern(p)
        end for
      end for
      X_rand = X_rand ∪ x_sample
    end for
  end for
  Initialize F_extract(·); query F(·) on X_rand to obtain Y_rand and train F_extract(·) on the stimulus-response pairs (X_rand, Y_rand).
Our experiments with KMNIST resulted in F(·) achieving a validation accuracy of 94.79% and F_extract(·) achieving a validation accuracy of 81.18%. Class 8 was found to be the class with the fewest representatives according to the argmax of Y_rand, which had 272 representative examples out of 600000. On the Fashion MNIST dataset, we found that F(·) achieved a validation accuracy of 92.16%, while F_extract(·) achieved a validation accuracy of 75.31%. For Fashion MNIST, the most underrepresented class according to the argmax of Y_rand was class 7 (sneaker), with only 12 out of 600000 random examples. Notably, the most common mispredictions according to FIG1 were incorrectly predicting class 5 (sandal) when the ground truth is class 7 (sneaker) and predicting class 5 (sandal) when the ground truth is class 9 (ankle boot). F_extract(·) seems to predict the majority of examples from shoe-like classes to be of class 5 (sandal). We found that the notMNIST dataset had a more uniform class distribution according to the argmax of Y_rand than the other datasets that we evaluated.
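As an illustration of the Ising stimuli discussed above, the following is a small Gibbs-sampler sketch for a 28×28 Ising model with inverse temperature β, with spins mapped to {0, 1} afterwards. The sweep count and the four-nearest-neighbour convention are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def sample_ising(beta, dim=28, sweeps=50, seed=0):
    """Gibbs sampling for a 2-D Ising model p(s) ∝ exp(beta * sum over neighbours of s_j s_k), s in {-1,+1}."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(dim, dim))
    for _ in range(sweeps):
        for j in range(dim):
            for k in range(dim):
                # Sum of the (up to) four nearest-neighbour spins.
                nb = 0
                if j > 0:
                    nb += s[j - 1, k]
                if j < dim - 1:
                    nb += s[j + 1, k]
                if k > 0:
                    nb += s[j, k - 1]
                if k < dim - 1:
                    nb += s[j, k + 1]
                # Conditional probability of spin +1 given its neighbours.
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[j, k] = 1 if rng.random() < p_plus else -1
    return ((s + 1) // 2).astype(np.float32)  # rescale spins {-1,+1} -> pixel values {0,1}
```

Larger β produces larger correlated patches, which is the spatial-correlation property the experiments below attribute the Ising prior's advantage to.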
The class with the fewest representatives in this sense was class 9 (the letter j), with 3950 out of 600000 examples.
Table 2. Performance using different noise distributions: ... 70.03%; Bernoulli (p = 0.5): 76.58%; Ising (β ∈ {0.0, 0.1, ..., 0.9}): 98.02%.
Despite this potential advantage, the extracted model F_extract(·) failed to generalize to the notMNIST validation set, achieving an accuracy of 10.47%, and as can be seen in FIG1, F_extract(·) predicts class 5 (the letter e) in the vast majority of cases. In contrast, F(·) achieved a validation accuracy of 88.62%. In evaluating the effect of sampling from different distributions to construct X_rand, we found that among the uniform, standard normal, standard Gumbel, and Bernoulli distributions and the Ising model, samples from the Ising model attained the highest accuracy at 98.02% when evaluating F_extract(·) on the MNIST validation set. The results for each of the other distributions can be found in Figure 2. We postulate that this is due to the modelling of spatial correlations, a property that is lacking when sampling from the uniform, standard normal, standard Gumbel, and Bernoulli distributions, as the pixels are assumed to be i.i.d.
We propose a measure of model extraction hardness resulting from the dataset which the original model is trained on, defined as the ratio of the post-extraction validation accuracy (using F_extract(·)) and the pre-extraction validation accuracy (using F(·)) under our framework. We show that the resulting ratios align with the mainstream intuition regarding the general relative learnability of MNIST, KMNIST, Fashion MNIST, and notMNIST. For MNIST, we found this ratio to be 0.9687, the ratio for KMNIST was 0.8564, for Fashion MNIST we found it to be 0.8171, and notMNIST achieved a ratio of 0.1181. We found that the loss and accuracy undergo 'phase transitions' as the value of β is varied. In FIG2, we see that across datasets, the losses tend to be minimized around β = 0.3, though the behavior at larger values of β varies from dataset to dataset. We postulate that this is indicative of the different distributions of the amount of spatial correlation across each dataset. We also found that accuracy is maximized at β = 0.4 for MNIST, β = 0.3 for KMNIST and Fashion MNIST, and β = 0.2 for notMNIST, where the behavior also varies as β increases beyond the optimal value. We show this in FIG2.
In this paper, we demonstrated a framework for extracting model parameters by training a new model on random stimulus-response pairs gleaned from the softmax output of the victim neural network. We went on to demonstrate the variation in model extractability based on the dataset which the original model was trained on. Finally, we proposed our framework as a method by which relative dataset complexity can be measured. This is a work in progress and we are currently working along the following three directions: In our experiments, pixels are notably i.i.d., whereas in real-world settings, image data comprises pixels which are spatially correlated. In this vein, we intend to establish the relationship between the temperature of an Ising prior and the accuracy obtained by the stolen model. We will experiment with different architectures, specifically exploring the architecture-unknown scenario where the attacker has a fixed plug-and-play swiss-army-knife architecture whose weights are learned from the noise inputs and the true model's softmax outputs.
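The extraction-hardness ratios quoted above follow directly from the reported accuracies; a quick arithmetic check:

```python
# post-extraction accuracy / pre-extraction accuracy, per dataset (values taken from the text)
pairs = {"MNIST": (95.93, 99.03), "KMNIST": (81.18, 94.79),
         "Fashion MNIST": (75.31, 92.16), "notMNIST": (10.47, 88.62)}
for name, (post, pre) in pairs.items():
    print(f"{name}: {post / pre:.4f}")
# -> approximately 0.9687, 0.8564, 0.8172, 0.1181, closely matching the reported ratios
```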
Additionally, we will explore methods for constructing X_rand which give more uniform distributions over argmax(Y_rand), and evaluate the associated effect on the performance of F_extract(·).
Input only noise, glean the softmax outputs, steal the weights
1,700
scitldr
We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN). It is known that GANs can produce very realistic samples, while VAEs do not suffer from the mode collapse problem. Our model optimizes the λ-Jeffreys divergence between the model distribution and the true data distribution. We show that it takes the best properties of the VAE and GAN objectives. It consists of two parts. One of these parts can be optimized by using standard adversarial training, and the second one is the very objective of the VAE model. However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as Gaussian or Laplace, which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels. To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator. In an extensive set of experiments on the CIFAR-10 and TinyImagenet datasets, we show that our model achieves state-of-the-art generation and reconstruction quality and demonstrate how we can balance between mode-seeking and mass-covering behaviour of our model by adjusting the weight λ in our objective.
The variational autoencoder (VAE) (e.g., Titsias & Lázaro-Gredilla) is one of the most popular approaches for modeling complex high-dimensional distributions. It has been applied successfully to many practical problems. It has several nice properties, such as learning low-dimensional representations for the objects and the ability to do conditional generation. Due to an explicit reconstruction term in its objective, one may ensure that a VAE can generate all objects from the training set. These advantages, however, come at a price. It is a known fact that VAEs tend to generate unrealistic objects, e.g., blurred images. Such behaviour can be explained by the properties of maximum likelihood estimation (MLE), which is used to fit a restricted VAE model p_θ(x) to data that comes from a complex distribution p_*(x). (Figure 1 caption: the model is an equiprobable mixture of two Gaussians with learnable location and scale; plots a)-c) show pairwise comparisons of optimal log-densities, and plot d) compares the optimal densities themselves.) This way, we encourage our model to be mode-seeking while still having relatively high values of p_θ(x) on all objects from a training set, thus preventing mode collapse. We note that J_λ(p_θ(x) ‖ p_*(x)) is not symmetric with respect to p_θ(x) and p_*(x), and by the weight λ we can balance between mode-seeking and mass-covering behaviour. However, the straightforward way of substituting each KL term with GAN and VAE losses does not work well in practice if we use an explicit likelihood for object reconstruction in the VAE objective. Such simple distributions as Gaussian or Laplace that are usually used in VAEs have limited flexibility and are unnatural for modelling images in the space of pixels. To tackle this problem we propose a novel approach to train the VAE model in an adversarial manner. We show how we can estimate the implicit likelihood in our loss function by an adversarially trained discriminator. We theoretically analyze the introduced loss function and show that under the assumption of optimal discriminators, our model minimizes the λ-Jeffreys divergence J_λ(p_θ(x) ‖ p_*(x)); we call our method the Implicit λ-Jeffreys Autoencoder (λ-IJAE).
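For reference, the λ-Jeffreys divergence referred to above can be written as a convex combination of the two KL directions; the particular weighting convention shown here is an assumption for illustration (the text fixes only that λ trades off mode-seeking against mass-covering behaviour):

```latex
J_{\lambda}\!\left(p_{\theta}\,\|\,p_{*}\right)
  \;=\; \lambda\, D_{\mathrm{KL}}\!\left(p_{*}\,\|\,p_{\theta}\right)
  \;+\; (1-\lambda)\, D_{\mathrm{KL}}\!\left(p_{\theta}\,\|\,p_{*}\right),
  \qquad \lambda \in [0,1].
```

Under this reading, the forward-KL term is handled through the VAE objective (the negative ELBO upper-bounds the forward KL up to a constant entropy term), while the reverse-KL term is handled by adversarial training, matching the two-part decomposition described above.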
In an extensive set of experiments, we evaluate the generation and reconstruction ability of our model on the CIFAR-10 and TinyImagenet datasets. It achieves a state-of-the-art trade-off between generation and reconstruction quality. We demonstrate how we can balance between the ability to generate realistic images and the reconstruction ability by changing the weight λ in our objective. Based on our experimental study we derive a default choice for λ that establishes a reasonable compromise between mode-seeking and mass-covering behaviour of our model, and this choice is consistent over these two datasets.
Relation to forward KL-based methods. We can say that all VAE-based models minimize an upper bound on the forward KL D_KL(p_*(x) ‖ p_θ(x)). In recent years there have been many extensions and improvements of the standard VAE. One direction of research is to introduce the discriminator as a part of the data likelihood to leverage its intermediate layers for measuring similarity between objects. However, these models do not have a sound theoretical justification about what distance between p_θ(x) and p_*(x) they optimize. The other way is to consider more complex variational distributions q_ϕ(z|x). One can either use better variational bounds (e.g., Maaløe et al., 2016) or apply adversarial training to match q_ϕ(z|x) and the prior distribution p(z), or to match the marginals q_ϕ(z) and p(z). Although these methods improve approximate inference in the VAE model, they remain within the scope of the MLE framework. As discussed above, within this framework a model with limited capacity is going to exhibit mass-covering behaviour.
Relation to reverse KL-based methods. The vanilla GAN framework is equivalent, under the assumption of an optimal discriminator, to minimization of the Jensen-Shannon divergence JSD(p_*(x) ‖ p_θ(x)). With a minor modification of the generator loss we can obtain equivalence to minimization of the reverse KL D_KL(p_θ(x) ‖ p_*(x)). Many autoencoder models have been proposed which utilize one of these two divergences. One approach is to minimize the divergence between the joint distributions p_*(x)q(z|x) and p_θ(x|z)p(z) in a GAN framework. The ALICE model introduces an additional entropy loss for dealing with the non-identifiability issues in previous works. Other methods use the reverse KL D_KL(p_θ(x) ‖ p_*(x)) as an additional term to encourage mode-seeking behaviour.
Relation to Jeffreys divergence-based methods. To the best of our knowledge, there are only two other autoencoder models which minimize the λ-Jeffreys divergence, both for λ = 0.5, an important case in which the λ-Jeffreys divergence equals the symmetric KL divergence. These methods are AS-VAE and SVAE, and they are most closely related to our work. AS-VAE is a special case of the SVAE method, therefore in the following we will consider only SVAE. There are two crucial differences between the SVAE and λ-IJAE models. The first one is that SVAE minimizes the divergence between joint distributions, while λ-IJAE minimizes it between the marginal distributions p_*(x) and p_θ(x) for arbitrary λ. The second difference is that the SVAE loss J_λ(p_*(x)q(z|x) ‖ p_θ(x|z)p(z)) alone did not give good reconstructions in experiments. Therefore, the authors introduced additional data-fit terms E_{p_*(x) q_ϕ(z|x)} log p_θ(x|z) + E_{p_θ(x|z) p(z)} log q_ϕ(z|x), where p_θ(x|z) and q_ϕ(z|x) are explicit densities. In contrast, the λ-IJAE model achieves good generation and reconstruction quality as it is and allows training implicit p_θ(x|z) and q_ϕ(z|x) distributions.
These two differences make the SVAE and λ-IJAE models significantly distinct, and we observe this in practice. Consider samples x ∈ X from the true data distribution p_*(x). The aim of generative models is to fit a model distribution p_θ(x) to p_*(x). The most popular models are GANs and VAEs. In practice, we observe that they have significantly different properties. A VAE tends to cover all modes of p_*(x) at the cost of capturing low-probability regions as well. As a result, it often generates unspecific and/or blurry images. On the other hand, a GAN is highly mode-seeking, i.e., it tends to concentrate most of its probability mass in a small number of modes of p_*(x). Therefore it may not cover a significant part of p_*(x), which is also known as the mode collapse problem. Such a radical contrast between VAE and GAN can be explained by the fact that they optimize different divergences between p_θ(x) and p_*(x).
Variational Inference. A VAE is trained by MLE: max_θ E_{p_*(x)} log p_θ(x). The distribution p_θ(x) is defined as an integral over a latent variable z: p_θ(x) = ∫ p_θ(x|z) p(z) dz, and in practice it is typically intractable. Variational inference sidesteps this issue by introducing an encoder model (also known as a variational distribution) q_ϕ(z|x) and replacing the intractable log p_θ(x) with a tractable evidence lower bound (ELBO): L_ELBO(θ, ϕ) = E_{q_ϕ(z|x)} log p_θ(x|z) − D_KL(q_ϕ(z|x) ‖ p(z)) ≤ log p_θ(x). Then we maximize the ELBO L_ELBO(θ, ϕ) with respect to θ and ϕ. One can easily derive that MLE is equivalent to optimizing the forward KL D_KL(p_* ‖ p_θ), since E_{p_*(x)} log p_θ(x) = −D_KL(p_* ‖ p_θ) − H[p_*] and the entropy term does not depend on θ.
Adversarial Training. The adversarial framework is based on a game between a generator G_θ(z) and a discriminator D_ψ(x) which classifies objects from p_*(x) versus ones from p_θ(x). The original GAN analysis showed that, under an optimal discriminator, the loss of the generator is equivalent to the Jensen-Shannon divergence JSD(p_* ‖ p_θ). It is easy to recognize this as an instance of classification-based Density Ratio Estimation (DRE). Following this framework, one can consider different generator objectives while keeping the same objective for the discriminator. DRE relies on the fact that the optimal discriminator recovers the ratio of the two densities. By this approach we can obtain a likelihood-free estimator for the reverse KL D_KL(p_θ ‖ p_*).
(Un)Biased Gradients in Adversarial Training. Since in practice the discriminator D_ψ(x) is only trained to work for one particular set of generator parameters θ, we need to be cautious regarding the validity of gradients obtained by the DRE approach. For example, consider the forward KL D_KL(p_* ‖ p_θ). If we apply DRE, we will arrive at E_{p_*(x)} log [D_ψ(x) / (1 − D_ψ(x))]. However, we can notice that in practice this expression does not depend on θ in any way, i.e., its gradient with respect to θ is zero. This is because the forward KL depends on θ only through the ratio of densities, which is replaced by a point estimate using a discriminator that has no idea regarding p_θ's local behaviour. This shows we need to be careful when designing adversarial learning objectives so as to ensure unbiased gradients. Luckily, JSD(p_θ ‖ p_*) and D_KL(p_θ ‖ p_*) are not affected by this problem: with the optimal discriminator for any x, the resulting gradient estimators coincide with the true gradients (the formal statement and proof are given in Appendix, Section A).
VAEs provide a theoretically sound way to learn generative models with a natural and coherent encoder. However, they are known to generate blurry and unspecific samples that have inferior perceptual quality compared to generative models based on adversarial learning. The main cause is that the root principle VAEs are built upon, the MLE framework, is equivalent to minimization of the forward KL D_KL(p_* ‖ p_θ).
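To make the density-ratio argument from the adversarial-training discussion above concrete, here is a sketch under the assumed convention that the discriminator D_ψ(x) estimates the probability that x was drawn from p_* rather than p_θ:

```latex
D^{*}(x) \;=\; \frac{p_{*}(x)}{p_{*}(x) + p_{\theta}(x)}
\quad\Longrightarrow\quad
\frac{p_{\theta}(x)}{p_{*}(x)} \;=\; \frac{1 - D^{*}(x)}{D^{*}(x)},
\qquad
D_{\mathrm{KL}}\!\left(p_{\theta}\,\|\,p_{*}\right)
  \;\approx\; \mathbb{E}_{p_{\theta}(x)}\!\left[\log \frac{1 - D_{\psi}(x)}{D_{\psi}(x)}\right].
```

Because the expectation is taken over samples from p_θ, this estimator keeps a dependence on θ through the generator, which is why its gradients do not vanish the way the naive forward-KL estimator's gradients do.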
While D_KL(p_* ‖ p_θ) recovers the true data-generating process p_*(x) if the model p_θ(x) has enough capacity, in the more realistic case of an insufficiently expressive model p_θ(x) it is known to be mass-covering. As a result, the model is forced to cover all modes of p_*(x), even at the cost of covering low-probability regions as well. This in turn might lead to blurry samples, as the model does not have the capacity to concentrate inside the modes. On the other hand, the reverse KL D_KL(p_θ ‖ p_*) has mode-seeking behavior that penalizes covering low-probability regions, and thus the model p_θ(x) tends to cover only a few of the modes of p_*(x). Following this reasoning, we propose a more balanced divergence: one that seeks modes, but still does a decent job covering all modes of p_*(x) to prevent mode collapse. We chose the λ-Jeffreys divergence J_λ(p_θ ‖ p_*) introduced above. We illustrate the advantage of the λ-Jeffreys divergence for λ = 0.5 over the Forward KL, Reverse KL, and JSD divergences in the case of a model with limited capacity in Figure 1. In this figure we compared divergences in a simple task (see Appendix, Section B) of approximating a mixture of 4 Gaussians with a mixture of just two: both Reverse KL and JSD exhibit mode-seeking behavior, completely dropping side modes, whereas the Forward KL assigns much more probability to the tails and does a poor job of capturing the central modes. In contrast, the λ-Jeffreys divergence uses one mixture component to capture the most probable mode, and the other to ensure mass-covering.
The optimization of the λ-Jeffreys divergence consists of two parts. The first one is the minimization of the reverse KL D_KL(p_θ(x) ‖ p_*(x)), which can be implemented as a standard GAN optimization as we discussed in Section 3. The second part is the optimization of the forward KL D_KL(p_*(x) ‖ p_θ(x)), and we tackle it by maximization of the ELBO L_ELBO(θ, ϕ) as in a VAE. So, we obtain an upper bound on the λ-Jeffreys divergence by incorporating the GAN and VAE objectives. The ELBO term L_ELBO(θ, ϕ) can be decomposed into two parts: (i) a reconstruction term E_{q_ϕ(z|x)} log p_θ(x|z) and (ii) a KL term −D_KL(q_ϕ(z|x) ‖ p(z)). While both terms are easy to deal with in the case of explicit p(x|z) and q(z|x), an implicit formulation poses some challenges. In the next two sections we address them.
Typically, to optimize the reconstruction term E_{p_*(x)} E_{q_ϕ(z|x)} log p_θ(x|z), the conditional likelihood p_θ(x|z) is defined explicitly as a fully factorized Gaussian or Laplace distribution (e.g., Titsias & Lázaro-Gredilla). While convenient, such a choice might limit the expressivity of the generator G_θ(z). As we discussed previously, optimization of the forward KL(p_* ‖ p_θ) leads to mass-covering behavior. The undesired properties of this behavior, such as sampling unrealistic and/or blurry images, can be more significant if the capacity of our model p_θ(x) is limited. Therefore we propose a technique which allows us to extend the class of possible likelihoods for p_θ(x|z) to implicit ones. We note that typically in a VAE the decoder p_θ(x|z) first maps the latent code z to the space X, which is then used to parametrize the distribution of z's decodings x|z. For example, this is the case for N(x|G_θ(z), σI) or Laplace(x|G_θ(z), σI). We also use the output of the generator G_θ(z) ∈ X to parametrize an implicit likelihood. In particular, we assume p_θ(x|z) = r(x|G_θ(z)) for some symmetric likelihood r(x|y) (Definition 1): (i) r(x = a|y = b) = r(x = b|y = a) for all a, b, and (ii) r(x = a|y = b) has a mode at a = b.
While the Gaussian and Laplace likelihoods are symmetric and explicit, in general we do not require r(x|y) to be explicit; only being able to generate samples from r(x|y) is required. The idea is to introduce a discriminator D_τ(x, z, y) which classifies two types of triplets: • real class: (x, z, y) ∼ p_*(x) q_ϕ(z|x) r(y|x); • fake class: (x, z, y) ∼ p_*(x) q_ϕ(z|x) r′(y|G_θ(z)). We note that r(y|x) and r′(y|x) can be different, and we will utilize this possibility in practice. Then we train the discriminator D_τ(x, z, y) using the standard binary cross-entropy objective, max_τ E_real log D_τ(x, z, y) + E_fake log(1 − D_τ(x, z, y)). If we apply the straightforward way to obtain an objective for the generator G_θ(z), we will derive that we should minimize E_{p_*(x) q_ϕ(z|x)} log [D_τ(x, z, x) / (1 − D_τ(x, z, x))]. Indeed, given the optimal discriminator D_{τ*}(x, z, y) = r(y|x) / (r(y|x) + r′(y|G_θ(z))), minimizing this expression is equivalent to maximizing the reconstruction term with p_θ(x|z) = r′(x|G_θ(z)). However, in practice we face the same issue as we discussed in Section 3: ∇_θ E_{p_*(x) q_ϕ(z|x)} log [D_τ(x, z, x) / (1 − D_τ(x, z, x))] = 0, because D_τ(x, z, x) does not depend on θ explicitly even for optimal τ = τ*. We can overcome this issue by exploiting the properties of symmetric likelihoods if we minimize a slightly different loss for the generator, −E_{p_*(x) q_ϕ(z|x)} log [D_τ(x, z, G_θ(z)) / (1 − D_τ(x, z, G_θ(z)))], which evaluates the discriminator at the reconstruction G_θ(z) rather than at x. The following theorem guarantees the gradients will be unbiased in the optimal discriminator case: Theorem 1. Let D_{τ*}(x, z, y) be the optimal solution for the objective above and let r(y|x) and r′(y|x) be symmetric likelihoods. Then the gradient with respect to θ of this generator loss coincides with the gradient of the negative reconstruction term −E_{p_*(x)} E_{q_ϕ(z|x)} log r(x|G_θ(z)). Proof. Given in Appendix, Section A. So, we obtain that we can maximize the reconstruction term E_{p_*(x)} E_{q_ϕ(z|x)} log r(x|G_θ(z)) by minimizing −E_{p_*(x)} E_{q_ϕ(z|x)} log [D_τ(x, z, G_θ(z)) / (1 − D_τ(x, z, G_θ(z)))] and optimize it using gradient-based methods. We note again that we do not require access to an analytic form of r(y|G_θ(z)). It is an open question what the best choice for r(y|G_θ(z)) is. Our expectations of r(y|G_θ(z)) are that it should encourage realistic reconstructions and heavily penalize visually distorted images.
Implicit Encoder. The KL term from L_ELBO(θ, ϕ) can be computed either analytically, using Monte Carlo estimation, or in an adversarial manner. We chose the latter, adversarial approach proposed in prior work because it enables an implicit variational distribution q_ϕ(z|x) defined by a neural sampler (encoder) E_ϕ(x, ξ), where ξ ∼ N(0, I). For this purpose we should train a discriminator D_ζ(x, z) which tries to distinguish pairs (x, z) from p_*(x) q_ϕ(z|x) versus the ones from p_*(x) p(z). The KL term D_KL(q_ϕ(z|x) ‖ p(z)) is a reverse KL with respect to the parameters ϕ, therefore we can substitute it by the expression −E_{q_ϕ(z|x)} log [D_ζ(x, z) / (1 − D_ζ(x, z))]. Putting it all together, we arrive at the final objective. In practice, discriminators are not optimal, therefore we train our model by alternating gradients, maximizing the corresponding objectives in turn.
In experiments, we evaluate the generation and reconstruction ability of our model on the CIFAR-10 and TinyImageNet datasets. We used a standard ResNet architecture for the encoder E_ϕ(x, ξ), the generator G_θ(z), and all three discriminators D_ψ(x), D_τ(x, z, y), D_ζ(x, z). The complete architecture description for all networks and the hyperparameters used in λ-IJAE can be found in Appendix, Section D. To compare our method to other autoencoding methods in the best way, we also used official and publicly available code for baselines. For AGE we use a pretrained model. For SVAE and TwoStage-VAE (2SVAE) we report metrics reproduced using the officially provided code and hyperparameters. For α-GAN we also use a public implementation with the same architecture as in λ-IJAE. In experiments, for the symmetric likelihoods r(y|x) and r′(y|x) we use the following: r(y|x) is a continuous distribution over cyclic shifts in all directions of an image x. In practice, we discretize this distribution. To sample from it: (i) we sample one of four directions (top, bottom, right, left) with equal probability; (ii) we then sample the size of the shift (maximum size S = 5 pixels) from 0 to S; (iii) as a result, we shift the image x in the selected direction by the sampled size. For r′(y|x), in practice we observe that the best choice is for r′(y|x) to be close to a delta function δ_x(y). Therefore, we use r′(y|x) = N(y|x, σI), which is clearly a symmetric likelihood. We set σ = 10^{-8}. For r(y|x) as an implicit likelihood we also studied a distribution over small rotations of x; however, we observed that cyclic shifts achieve better results.
Evaluation. We evaluate our model on both generation and reconstruction tasks. The quality of the former is assessed using the Inception Score (IS). To calculate these metrics we used the official implementation provided in TensorFlow 1.13. The reconstruction quality is evaluated using LPIPS. LPIPS compares images based on high-level features obtained by a pre-trained network and has been shown to be a good metric for capturing perceptual similarity between images. We use the official implementation (LPIPS github) to compute LPIPS.
Ablation Study. To show the importance of the implicit conditional likelihood r(y|x), we compare λ-IJAE with a modification which uses, instead of the implicit r(y|x), a standard Gaussian or Laplace distribution. We call such models λ-IJAE-L2 and λ-IJAE-L1, respectively. In Figure 2 we compare λ-IJAE with λ-IJAE-L2 and λ-IJAE-L1 in terms of IS (generation quality) and LPIPS (reconstruction quality). We see that λ-IJAE significantly outperforms these baselines and achieves Pareto-optimal results for different choices of λ.
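A minimal sketch of the discretized cyclic-shift likelihood r(y|x) described above; the direction/axis encoding is an illustrative choice, not taken from the paper.

```python
import numpy as np

def sample_cyclic_shift(x, max_shift=5, rng=None):
    """Sample y ~ r(y|x): cyclically shift image x by a random size in a random direction."""
    rng = rng or np.random.default_rng()
    direction = rng.integers(4)           # 0: up, 1: down, 2: left, 3: right
    size = rng.integers(max_shift + 1)    # shift size in pixels, 0..max_shift
    axis = 0 if direction < 2 else 1      # vertical vs horizontal shift
    sign = -1 if direction in (0, 2) else 1
    return np.roll(x, sign * int(size), axis=axis)
```

Because a cyclic shift of x by +s in one direction has the same probability as the shift of y by +s in the opposite direction, this construction satisfies both symmetry properties of Definition 1.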
Considering both metrics, λ-IJAE achieves a better trade-off between reconstruction and generation quality within these datasets.
Table 1. Reconstruction and generation quality on CIFAR-10 and TinyImagenet for models that allow reconstructions. Baseline models were trained using publicly available code, if possible, to fill the reconstruction quality metrics (IS ↑: higher is better; LPIPS ↓: lower is better; the best result is marked in bold in the original table).
  CIFAR-10: WAE: IS 4.18 ± 0.04; ALI: IS 5.34 ± 0.04; ALICE: IS 6.02 ± 0.03; AS-VAE: IS 6.3; VAE (resnet): IS 3.45 ± 0.02, LPIPS 0.09 ± 0.03; 2Stage-VAE: IS 3.85 ± 0.03, LPIPS 0.06 ± 0.03; α-GAN: IS 5.20 ± 0.08, LPIPS 0.04 ± 0.02; AGE: IS 5.90 ± 0.04, LPIPS 0.06 ± 0.02; SVAE: IS 6.56 ± 0.07, LPIPS 0.19 ± 0.08; λ-IJAE (ours): IS 6.98 ± 0.1, LPIPS 0.07 ± 0.03.
  TinyImagenet: AGE: IS 6.75 ± 0.09, LPIPS 0.27 ± 0.09; SVAE: IS 5.09 ± 0.05, LPIPS 0.28 ± 0.08; 2Stage-VAE: IS 4.22 ± 0.05; λ-IJAE (ours): IS 6.87 ± 0.09, LPIPS 0.09 ± 0.03.
We see that small values of λ give a good IS score while retaining decent reconstruction quality in terms of LPIPS. However, if we decrease λ further, LPIPS starts to degrade. Therefore, we chose λ = 0.3 as a reasonable trade-off between the generation and reconstruction ability of λ-IJAE. For this choice of λ we compute the results reported in Table 1. From Table 1 we see that λ-IJAE achieves a state-of-the-art trade-off between generation and reconstruction quality. This confirms our justification that the λ-Jeffreys divergence takes the best properties of both KL divergences.
In this paper, we considered a fusion of the VAE and GAN models that takes the best of both worlds: it produces sharp and coherent samples and can encode observations into low-dimensional representations. We provide a theoretical analysis of our objective and show that, under optimal discriminators, it is equivalent to the λ-Jeffreys divergence. In experiments, we demonstrate that our model achieves a good balance between generation and reconstruction quality. This confirms our assumption that the Jeffreys divergence is the right choice for learning complex high-dimensional distributions in the case of limited model capacity.
Proof. We show that the second term is equal to zero given our assumptions, where we use the properties of the likelihoods r(x|y) (Definition 1). To generate Figure 1 we considered the following setup: the target distribution was a mixture of four Gaussians, while the model was an equiprobable mixture of two learnable Gaussians. The optimal θ was found by making 10,000 stochastic gradient descent iterations on Monte Carlo estimations of the corresponding divergences with a batch size of 1000. We did 50 independent runs for each method to explore different local optima and chose the best one based on a divergence estimate with 100,000 Monte Carlo samples.
We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN)
1,701
scitldr
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time. Here, we use the connection between gradient-based meta-learning and hierarchical Bayes to propose a mixture of hierarchical Bayesian models over the parameters of an arbitrary function approximator such as a neural network. Generalizing the model-agnostic meta-learning (MAML) algorithm, we present a stochastic expectation maximization procedure to jointly estimate parameter initializations for gradient descent as well as a latent assignment of tasks to initializations. This approach better captures the diversity of training tasks as opposed to consolidating inductive biases into a single set of hyperparameters. Our experiments demonstrate better generalization on the standard miniImageNet benchmark for 1-shot classification. We further derive a novel and scalable non-parametric variant of our method that captures the evolution of a task distribution over time, as demonstrated on a set of few-shot regression tasks.
Meta-learning algorithms aim to increase the efficiency of learning by treating task-specific learning episodes as examples from which to generalize BID39. The central assumption of a meta-learning algorithm is that some tasks are inherently related and so inductive transfer can improve generalization and sample efficiency BID4 BID2. Recent meta-learning algorithms have encoded this assumption by learning global hyperparameters that provide a task-general inductive bias. In learning a single set of hyperparameters that parameterize, for example, a metric space BID46 or an optimizer for gradient descent BID31 BID8, these meta-learning algorithms make the assumption that tasks are equally related and therefore mutual transfer is appropriate. This assumption has been cemented in recent few-shot learning benchmarks, which consist of a set of tasks generated in a systematic manner (e.g., BID8 BID46).
However, the real world often presents scenarios in which an agent must decide what degree of transfer is appropriate. In the case of positive transfer, a subset of tasks may be more strongly related to each other and so non-uniform transfer poses a strategic advantage. Negative transfer in the presence of dissimilar or outlier tasks worsens generalization performance BID34. Moreover, when the underlying task distribution is non-stationary, inductive transfer to initial tasks should exhibit graceful degradation to address the catastrophic forgetting problem BID16. However, the consolidation of all inductive biases into a single set of hyperparameters cannot flexibly account for variability in the task distribution. In contrast, in order to deal with this degree of task heterogeneity, the extensive task-switching literature reveals that people detect and readily adapt even in the face of significantly novel contexts (see BID5 for a review). In this work, we learn a mixture of hierarchical models that allows the meta-learner to adaptively select over a set of learned parameter initializations for gradient-based fast adaptation BID8 to a new task.
The method is equivalent to clustering task-specific parameters in the hierarchical model induced by recasting gradient-based meta-learning as hierarchical Bayes BID13, and it generalizes the model-agnostic meta-learning (MAML) algorithm introduced in BID8. By treating the assignment of task-specific parameters to clusters as latent variables in a probabilistic model, we can directly detect similarities between tasks on the basis of the task-specific likelihood, which may be parameterized by a black-box model such as a neural network. Our approach therefore alleviates the need for explicit geometric or probabilistic modelling assumptions about the weights of a parametric model and provides a scalable method to regulate information transfer between episodes. We extend our latent variable model to the non-parametric setting and leverage stochastic point estimation for scalable inference in a Dirichlet process mixture model (DPMM) BID30. To the best of our knowledge, no previous work has considered scalable stochastic point estimation in a non-parametric mixture model. Furthermore, we are not aware of prior work applying non-parametric mixture modelling techniques to high-dimensional parameter spaces such as those of deep neural networks. The non-parametric extension allows the complexity of a meta-learner to evolve by introducing or removing clusters in alignment with the changing composition of the dataset, and it preserves performance on previously encountered tasks better than a parametric counterpart.
The goal of a meta-learner is to extract task-general knowledge from the experience of solving a number of related tasks. By leveraging this acquired prior knowledge, the meta-learner can quickly adapt to novel tasks even in the face of limited data or limited computation time BID40. Recent approaches to meta-learning consolidate information from a set of training tasks into the parameters of a mapping to be applied at test time to a novel task. This mapping has taken the form of, for instance, a learned metric space (e.g., BID46 BID42), a trained recurrent neural network (e.g., BID38), or a gradient-based optimization algorithm with learned parameters (e.g., BID31). Model-agnostic meta-learning (MAML) BID8 is a gradient-based meta-learning approach that estimates global parameters to be shared among task-specific models as an initialization for a few steps of gradient descent. MAML also admits a natural interpretation as parameter estimation in a hierarchical model, where the learned initialization acts as data-driven regularization for task-specific parameters BID13. In particular, BID13 cast MAML as posterior inference for task-specific parameters φ_j given a batch of task data x^j_{1:N} and a prior over φ_j that is induced by early stopping of an iterative descent procedure. A few steps of gradient descent on the negative log-likelihood −log p(x^j_{1:N} | φ_j), starting from φ_j = θ, can then be understood as mode estimation of the posterior p(φ_j | x^j_{1:N}, θ). (Subroutine 3: the E-STEP and M-STEP for a finite mixture of hierarchical Bayesian models.) If the task distribution is heterogeneous, assuming a single parameter initialization θ is not suitable, because it is unlikely that the point estimate computed by a few steps of gradient descent will sufficiently adapt the task-specific parameters φ to a diversity of tasks.
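A minimal sketch of the fast-adaptation step interpreted above as posterior mode estimation: a few gradient steps on the task negative log-likelihood, starting from the meta-learned initialization θ. This is a first-order, PyTorch-style illustration; `loss_fn`, the step count, and the learning rate are placeholders rather than values from the paper.

```python
import torch

def adapt(theta, loss_fn, x_support, y_support, inner_lr=0.01, n_steps=5):
    """Mode-estimate task parameters phi_j by truncated gradient descent from phi_j = theta.
    Early stopping of this descent acts as the data-driven prior induced by theta."""
    phi = [p.detach().clone().requires_grad_(True) for p in theta]
    for _ in range(n_steps):
        loss = loss_fn(phi, x_support, y_support)   # negative log-likelihood -log p(x^j | phi_j)
        grads = torch.autograd.grad(loss, phi)
        with torch.no_grad():
            for p, g in zip(phi, grads):
                p -= inner_lr * g                   # in-place SGD step (first-order variant)
    return phi
```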
Moreover, explicitly estimating relatedness between tasks has the potential to aid the efficacy of a meta-learning algorithm by modulating both positive and negative transfer BID44 BID51 BID35 BID52 BID48. Nonetheless, defining an appropriate notion of task relatedness is a difficult problem in the high-dimensional parameter or activation space of models such as neural networks. Using the probabilistic interpretation of Section 2, we may deal with the variability in the tasks T_j by assuming that each set of task-specific parameters φ_j is drawn from a mixture of base distributions, each of which is parameterized by a hyperparameter θ. Accordingly, we capture task relatedness by estimating the likelihood of assigning each task to a mixture component based only on the task loss itself after a single step of fast adaptation BID8. The result is a scalable end-to-end meta-learning algorithm that jointly learns task-specific cluster assignments and network parameters. This algorithm, further detailed in the following section, is capable of modulating the transfer of information across tasks to better generalize to heterogeneous or evolving task distributions.
Let z_j be the categorical latent variable indicating the cluster assignment of each task-specific parameter φ_j. A direct maximization of the mixture model likelihood is a combinatorial optimization problem that can grow intractable. This intractability is equally problematic for the posterior distribution over the cluster assignment variables z_j and the task-specific parameters φ_j, which are both treated as latent variables in the probabilistic formulation of meta-learning. A standard approach for estimation in latent variable models such as probabilistic mixtures is to represent the distribution using samples drawn with a sampler. The most widely used is the Gibbs sampler BID24 BID11, which draws from the conditional distribution of each latent variable given the others until convergence to the posterior distribution over all the latents. However, in the setting of latent variables defined over high-dimensional parameter spaces such as those of neural network models, a sampling approach such as Gibbs sampling is prohibitively expensive BID25 BID21. Instead of maintaining samples to represent the distribution over the latent variables, a scalable approximation involves representing the conditional distribution for each latent variable with either a maximum a posteriori (MAP) value or an expectation. In our meta-learning setting of a mixture of hierarchical Bayesian models, this suggests an augmented expectation maximization (EM) procedure BID7 alternating between an E-STEP that computes an expectation of the task-to-cluster assignments, which itself involves the computation of a MAP estimate for the task-specific parameters, and an M-STEP that computes a local maximum of the hyperparameters θ^(1:L). To ensure scalability, we use the minibatch variant of stochastic optimization BID33 to compute both the E-STEP and M-STEPs; such approaches to EM are motivated by a view of the algorithm as optimizing a single free energy at both the E-STEP and the M-STEP BID26.
TAB1. 1-shot miniImageNet classification accuracy (%): matching network FCE BID46: 43.56 ± 0.84; meta-learner LSTM BID31: 43.44 ± 0.77; SNAIL BID20: 45.1; prototypical networks BID42: 46.61 ± 0.78; MAML BID8: 48.70 ± 1.84; LLAMA BID13: 49.40 ± 1.83; KNN + GNN embedding BID10: 49.44 ± 0.28; mAP-DLM BID45: 49.82 ± 0.78; fwCNN (Hebb) BID22: 50.21 ± 0.37; GNN BID10: 50.33 ± 0.36; Our method (clustering all layers of a neural network): 50.80 ± 1.70.
In particular, for each task j and each cluster, we follow the gradients to minimize the negative log-likelihood on the training data points using the cluster parameters θ as initialization. This allows us to obtain a modal point estimate of the task parameters, φ̂_j. The E-STEP in Subroutine 3 leverages the connection between gradient-based meta-learning and hierarchical Bayes (HB) BID13 to employ the task-specific parameters to compute the posterior probability of cluster assignment. Accordingly, based on the likelihood of the same training data points under the model parameterized by φ̂_j, we compute the cluster assignment probabilities as a softmax (at temperature τ) over the per-cluster training log-likelihoods, yielding the responsibilities γ_j. The cluster means θ are then updated by gradient descent on the validation loss in the M-STEP, given in Subroutine 3; this M-STEP is similar to the MAML algorithm in BID8. Note that, unlike other recent approaches to probabilistic clustering (e.g., BID1), we adhere to the episodic meta-learning setup for both training and testing, since only the task support set x^j_{1:N} is used to compute both the point estimate φ̂_j and the cluster responsibilities γ_j. See Algorithm 2 for the full algorithm, whose high-level structure is shared with the non-parametric variant of our method. Clustering task-specific parameters provides a way for a meta-learner to deal with task heterogeneity, since each cluster can be associated with a subset of the tasks that would benefit most from inductive transfer. (Subroutine 4: the E-STEP and M-STEP for an infinite mixture of hierarchical Bayesian models.)
We apply Algorithm 2 with L = 5 components to the 1-shot 5-way few-shot classification benchmark miniImageNet BID46, using the same data split, architecture, and hyperparameter values as in BID8. We additionally use τ = 1 for the softmax temperature and the same initialization as BID8 for our global prior G_0 (which reduces to a fixed initialization in the parametric case). While we do not expect the standard miniImageNet dataset to present much heterogeneity given the uniform sampling assumptions behind its design, we demonstrate in TAB1 that our parametric meta-learner can improve the generalization of gradient-based meta-learning on this task. This suggests some level of cluster differentiation even on this non-heterogeneous benchmark. The mixture of meta-learners developed in Section 3 addresses a drawback of meta-learning approaches such as MAML that consolidate task-general information into a single set of hyperparameters, but adds another dimension to model validation in the form of identifying the correct number of mixture components. While this may be resolved by cross-validation if the dataset is static and therefore the number of components may be fixed, adhering to a fixed number of components throughout training is not appropriate in the nonstationary regime, where the underlying task distribution changes and therefore different types of tasks are presented sequentially. In this regime, it is important to add mixture components sequentially to enable specialization of the component meta-learners to the different types of tasks that constitute the dataset. To address this, we derive a scalable stochastic estimation procedure to compute the expectation of task-to-cluster assignments (E-STEP) for a growing number of task clusters using an infinite or non-parametric mixture model BID30 called the Dirichlet process mixture model (DPMM).
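A sketch of the E-STEP just described: adapt each cluster's initialization to the task support set, then convert the per-cluster training log-likelihoods into soft assignments with a softmax at temperature τ. The use of a temperature-τ softmax matches the description above; the function names are illustrative.

```python
import numpy as np

def e_step(cluster_inits, adapt_fn, nll_fn, x_support, y_support, tau=1.0):
    """Return soft task-to-cluster responsibilities gamma_j (one entry per cluster)."""
    log_liks = []
    for theta_l in cluster_inits:
        phi_l = adapt_fn(theta_l, x_support, y_support)        # fast adaptation from this cluster's init
        log_liks.append(-nll_fn(phi_l, x_support, y_support))  # log p(x^j_{1:N} | phi_hat_j^l)
    log_liks = np.asarray(log_liks) / tau
    gamma = np.exp(log_liks - log_liks.max())                  # numerically stable softmax
    return gamma / gamma.sum()
```

The M-STEP then uses these responsibilities to weight the per-cluster MAML-style meta-gradient on the validation loss.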
This approach obviates the need for an a priori fixed number of components and enables the model to unboundedly adapt its complexity according to the observed data. The formulation of the DPMM that is most appropriate for incremental learning is the sequential-draws formulation that corresponds to an instantiation of the Chinese restaurant process (CRP) BID30. A CRP prior over z allows some probability to be assigned to a new mixture component while the task identities are inferred in a sequential manner, and has therefore been key to recent online and stochastic learning of the DPMM BID19. A draw from a CRP proceeds as follows: for a sequence of tasks (1, ..., J), the first task is assigned to the first cluster, and the jth subsequent task is then assigned to the ℓth cluster drawn with probability p(z_j = ℓ | z_{1:j−1}, ζ) = n_ℓ / (j − 1 + ζ) for an existing cluster ℓ ≤ L, and to a new cluster with probability ζ / (j − 1 + ζ), where L indicates the number of non-empty clusters, n_ℓ indicates the number of tasks already occupying cluster ℓ, and ζ is a fixed positive concentration parameter. The prior probability associated with a new mixture component is therefore p(z_j = L + 1 | z_{1:j−1}, ζ), and the joint log-likelihood of the infinite mixture model can be written in terms of the responsibilities γ_{jℓ} of each cluster ℓ for the task-specific parameters φ_j. We refer the reader to BID30 for more details on the likelihood function associated with the DPMM.
In a similar spirit to Section 3, we develop a stochastic EM procedure for this estimation problem. However, while the computation of the mode estimate of the task-specific parameters φ is mostly unchanged from the finite variant, the estimation of the cluster assignment variables z in the E-STEP requires revisiting the Gibbs conditional distributions due to the potential addition of a new cluster at each step. For a DPMM, the conditional distributions for z combine the CRP probabilities above with the data likelihood, with G_0 as the base measure over the components of the CRP, also known as the global prior. G is the prior over each cluster's parameters, initialized with a draw from a Gaussian centered at G_0 with a fixed variance. Using a mode estimate φ̂_j for the task-specific φ_j, the distribution over task-to-cluster assignments z_j can be expressed in terms of the likelihood of the support set under each cluster's adapted parameters, weighted by the CRP prior. We thus obtain a set of local objective functions amenable to gradient-based optimization in a similar fashion to the parametric M-STEP of Subroutine 3. We can also omit the prior term log p(φ | θ), as it arises as an implicit prior resulting from truncated gradient descent, as explained in Section 3 of BID13.
One marked difference between the objective of the M-STEP in Subroutine 3 and that in Subroutine 4 is the penalty term of log n_ℓ or log ζ, which incentivizes larger clusters in order to deter over-spawning. Accordingly, this approximate inference routine still preserves the preferential attachment ("rich get richer") dynamics of Bayesian nonparametrics BID32. Another element that is not immediate from the Gibbs conditionals is the use of a threshold on the cluster responsibilities to account for mini-batch noise when spawning a cluster based on a single batch. This threshold is necessary for our stochastic mode estimation of Algorithm 4, as it requires that a new cluster's responsibility exceed a certain value before the cluster is permanently added to the set of components (see the E-STEP in Subroutine 4 for more details). Intuitively, if a cluster has close to an equal share of responsibilities relative to existing clusters after accounting for the CRP penalty (log n_ℓ vs. log ζ), it is spawned.
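A tiny sketch of the sequential CRP draw described above; the function signature and the simulation loop are illustrative.

```python
import numpy as np

def crp_draw(counts, zeta, rng):
    """One sequential CRP draw: join existing cluster l with probability n_l / (n + zeta),
    or open a new cluster with probability zeta / (n + zeta)."""
    n = float(sum(counts))
    probs = np.array(list(counts) + [zeta], dtype=float) / (n + zeta)
    probs /= probs.sum()                      # guard against floating-point drift
    return rng.choice(len(probs), p=probs)    # index len(counts) means "spawn a new cluster"

# Example: simulate assignments for a stream of 100 tasks
rng = np.random.default_rng(0)
counts = [1]                                  # the first task always starts the first cluster
for _ in range(99):
    k = crp_draw(counts, zeta=1.0, rng=rng)
    if k == len(counts):
        counts.append(1)
    else:
        counts[k] += 1
print(counts)                                 # cluster sizes exhibit the "rich get richer" effect
```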
A sequential approximation for nonparametric mixtures with a similar threshold was proposed in BID19, where variational Bayes was used instead of point estimation in a DPMM. Overall, unlike traditional nonparametric algorithms, our model does not refine the cluster assignments of previously observed points by way of multiple expensive passes over the whole data set. Instead, we incrementally infer model parameters and add components during episodic training based on noisy but unbiased estimates of the log-likelihood gradients. To the best of our knowledge, no previous work has considered scalable stochastic point estimation of the parameters in a non-parametric mixture model. Furthermore, we are not aware of prior work applying non-parametric mixture models to high-dimensional function approximators such as modern neural networks with end-to-end learning.
A non-parametric mixture model should be able to detect and adapt to a changing distribution of tasks, without any external information to signal the start of a new task type (i.e., in a "task-agnostic" manner), by adjusting its capacity. On the other hand, current meta-learning methods coerce the inductive biases into a single set of hyperparameters and therefore inevitably saturate model parameters in the evolving-dataset regime. (Figure 5 caption: mixture component validation log-likelihoods (MSE losses) on an evolving dataset that generates even polynomial regression tasks for 700 iterations, then odd polynomial regression tasks until iteration 1400, at which point it generates sinusoidal regression tasks. We plot the validation negative log-likelihood of the data for each task. Note, for example, the change in loss for the second (red) cluster at 700 iterations when the odd polynomial tasks are introduced.) Here, we present our main experimental results using synthetic few-shot regression tasks to demonstrate that our non-parametric meta-learning algorithm is able to acquire new inductive biases as new tasks are introduced, without necessarily over-writing existing ones, by adding new components.
Experimental Setup. To demonstrate the unconstrained and adaptive capacity of our non-parametric meta-learning model, we consider alternating sinusoidal, even-degree polynomial, and odd-degree polynomial regression tasks with input x sampled uniformly from [−5, 5] during the meta-training procedure. For the sinusoidal regression, we consider a sine wave with phase sampled uniformly from [0, π] and with amplitude a_1 sampled in a similar fashion to the synthetic regression tasks in BID8. As for the polynomial regression, y is an even- or odd-degree polynomial in x with randomly sampled coefficients. In the setting of Figure 5, we presented even-degree polynomial regression tasks for 700 iterations, followed by odd-degree polynomial regression tasks until 1400 iterations, before switching to sinusoidal regression tasks. We use the mean-squared error loss function for each task as both the inner-loop and meta-level objectives.
Hyperparameter choices. Our architecture is a feedforward neural network with 2 hidden layers with ReLU nonlinearities, each of size 40. We use a meta-batch size of 25 tasks (both for the inner updates and the meta-gradient updates), as in the setup for 10-shot sinusoidal regression in BID8. Our non-parametric algorithm starts with a single cluster (L_0 = 1 in Algorithm 4). In these experiments, we set the threshold to 0.95 T / (L + 1), with L the number of non-empty clusters and T the size of the meta-batch.
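Pulling the pieces above together, the following is a hedged sketch of one mini-batch of the CRP-penalized E-STEP with the spawn threshold described in the preceding paragraphs; `log_lik_fn`, `candidate_init`, and the batch/threshold bookkeeping are illustrative names and simplifications, not the paper's exact Subroutine 4.

```python
import numpy as np

def dp_e_step(cluster_inits, counts, candidate_init, log_lik_fn, task_batch, zeta=1.0):
    """CRP-penalised E-STEP sketch for one mini-batch of tasks: score each task against the
    existing clusters (adapted log-likelihood + log n_l) and one candidate new cluster
    (adapted log-likelihood + log zeta); spawn the candidate only if its accumulated
    responsibility exceeds the threshold 0.95 * T / (L + 1) used in the text."""
    L, T = len(cluster_inits), len(task_batch)
    threshold = 0.95 * T / (L + 1)
    resp = np.zeros(L + 1)                          # last slot tracks the candidate cluster
    for task in task_batch:
        scores = [log_lik_fn(theta_l, task) + np.log(n_l)
                  for theta_l, n_l in zip(cluster_inits, counts)]
        scores.append(log_lik_fn(candidate_init, task) + np.log(zeta))
        scores = np.asarray(scores, dtype=float)
        gamma = np.exp(scores - scores.max())
        resp += gamma / gamma.sum()                 # soft task-to-cluster responsibilities
    if resp[-1] > threshold:                        # spawn a new mixture component
        cluster_inits = cluster_inits + [candidate_init]
        counts = counts + [resp[-1]]
    return cluster_inits, counts, resp
```

The candidate initialization would be drawn around the global prior G_0, and the M-STEP then updates each retained cluster's initialization with its responsibility-weighted meta-gradient.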
We also compute the cluster sizes using a moving window of size 20 (which is a dataset-dependent hyperparameter) to accommodate the order of magnitude in comparison to the small training losses used for cluster responsibility evaluation. This is necessary since we do not keep track of the exact prior cluster assignments for our randomly selected task mini-batches nor do we re-assess our assignments at each iteration. Otherwise, non-empty clusters can accumulate an extremely large number of assignments, making cluster spawning impossible after only a few meta-learning episodes. An additional practical reason for the stochastic setting is that it would be extremely expensive to store the assignments for an entire dataset in memory. Finally, preserving task assignments is potentially training iteration index Figure 6: (Top) An evolving dataset of quadratic, sinusoidal, and logistic regression tasks. The dataset generates quadratic regression tasks for 700 iterations, then switches to generating sinusoidal regression tasks until iteration 5700 at which point it generates logistic regression tasks. (Below) Note the change in responsibilities to the second (red) cluster at 700 iterations when the sinusoidal tasks are introduced. At 5700 iterations, the logistic regression tasks (third row) also cause a third (brown) cluster to be introduced. harmful due to stale parameter values since the task assignments in our framework are meant to be easily reconstructed on-the-fly using the E-STEP with updated parameters θ. Results. In Figure 5, we report the mean-squared error (MSE) validation loss for each task as the 3 different tasks are introduced sequentially and disjoint (only one task for each training phase) to the non-parametric meta-learner. Overall, our algorithm consistently outperforms MAML in terms of validation loss on the three tasks across the three training phases. More interestingly, our algorithm preserves its performance on old tasks when switching to a new training phase, whereas MAML suffers from a clear degradation. While it seems our algorithm does not perfectly preserve old inductive biases, we can conclude from a parallel experiment in Figure 6 that it can increase its capacity, when needed, to adjust for new training phases. This allows for better preservation of previously learnt knowledge which is key for continual learning. In this section, we turn our focus to task differentiation. We thus investigate the cluster responsibilities on validation data from each of the 3 tasks: quadratic regression, sinusoidal regression, and logistic regression on data from the same input range as specified in Section 5.1.In Figure 5, we notice a clear differentiation between the tasks as indicated by the cluster responsibilities. The responsibilities under the first cluster (in blue) decreases to almost zero for the 3rd task (sinusoid) while staying evenly split for the related odd and even polynomial regression tasks. Furthermore, a second cluster (in red) is spawned to account for the difference between odd and even degree polynomials. However, we also notice that the second cluster responsibilities are not zero for the first task, indicating similarities between even and odd polynomial regressions. The same behavior can be seen for the third cluster on the third task. Note that the sinusoidal regression task is the more difficult task which explains the different order of magnitude of the losses and the motivation for a longer period of training. 
Also note that regression losses are unbounded and thus pose difficulties to any optimization-based continual learner; accordingly, most continual learning datasets such as Moving MNIST consist of classification tasks that make use of a bounded cross-entropy error. Multi-task learning. Rosenstein et al. BID34 demonstrated that negative transfer can worsen generalization performance. This has motivated much work on HB in transfer learning and domain adaptation (e.g., BID17 BID49 BID9 BID6 BID47 . Closest to our proposed approach is early work on hierarchical Bayesian multi-task learning with neural networks BID15 BID0 BID43 . These approaches are different from ours in that they place a prior, which could be nonparametric as in and BID43, only on the output layer. Furthemore, none of these approaches were applied to the episodic training setting of meta-learning. BID15 and BID43 also propose training a mixture model over the output layer weights using MAP inference. However, this approach does not scale well to all the layers in a network and performing full passes on the dataset for inference of the full set of weights can become computationally intractable. Continual learning. Techniques developed specifically to address the catastrophic forgetting problem in continual learning, such as elastic weight consolidation (EWC) BID16, synaptic intelligence (SI) BID50, and variational continual learning (VCL) BID27 require access to an explicit delineation between tasks that acts as a catalyst to grow model size, which we refer to as "task-aware." In contrast, our nonparametric algorithm tackles the "task-agnostic" setting of continual learning, where the meta-learner does not receive information about task changes but instead learns to recognize a shift in the task distribution and adapt accordingly. Clustering. Incremental or stochastic clustering has been considered in the EM setting in BID26, and similarly for minibatch K-means BID41. Online learning of nonparametric mixture models is also a way of perform clustering in the mini-batch setting using sequential variational inference BID19.A key distinction between our work and these approaches is that we leverage the connection between empirical Bayes in this model and gradient-based metalearning BID13 to use the MAML BID8 objective as a log posterior surrogate. This allows our algorithm to scale and easily integrate with minibatch stochastic gradient-based meta-learning instead of alternating multiple backpropagation steps with multiple inference passes over the full dataset BID43 BID1. Our approach is distinct from recent work on gradient-based clustering BID14 since we adhere to the more challenging setting of episodic meta-learning for both training and testing. This can be a challenging setting for a clustering algorithm, as the assignments need to be computed using K = 1 examples per class as is the case in 1-shot learning. Meta-learning is a source of learned inductive bias. Occasionally, the inductive bias is harmful because the experience gained from solving one task does not transfer well to another. On the other hand, if tasks are closely related, they can benefit from a greater amount of inductive transfer. Here, we present an approach that allows a gradient-based meta-learner to explicitly modulate the amount of transfer between tasks, as well as to adapt its parameter dimensionality when the underlying task distribution evolves. 
We formulate this as probabilistic inference in a mixture model that defines a clustering of task-specific parameters. To ensure scalability, we make use of the recent connection between gradient-based meta-learning and hierarchical Bayes BID13 to perform approximate maximum a posteriori (MAP) inference in both a finite and an infinite mixture model. This approach admits non-conjugate likelihoods parameterised with a black-box function approximator such as a deep neural network, and therefore learns to identify underlying genres of tasks using the standard gradient descent learning rule. We demonstrate that this approach allows the model complexity to grow along with the evolving complexity of the observed tasks in both a few-shot regression and a few-shot classification problem. BID31 43.44 ± 0.77 60.60 ± 0.71 SNAIL BID20 b 45.1 ± --55.2 ± --prototypical networks BID42 c 46.61 ± 0.78 65.77 ± 0.70 MAML BID8 48.70 ± 1.84 63.11 ± 0.92 LLAMA BID13 49.40 BID45 49.82 ± 0.78 63.70 ± 0.67 KNN + GNN embedding BID10 49.44 ± 0.28 64.02 ± 0.51 GNN BID10 50.33 ± 0.36 66.41 ± 0.63 fwCNN (Hebb) BID22 50.21 ± 0.37 64.75 ± 0.49 fwResNet (Hebb) BID22 56.84 ± 0.52 71.00 ± 0.34 SNAIL BID20 55. BID23 57.10 ± 0.70 70.04 ± 0.63 MAML BID8 Figure 7: An evolving dataset of miniImageNet few-shot classification tasks where for the first 20k iterations we train on the standard dataset, then switch to a "pencil" effect set of tasks for 10k iterations before finally switching to a "blurred" effect set of tasks until 40k. Responsibilities γ for each cluster are plotted over time. Note the change in responsibilities as the dataset changes at iterations 20k and 30k. We apply Algorithm 4 to an evolving variant of the miniImageNet few-shot classification dataset while using the same standard architecture of BID46 BID8. In this variant, different artistic filters are applied to the images in the few-shot classification tasks over the metatraining procedure to simulate a changing distribution of classification tasks. More specifically, we first train on the standard mini-imagenet tasks for 20000 iterations then introduce "pencil" effect tasks for 10000 iterations, before finally switching to a "radial blur" effect for another 10000 iterations. Cluster responsibilities during training can be found in Figure 7. To compare more closely to the single-cluster baseline, we restrict the number of clusters to 1 for the first phase of training (the standard mini-imagenet tasks). However, the moment we start introducing new datasets, this restriction is lifted. This allows a better evaluation of the transfer from the first cluster to the new tasks as more datasets are introduced in an online setting. We notice some differentiation between the tasks that is not as pronounced as what we observed on the toy data. This is potentially due to the fact that all the tasks are derived from the same core set of images. Accordingly, inductive biases learned on the unfiltered dataset for the first 20000 iterations can transfer to the filtered datasets more easily than the different regression tasks we experimented with.
We use the connection between gradient-based meta-learning and hierarchical Bayes to learn a mixture of meta-learners that is appropriate for a heterogeneous and evolving task distribution.
1,702
scitldr
We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote. Unlike previously proposed routing algorithms, the parent's ability to reconstruct the child is not explicitly taken into account to update the routing probabilities. This simplifies the routing procedure and improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100. The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing. Besides outperforming existing capsule networks, our model performs at-par with a powerful CNN (ResNet-18), using less than 25% of the parameters. On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. We believe that our work raises the possibility of applying capsule networks to complex real-world tasks. Capsule Networks (CapsNets) represent visual features using groups of neurons. Each group (called a "capsule") encodes a feature and represents one visual entity. Grouping all the information about one entity into one computational unit makes it easy to incorporate priors such as "a part can belong to only one whole" by routing the entire part capsule to its parent whole capsule. Routing is mutually exclusive among parents, which ensures that one part cannot belong to multiple parents. Therefore, capsule routing has the potential to produce an interpretable hierarchical parsing of a visual scene. Such a structure is hard to impose in a typical convolutional neural network (CNN). This hierarchical relationship modeling has spurred a lot of interest in designing capsules and their routing algorithms (; ; ; ; ; ; . In order to do routing, each lower-level capsule votes for the state of each higher-level capsule. The higher-level (parent) capsule aggregates the votes, updates its state, and uses the updated state to explain each lower-level capsule. The ones that are well-explained end up routing more towards that parent. This process is repeated, with the vote aggregation step taking into account the extent to which a part is routed to that parent. Therefore, the states of the hidden units and the routing probabilities are inferred in an iterative way, analogous to the M-step and E-step, respectively, of an Expectation-Maximization (EM) algorithm. Dynamic Routing and EMrouting can both be seen as variants of this scheme that share the basic iterative structure but differ in terms of details, such as their capsule design, how the votes are aggregated, and whether a non-linearity is used. We introduce a novel routing algorithm, which we called Inverted Dot-Product Attention Routing. In our method, the routing procedure resembles an inverted attention mechanism, where dot products are used to measure agreement. Specifically, the higher-level (parent) units compete for the attention of the lower-level (child) units, instead of the other way around, which is commonly used in attention models. Hence, the routing probability directly depends on the agreement between the parent's pose (from the previous iteration step) and the child's vote for the parent's pose (in the current iteration step). 
We also propose two modifications for our routing procedure - using Layer Normalization as normalization, and doing inference of the latent capsule states and routing probabilities jointly across multiple capsule layers (instead of doing it layer-wise). These modifications help scale up the model to more challenging datasets. Our model achieves comparable performance as the state-of-the-art convolutional neural networks (CNNs), but with much fewer parameters, on CIFAR-10 (95.14% test accuracy) and CIFAR-100 (78.02% test accuracy). We also introduce a challenging task to recognize single and multiple overlapping objects simultaneously. To be more precise, we construct the DiverseMultiMNIST dataset that contains both single-digit and overlapping-digits images. With the same number of layers and the same number of neurons per layer, the proposed CapsNet has better convergence than a baseline CNN. Overall, we argue that with the proposed routing mechanism, it is no longer impractical to apply CapsNets on real-world tasks. We will release the source code to reproduce the experiments. An example of our proposed architecture is shown in Figure 1. The backbone is a standard feedforward convolutional neural network. The features extracted from this network are fed through another convolutional layer. At each spatial location, groups of 16 channels are made to create capsules (we assume a 16-dimensional pose in a capsule). LayerNorm is then applied across the 16 channels to obtain the primary capsules. This is followed by two convolutional capsule layers, and then by two fully-connected capsule layers. In the last capsule layer, each capsule corresponds to a class. These capsules are then used to compute logits that feed into a softmax to computed the classification probabilities. Inference in this network requires a feed-forward pass up to the primary capsules. After this, our proposed routing mechanism (discussed in the next section) takes over. In prior work, each capsule has a pose and some way of representing an activation probability. In Dynamic Routing CapsNets , the pose is represented by a vector and the activation probability is implicitly represented by the norm of the pose. In EM Routing CapsNets , the pose is represented by a matrix and the activation probability is determined by the EM algorithm. In our work, we consider a matrix-structured pose in a capsule. We denote the capsules in layer L as P L and the i-th capsule in layer L as p form and will be reshaped to R d L when representing it as a matrix, where d L is the number of hidden units grouped together to make capsules in layer L. The activation probability is not explicitly represented. By doing this, we are essentially asking the network to represent the absence of a capsule by some special value of its pose. The proposed routing process consists of two steps. The first step computes the agreement between lower-level capsules and higher-level capsules. The second step updates the pose of the higher-level capsules. Procedure 1 Inverted Dot-product Attention Routing algorithm returns updated poses of the capsules in layer L + 1 given poses in layer L and L + 1 and weights between layer L and L + 1. for all capsule i in layer L and capsule j in layer for all capsule i in layer L and capsule j in layer for all capsule j in layer (L + 1): return P Step 1: Computing Agreement: where the matrix W The pose p L+1 j is obtained from the previous iteration of this procedure, and will be set to 0 initially. 
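The two quantities of Step 1 can be written compactly as tensor contractions; the sketch below (PyTorch, fully-connected capsule layers, illustrative shapes) is ours and is not the reference implementation.

```python
import torch

def compute_votes(poses_in, weights):
    """Votes from every child capsule for every parent capsule.

    poses_in: [N_in, d_in]                  child poses p_i (flattened)
    weights:  [N_in, N_out, d_in, d_out]    per-pair transformation matrices W_ij
    returns:  [N_in, N_out, d_out]          votes v_ij
    """
    return torch.einsum('id,ijde->ije', poses_in, weights)

def compute_agreement(votes, poses_out):
    """Dot-product agreement a_ij between each vote and the parent pose from the
    previous routing iteration (parent poses are initialised to zero)."""
    # votes: [N_in, N_out, d_out], poses_out: [N_out, d_out] -> [N_in, N_out]
    return torch.einsum('ije,je->ij', votes, poses_out)
```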
Step 2 where r L ij is an inverted attention score representing how higher-level capsules compete for attention of lower-level capsules. Using the routing probabilities, we update the pose p L+1 j for capsule j in layer L + 1 from all capsules in layer L: We adopt Layer Normalization as the normalization, which we empirically find it to be able to improve the convergence for routing. The routing algorithm is summarized in Procedure 1 and Figure 2. To explain how inference and learning are performed, we use Figure 1 as an example. Note that the choice of the backbone, the number of capsules layers, the number of capsules per layer, the design of the classifier may vary for different sets of experiments. We leave the discussions of configurations in Sections 5 and 6, and in the Appendix. For ease of exposition, we decompose a CapsNet into pre-capsule, capsule and post-capsule layers. The goal is to obtain a backbone feature F from the input image I. The backbone model can be either a single convolutional layer or ResNet computational blocks . for L in layers 2 to N: P L ← 0s non-primary capsules /* Capsules Layers (1st Iteration): sequential routing */ for L in layers 1 to (N − 1) do 6: non-primary capsules /* Capsules Layers (2nd to tth Iteration): concurrent routing */ 7: for L in layers 2 to N: P L ←P returnŷ Capsule Layers: The primary capsules P 1 are computed by applying a convolution layer and Layer Normalization to the backbone feature F. The non-primary capsules layers P 2:N are initialized to be zeros 1. For the first iteration, we perform one step of routing sequentially in each capsule layer. In other words, the primary capsules are used to update their parent convolutional capsules, which are then used to update the next higher-level capsule layer, and so on. After doing this first pass, the rest of the routing iterations are performed concurrently. Specifically, all capsule layers look at their preceding lower-level capsule layer and perform one step of routing simultaneously. This procedure is an example of a parallel-in-time inference method. We call it "concurrent routing" as it concurrently performs routing between capsules layers per iteration, leading to better parallelism. Figure 3 illustrates this procedure from routing iteration 2 to t. It is worth noting that, our proposed variant of CapsNet is a weight-tied concurrent routing architecture with Layer Normalization, which empirically showed could converge to fixed points. Previous CapsNets used sequential layer-wise iterative routing between the capsules layers. For example, the model first performs routing between layer L − 1 and layer L for a few iterations. Next, the model performs routing between layer L and L + 1 for a few iterations. When unrolled, this sequential iterative routing defines a very deep computational graph with a single path going from the inputs to the outputs. This deep graph could lead to a vanishing gradients problem and limit the depth of a CapsNet that can be trained well, especially if any squashing non-linearities are present. With concurrent routing, the training can be made more stable, since each iteration has a more cumulative effect. The goal is to obtain the predicted class logitsŷ from the last capsule layer (the class capsules) P N. In our CapsNet, we use a linear classifier for class i in class capsules: This classifier is shared across all the class capsules. We update the parameters θ, W 1:N −1 by stochastic gradient descent. 
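Putting both steps together, one routing iteration and the fixed-iteration loop could look like the following sketch; the zero initialization of parent poses and the use of LayerNorm follow the description above, while the names, shapes, and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def routing_iteration(votes, poses_out):
    """One step of inverted dot-product attention routing for a single capsule layer.

    votes:     [N_in, N_out, d_out]  child votes v_ij
    poses_out: [N_out, d_out]        parent poses from the previous iteration
    """
    agreement = torch.einsum('ije,je->ij', votes, poses_out)   # a_ij
    routing = F.softmax(agreement, dim=1)     # normalise over parents: they compete per child
    pooled = torch.einsum('ij,ije->je', routing, votes)        # routed sum of votes per parent
    return F.layer_norm(pooled, pooled.shape[-1:])             # LayerNorm as the normalisation

def route(votes, num_iters=3):
    """Run routing for a fixed number of iterations, starting from zero parent poses."""
    n_out, d_out = votes.shape[1], votes.shape[2]
    poses_out = torch.zeros(n_out, d_out)
    for _ in range(num_iters):
        poses_out = routing_iteration(votes, poses_out)
    return poses_out

# Example with random tensors (shapes are illustrative only).
votes = torch.randn(32, 10, 16)   # 32 child capsules voting for 10 parents with 16-dim poses
parents = route(votes)
print(parents.shape)              # torch.Size([10, 16])
```

Note that with zero-initialized parent poses, the first iteration routes uniformly, which is consistent with the initialization described above.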
For multiclass classification, we use multiclass cross-entropy loss. For multilabel classification, we use binary cross-entropy loss. We also tried Margin loss and Spread loss which are introduced by prior work . However, these losses do not give us better performance against cross-entropy and binary cross-entropy losses. The concurrent routing is a parallel-in-time routing procedure for all capsules layers. CIFAR-10 and CIFAR-100 datasets consist of small 32 × 32 real-world color images with 50, 000 for training and 10, 000 for evaluation. CIFAR-10 has 10 classes, and CIFAR-100 has 100 classes. We choose these natural image datasets to demonstrate our method since they correspond to a more complex data distribution than digit images. Comparisons with other CapsNets and CNNs: In Table 1, we report the test accuracy obtained by our model, along with other CapsNets and CNNs. Two prior CapsNets are chosen: Dynamic Routing CapsNets and EM Routing CapsNets . For each CapsNet, we apply two backbone feature models: simple convolution followed by ReLU nonlinear activation and a ResNet backbone. For CNNs, we consider a baseline CNN with 3 convolutional layers followed by 1 fully-connected classifier layer. First, we compare previous routing approaches against ours. In a general trend, the proposed CapsNets perform better than the Dynamic Routing CapsNets, and the Dynamic Routing CapsNets perform better than EM Routing CapsNets. The performance differs more on CIFAR-100 than on CIFAR-10. For example, with simple convolutional backbone, EM Routing CapsNet can only achieve 37.73% test accuracy while ours can achieve 57.32%. Additionally, for all CapsNets, we see improved performance when replacing a single convolutional backbone with ResNet backbone. This is not surprising since ResNet structure has better generalizability than a single convolutional layer. Second, we discuss the performance difference between CNNs and CapsNets. We see that, with a simple backbone (a single convolutional layer), it is hard for CapsNets to reach the same performance as CNNs. For instance, our routing approach can only achieve 57.32% test accuracy on CIFAR-100 while the baseline CNN achieves 62.30%. However, with a SOTA backbone structure (ResNet backbone), the proposed routing approach can reach competitive performance (95.14% on CIFAR-10) as compared to the SOTA CNN model (ResNet-18 with 95.11% on CIFAR-10). Convergence Analysis: In Figure 4, top row, we analyze the convergence for CapsNets with respect to the number of routing iterations. The optimization hyperparameters are chosen optimally for each routing mechanism. For Dynamic Routing CapsNets , we observe a mild performance drop when the number of iterations increases. For EM Routing CapsNets , the best-performed number of iterations is 2. Increasing or decreasing this number severely hurts the performance. For our proposed routing mechanism, we find a positive correlation between performance and number of routing iterations. The performance variance is also the smallest among the three routing mechanisms. This suggests our approach has better optimization and stable inference. However, selecting a larger iteration number may not be ideal since memory usage and inference time will also increase (shown in the bottom right in Figure 4). Note that, we observe sharp performance jitters during training when the model has not converged (especially when the number of iterations is high). This phenomenon is due to applying LayerNorm on a low-dimensional vector. 
The jittering is reduced when we increase the pose dimension in capsules. Ablation Study: Furthermore, we inspect our routing approach with the following ablations: 1) Inverted Dot-Product Attention-A: without Layer Normalization; 2) Inverted Dot-Product Attention-B: replacing concurrent to sequential iterative routing; and 3) Inverted Dot-Product Attention-C: Figure 4 bottom row. When removing Layer Normalization, performance dramatically drops from our routing mechanism. Notably, the prediction becomes uniform when the iteration number increases to 5. This implies that the normalization step is crucial to the stability of our method. When replacing concurrent with sequential iterative routing, the positive correlation between performance and iteration number no longer exists. This fact happens in the Dynamic Routing CapsNet as well, which also uses sequential iterative routing. When adding activations to our capsule design, we obtain a performance deterioration. Typically, squashing activations such as sigmoids make it harder for gradients to flow, which might explain this. Discovering the best strategy to incorporate activations in capsule networks is an interesting direction for future work. The goal in this section is to compare CapsNets and CNNs when they have the same number of layers and the same number of neurons per layer. Specifically, we would like to examine the difference of the representation power between the routing mechanism (in CapsNets) and the pooling operation (in CNNs). A challenging setting is considered in which objects may be overlapping with each other, and there may be a diverse number of objects in the image. To this end, we construct the DiverseMultiMNIST dataset which is extended from MNIST , and it contains both single-digit and two overlapping digit images. The task will be multilabel classification, where the prediction is said to be correct if and only if the recognized digits match all the digits in the image. We plot the convergence curve when the model is trained on 21M images from DiverseMultiMNIST. Please see Appendix B.2 for more details on the dataset and Appendix B.1 for detailed model configurations. The are reported in Figure 5. First, we compare our routing method against the Dynamic routing one. We observe an improved performance from the CapsNet * to the CapsNet (83.39% to 85.74% with vector-structured poses). The suggests a better viewpoint generalization for our routing mechanism. Second, we compare baseline CNN against our CapsNet. From the table, we see that CapsNet has better test accuracy compared to CNN. For example, the CapsNet with vector-structured poses reaches 85.74% test accuracy, and the baseline CNN reaches 79.81% test accuracy. In our CNN implementation, we use average pooling from the last convolutional layer to its next fully-connected layer. We can see that having a routing mechanism works better than pooling. However, one may argue that the pooling operations requires no extra parameter but routing mechanism does, and hence it may not be fair to compare their performance. To address this issue, in the baseline CNN, we replace the pooling operation with a fully-connected operation. To be more precise, instead of using average pooling, we learn the entire transformation matrix from the last convolutional layer to its next fully-connected layer. This procedure can be regarded as considering pooling with learnable parameters. 
After doing this, the number of parameters in CNN increases to 42.49M, and the corresponding test accuracy is 84.84%, which is still lower than 85.74% from the CapsNet. We conclude that, when recognizing overlapping and diverse number of objects, the routing mechanism has better representation power against the pooling operation. Last, we compare CapsNet with different pose structures. The CapsNet with vector-structured poses works better than the CapsNet with matrix-structured poses (80.59% vs 85.74%). However, the former requires more parameters, more memory usage, and more inference time. If we increase the number of parameters in the matrix-pose CapsNet to 42M, its test accuracy rises to 91.17%. Nevertheless, the model now requires more memory usage and inference time as compared to using vector-structured poses. We conclude that more performance can be extracted from vector-structured poses but at the cost of high memory usage and inference time. The idea of grouping a set of neurons into a capsule was first proposed in Transforming AutoEncoders . The capsule represented the multi-scale recognized fragments of the input images. Given the transformation matrix, Transforming Auto-Encoders learned to discover capsules' instantiation parameters from an affine-transformed image pair. extended this idea to learn part-whole relationships in images systematically. cast the routing mechanism as fitting a mixture of Gaussians. The model demonstrated an impressive ability for recognizing objects from novel viewpoints. Recently, Stacked Capsule AutoEncoders proposed to segment and compose the image fragments without any supervision. The work achieved SOTA on unsupervised classification. However, despite showing promising applications by leveraging inherent structures in images, the current literature on capsule networks has only been applied on datasets of limited complexity. Our proposed new routing mechanism instead attempts to apply capsule networks to more complex data. Our model also relates to Transformers and Set Transformers , where dot-product attention is also used. In the language of capsules, a Set Transformer can be seen as a model in which a higher-level unit can choose to pay attention to K lower-level units (using K attention heads). Our model inverts the attention direction (lower-level units "attend" to parents), enforces exclusivity among routing to parents and does not impose any limits on how many lower-level units can be routed to any parent. Therefore, it combines the ease and parallelism of dot-product routing derived from a Transformer, with the interpretability of building a hierarchical parsing of a scene derived from capsule networks. There are other works presenting different routing mechanisms for capsules. formulated the Dynamic routing as an optimization problem consisting of a clustering loss and a KL regularization term. generalized the routing method within the framework of weighted kernel density estimation. approximated the routing process with two branches and minimized the distributions between capsules layers by an optimal transport divergence constraint. replaced standard convolutional structures before capsules layers by densely connected convolutions. It is worth noting that this work was the first to combine SOTA CNN backbones with capsules layers. proposed DeepCaps by stacking 10+ capsules layers. It achieved 92.74% test accuracy on CIFAR-10, which was the previous best for capsule networks. 
Instead of looking for agreement between capsules layers, proposed to learn deterministic attention scores only from lower-level capsules. Nevertheless, without agreement, their best-performed model achieved only 88.61% test accuracy on CIFAR-10. In contrast to these prior work, we present a combination of inverted dotproduct attention routing, layer normalization, and concurrent routing. To the best of our knowledge, we are the first to show that capsule networks can achieve comparable performance against SOTA CNNs. In particular, we achieve 95.14% test accuracy for CIFAR-10 and 78.02% for CIFAR-100. In this work, we propose a novel Inverted Dot-Product Attention Routing algorithm for Capsule networks. Our method directly determines the routing probability by the agreements between parent and child capsules. Routing algorithms from prior work require child capsules to be explained by parent capsules. By removing this constraint, we are able to achieve competitive performance against SOTA CNN architectures on CIFAR-10 and CIFAR-100 with the use of a low number of parameters. We believe that it is no longer impractical to apply capsule networks to datasets with complex data distribution. Two future directions can be extended from this paper: • In the experiments, we show how capsules layers can be combined with SOTA CNN backbones. The optimal combinations between SOTA CNN structures and capsules layers may be the key to scale up to a much larger dataset such as ImageNet. • The proposed concurrent routing is as a parallel-in-time and weight-tied inference process. The strong connection with Deep Equilibrium Models can potentially lead us to infinite-iteration routing. Suofei Zhang, Quan Zhou, and Xiaofu Wu. Fast dynamic routing based on weighted kernel density estimation. In International Symposium on Artificial Intelligence and Robotics, pp. 301-309.. A MODEL CONFIGURATIONS FOR CIFAR-10/CIFAR-100 The configuration choices of Dynamic Routing CapsNets and EM Routing CapsNets are followed by prior work . We empirically find their configurations perform the best for their routing mechanisms (instead of applying our network configurations to their routing mechanisms). The optimizers are chosen to reach the best performance for all models. We list the model specifications in Table 2, 3, 4, 5, 6, 7, 8, and 9. We only show the specifications for CapsNets with a simple convolutional backbone. When considering a ResNet backbone, two modifications are performed. First, we replace the simple feature backbone with ResNet feature backbone. Then, the input dimension of the weights after the backbone is set as 128. A ResNet backbone contains a 3 × 3 convolutional layer (output 64-dim.), three 64-dim. residual building block with stride 1, and four 128-dim. residual building block with stride 2. The ResNet backbone returns a 16 × 16 × 128 tensor. For the optimizers, we use stochastic gradient descent with learning rate 0.1 for our proposed method, baseline CNN, and ResNet-18 . We use Adam with learning rate 0.001 for Dynamic Routing CapsNets and Adam with learning rate 0.01 for EM Routing CapsNets. We decrease the learning rate by 10 times when the model trained on 150 epochs and 250 epochs, and there are 350 epochs in total. We consider the same data augmentation for all networks. During training, we first pad four zerovalue pixels to each image and randomly crop the image to the size 32 × 32. Then, we horizontally flip the image with probability 0.5. During evaluation, we do not perform data augmentation. 
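A sketch of this training-time augmentation pipeline, assuming torchvision, is shown below; evaluation applies no augmentation.

```python
import torchvision.transforms as T

# Training-time augmentation described above.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),      # pad 4 zero-valued pixels, then crop back to 32x32
    T.RandomHorizontalFlip(p=0.5),    # flip with probability 0.5
    T.ToTensor(),
])
eval_transform = T.ToTensor()         # no augmentation at evaluation time
```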
All the model is trained on a 8-GPU machine with batch size 128. To fairly compare CNNs and CapsNets, we fix the number of layers and the number of neurons per layer in the models. These models consider the design: 36x36 image → 18x18x1024 neurons → 8x8x1024 neurons → 6x6x1024 neurons → 640 neurons → 10 class logits. The configurations are presented in Table 10, 11, and 12. We also fix the optimizers across all the models. We use stochastic gradient descent with learning rate 0.1 and decay the learning rate by 10 times when the models trained on 150 steps and 250 steps. One step corresponds to 60, 000 training samples, and we train the models with a total of 350 steps. Diverse MultiMNIST contains both single-digit and overlapping-digit images. We generate images on the fly and plot the test accuracy for training models over 21M (21M = 350(steps) × 60, 000(images)) generated images. We also generate the test images, and for each evaluation step, there are 10, 000 test images. Note that we make sure the training and the test images are from the disjoint set. In the following, we shall present how we generate the images. We set the probability of generating a single-digit image as 1 6 and the probability of generating an overlapping-digit image as 5 6. The single-digit image in DiverseMultiMNIST training/ test set is generated by shifting digits in MNIST training/ test set. Each digit is shifted up to 4 pixels in each direction and in 36 × 36 image. , we generate overlapping-digit images in DiverseMultiMNIST training/ test set by overlaying two digits from the same training/ test set of MNIST. Two digits are selected from different classes. Before overlaying the digits, we shift the digits in the same way which we shift for the digit in a single-digit image. After overlapping, the generated image has size 36 × 36. We consider no data augmentation for both training and evaluation. All the model is trained on a 8-GPU machine with batch size 128. Output Size input dim=3, output dim=256, 9x9 conv, stride=1, padding=0 24x24x256 ReLU input dim=256, output dim=256, 9x9 conv, stride=2, padding=0 8x8x256 Capsules reshape 8x8x32x8 Squash Linear Dynamic Routing to 100 16-dim. capsules 100x16 Squash
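The DiverseMultiMNIST sampling procedure described in Appendix B.2 above can be sketched as follows; how overlapping pixels are combined is not specified, so the elementwise maximum used here is an assumption, as are the function names.

```python
import numpy as np

def place_shifted(digit28, rng):
    """Place a 28x28 MNIST digit on a 36x36 canvas, shifted up to 4 pixels each way."""
    canvas = np.zeros((36, 36), dtype=np.float32)
    dy, dx = rng.integers(0, 9, size=2)           # offsets 0..8 give shifts of up to +-4 pixels
    canvas[dy:dy + 28, dx:dx + 28] = digit28
    return canvas

def sample_diverse_multimnist(images, labels, rng=None):
    """Draw one example: a single digit with probability 1/6, or two overlapping
    digits from different classes with probability 5/6."""
    rng = np.random.default_rng() if rng is None else rng
    i = rng.integers(len(images))
    canvas = place_shifted(images[i], rng)
    targets = {int(labels[i])}
    if rng.random() < 5 / 6:
        j = rng.integers(len(images))
        while labels[j] == labels[i]:             # the two digits must come from different classes
            j = rng.integers(len(images))
        canvas = np.maximum(canvas, place_shifted(images[j], rng))   # overlay the second digit
        targets.add(int(labels[j]))
    return canvas, sorted(targets)
```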
We present a new routing method for capsule networks that performs at par with ResNet-18 on CIFAR-10 and CIFAR-100.
1,703
scitldr
We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing. Such data can be used to train automated dialogue agents performing customer care tasks for the enterprises or organizations. In particular, the framework takes the documents as input and generates the tasks for obtaining the annotations for simulating dialog flows. The dialog flows are used to guide the collection of utterances produced by crowd workers. The outcomes include dialogue data grounded in the given documents, as well as various types of annotations that help ensure the quality of the data and the flexibility to (re)composite dialogues. There has been growing interest in using automated dialogue agents to assist customers through online chat. However, despite recent effort in training automated agents with human-human dialogues, it often faces the bottleneck that a large number of chat logs or simulated dialogues with various scenarios are required. Meanwhile, enterprises and organizations often own a large number of business documents that could address customers' requests, such as technical documentation, policy guidance and Q&A webpages. However, customers would still prefer having interactive conversations with agents instead of searching and reading through lengthy documents. Taken together, a promising solution is to build machine assisted agents that could perform task-oriented dialogues that are based on the content of the business documents. In task-oriented dialogues for customer care, a recurrent theme is a diagnostic process -identifying the contextual conditions that apply to the customer to retrieve the most relevant solutions. Meanwhile, business documents often contain similar information, with prior conditions, in for example if-clauses or subtitles, followed by corresponding solutions. Therefore, these documents can be used to guide diagnostic dialogues-we call them document-grounded dialogues. For example, the sample business document in Figure 1 contains information for an agent to perform the dialogue on the right, where P-n (S-n) denotes text span n labeled a precondition (solution) and "O-D" denotes "out of domain". The preconditions are expressed in various ways such as subtitles or if-clauses, followed by corresponding solution if that precondition applies. In this work, we hypothesize that an essential capability for a dialogue agent to perform goal-oriented information retrieval tasks should be to recognize the preconditions and their associated solutions covered by the given documents and then use them to carry out the diagnostic interactions. Towards this goal, we introduce DOC2DIAL, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing. It aims to minimize the effort for handcrafting dialog flows that is specific to the document but still introduce dynamic dialog scenes. It also provides quality control over the data collection process. We guide our investigation with the following principles: 1) We aim to identify the document content that provides solution(s) to a user's request as well as describes the prerequisites required. 2) The generated dialog flows should be specific to the given document without relying on heavily supervised or handcrafted work. 3) The generated data tasks should be easy to scale -feasible to crowdsourcing platforms and could be updated with respect to changes in the document. 
Thus, we propose a pipeline of three interconnected tasks for dialogue composition based on business documents: labeling text spans as preconditions or solutions in a given document (TextAnno); identifying the relation(s) between these preconditions or solutions (RelAnno); and simulating dialog flows based on the linked preconditions/solutions and applying them to guide the collection of human-generated utterances (DialAnno). For the dialogue collection, we could deploy it via both synchronized and asynchronized processes. An asynchronized process allows crowd workers to work on the production and evaluation of individual turns without the constraints of timing or having a dialog partner. The outcome includes the document-grounded dialogues as well as various types of implicit and explicit annotations that help ensure the quality of the data. Such data sets can be used to develop various types of dialogue agent technologies in the context of customer care. Our primary contributions can be summarized as follows: we introduce DOC2DIAL, an end-to-end framework to generate task-oriented conversational data from business documents; we propose a novel pipeline approach of three interconnected tasks that minimizes the manual effort of crafting dialog flows and enables comprehensive cross-task validation to ensure the quality of dialogue data; and the system supports both synchronized and asynchronized dialogue collection processes. We demonstrate that such a setting allows flexible dialogue composition by utterances, which are guided by the given document content and its annotations. Our work is mostly motivated by the success of research in training automatic agents with documents, as evidenced by the wide usage of data sets for machine comprehension tasks. Two recent data tasks, CoQA and QuAC, aim to support conversational QA, which involves understanding contexts and interconnected questions in multi-turn conversations. These data sets were created by pairing crowd workers to chat about a passage in the form of multi-turn questions and answers. Although getting closer to enabling automatic dialogue agents with documents, these data tasks do not tackle the understanding of preconditions, which is common in real-world task-oriented dialogues. In a recent paper, the ShARC data task was proposed to address under-specified questions in conversational QA by asking follow-up questions, which can be created by crowd workers based on supporting rules presented in a document. While it shares similar goals with our work on supporting dialogues that derive answers by understanding preconditions, it is a much simpler task focusing only on asking boolean follow-up questions. In contrast, our DOC2DIAL framework enables the end-to-end pipeline, from extracting the complex preconditions and solutions that are present in business documents, to generating complete task-oriented dialogues that fit different scenarios. Figure 2 presents the overview of our framework. Next, we first introduce the main tasks, and then describe the core components for processing the documents to generate the tasks. Main Tasks There are three tasks in our pipeline approach: TextAnno, as in Figure 3, is for labeling individual text spans as precondition, solution or otherwise in a selected document. The resulting annotations can be used to automatically determine whether a document is feasible for creating dynamic dialogues and thus enters the next steps in the pipeline.
RelAnno, as seen in Figure 4, is to identify the relations between precondition pairs labeled in TextAnno, such as the hierarchical relations is-sibling and is-parent-of, as in R-1 and R-2 in Figure 1. The task can also include text pairs to be labeled as is-precondition-of or is-solution-of for cross-validating the annotations from TextAnno (e.g., R-3 in Figure 1). DialAnno, as shown in Figure 5, collects the dialog utterances based on the pre-generated dialog flow. Each turn in the flow corresponds to a dialog scene, which is a composite of the interlocutor role (agent or user), a dialog action, a selected text span in the document based on the annotations from TextAnno and RelAnno, and the dialogue history if it applies. For instance, if the selected text is a precondition, the worker is given the role of an agent and the dialog action could be request/query/open. The worker is then given the instruction that "You need to know whether the highlighted precondition applies to the customer in order to narrow down the solution." More examples of dialogue scenarios are given in Table 1. Document Processing prepares the input documents for the three annotation tasks. We first obtain various syntactic-semantic analyses ranging from computational linguistic statistics to HTML-based tree structures. For instance, we apply constituency parsing to split long sentences into text spans for TextAnno. We also extract sub-clauses with certain discourse connectives (e.g., "if", "as long as") for identifying linguistic indicators of preconditions. In addition, we try to capture the outline patterns embedded in the HTML tree structures of the documents. Documents that are well structured and clearly written, with descriptive sub-titles and the discourse connectives, are considered good candidates for generating dialogues with dynamic flows. The system is equipped with the capability to automatically assign labels based on the linguistic indicators mentioned above. We also employ heuristics based on the HTML tree structures and text proximity for the relation annotations. Such fuzzy labels are mainly for generating and testing the data tasks without human labels. They can also be used as pseudo-gold labels for quality control of the crowdsourced tasks. Dialog Flow Composition generates the dialog flows for DialAnno from the labels of precondition/solution text and their relations obtained via TextAnno and RelAnno. For each turn, the dialog scene is determined by three factors, i.e., the selected text span content, the role, and the dialog act, which are determined sequentially. The dynamics of the dialog flows are introduced by varying the three aforementioned factors, which are constrained by the relations collected via RelAnno. First, we randomly select content from a candidate pool of preconditions and solutions identified in the document, which is updated after every scenario generated. The general rule for updating the candidate pool is to eliminate preconditions/solutions that are already verified or eliminated. Then the role is randomly selected between AGENT and USER. For our pilot study, we mainly consider dialog acts corresponding to the preconditions/solutions, as selectively shown in Table 1. As a pilot study, we demonstrate how to apply DOC2DIAL to various documents and task contexts. The inputs were internal customer care documents on topics such as technical troubleshooting, policy guidance, etc. We experimented with 1900 documents from the candidate document pool after the automatic filtering.
With these documents, we obtained 26,000 text spans labeled as either precondition or solution via TextAnno and RelAnno. Then, we generated 5 dialog flows per document and selected those with more than 5 turns. Next, we evaluated the dialog flows by asking the crowd worker if the highlighted text matched the given dialogue scenario of a turn in a dialog flow. 67% of the turns were labeled as "match". Most of the mismatches were due to disagreement on the precondition/solution labeled by crowd workers in earlier steps. Figure 5 shows the sample task on collecting the 6-th turn of a dialog flow of 7 turns by providing the dialogue scenario and the chat history. The sample dialogues show that when the crowd contributors were able to understand the selected text in the context of the document, they could properly interpret the dialog scene to produce utterance or evaluate the task.
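For concreteness, the dialog-scene composite described above could be represented with containers such as the following; the field names and the instruction template are illustrative and are not the released DOC2DIAL schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative containers only; the field names are assumptions, not the actual DOC2DIAL schema.
@dataclass
class Span:
    text: str
    label: str                      # "precondition" or "solution"

@dataclass
class DialogScene:
    role: str                       # "AGENT" or "USER"
    dialog_act: str                 # e.g. "request/query/open"
    span: Span                      # highlighted document text grounding this turn
    history: List[str] = field(default_factory=list)

def worker_instruction(scene: DialogScene) -> str:
    """Instruction shown to the crowd worker for one turn of DialAnno."""
    if scene.role == "AGENT" and scene.span.label == "precondition":
        return ("You need to know whether the highlighted precondition applies "
                "to the customer in order to narrow down the solution.")
    return f"Respond as the {scene.role.lower()} based on the highlighted {scene.span.label}."
```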
We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing to train automated dialogue agents.
1,704
scitldr
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation. Audio waveforms have complex structure at drastically varying timescales, which presents a challenge for generative models. Local structure must be captured to produce high-fidelity audio, while longrange dependencies spanning tens of thousands of timesteps must be captured to generate audio which is globally consistent. Existing generative models of waveforms such as WaveNet (van den a) and SampleRNN are well-adapted to model local dependencies, but as these models typically only backpropagate through a fraction of a second, they are unable to capture high-level structure that emerges on the scale of several seconds. We introduce a generative model for audio which captures longer-range dependencies than existing end-to-end models. We primarily achieve this by modelling 2D time-frequency representations such as spectrograms rather than 1D time-domain waveforms (Figure 1). The temporal axis of a spectrogram is orders of magnitude more compact than that of a waveform, meaning dependencies that span tens of thousands of timesteps in waveforms only span hundreds of timesteps in spectrograms. In practice, this enables our spectrogram models to generate unconditional speech and music samples with consistency over multiple seconds whereas time-domain models must be conditioned on intermediate features to capture structure at similar timescales. Modelling spectrograms can simplify the task of capturing global structure, but can weaken a model's ability to capture local characteristics that correlate with audio fidelity. Producing high-fidelity audio has been challenging for existing spectrogram models, which we attribute to the lossy nature of spectrograms and oversmoothing artifacts which from insufficiently expressive models. To reduce information loss, we model high-resolution spectrograms which have the same dimensionality as their corresponding time-domain signals. To limit oversmoothing, we use a highly expressive autoregressive model which factorizes the distribution over both the time and frequency dimensions. Modelling both fine-grained details and high-level structure in high-dimensional distributions is known to be challenging for autoregressive models. To capture both local and global structure in spectrograms with hundreds of thousands of dimensions, we employ a multiscale approach which generates spectrograms in a coarse-to-fine manner. A low-resolution, subsampled spectrogram that captures high-level structure is generated initially, followed by an iterative upsampling procedure that adds high-resolution details. 
Figure 1: Spectrogram and waveform representations of the same 4 second audio signal. The waveform spans nearly 100,000 timesteps whereas the temporal axis of the spectrogram spans roughly 400. Complex structure is nested within the temporal axis of the waveform at various timescales, whereas the spectrogram has structure which is smoothly spread across the time-frequency plane.
Combining these representational and modelling techniques yields a highly expressive and broadly applicable generative model of audio. Our contributions are as follows: • We introduce MelNet, a generative model for spectrograms which couples a fine-grained autoregressive model and a multiscale generation procedure to jointly capture local and global structure. • We show that MelNet is able to model longer-range dependencies than existing time-domain models. Additionally, we include an ablation to demonstrate that multiscale modelling is essential for modelling long-range dependencies. • We demonstrate that MelNet is broadly applicable to a variety of audio generation tasks, including unconditional speech and music generation. Furthermore, MelNet is able to model highly multimodal data such as multi-speaker and multilingual speech. We briefly present background regarding spectral representations of audio. Audio is represented digitally as a one-dimensional, discrete-time signal y = (y 1, . . ., y n). Existing generative models for audio have predominantly focused on modelling these time-domain signals directly. We instead model spectrograms, which are two-dimensional time-frequency representations which contain information about how the frequency content of an audio signal varies through time. Spectrograms are computed by taking the squared magnitude of the short-time Fourier transform (STFT) of a time-domain signal, i.e. x = |STFT(y)|^2. The value of x ij (referred to as amplitude or energy) corresponds to the squared magnitude of the jth element of the frequency response at timestep i. Each slice x i, * is referred to as a frame. We assume a time-major ordering, but following convention, all figures are displayed transposed and with the frequency axis inverted. Time-frequency representations such as spectrograms highlight how the tones and pitches within an audio signal vary through time. Such representations are closely aligned with how humans perceive audio. To further align these representations with human perception, we convert the frequency axis to the Mel scale and apply an elementwise logarithmic rescaling of the amplitudes. Roughly speaking, the Mel transformation aligns the frequency axis with human perception of pitch and the logarithmic rescaling aligns the amplitude axis with human perception of loudness. Spectrograms are lossy representations of their corresponding time-domain signals. The Mel transformation discards frequency information and the removal of the STFT phase discards temporal information. When recovering a time-domain signal from a spectrogram, this information loss manifests as distortion in the recovered signal. To minimize these artifacts and improve the fidelity of generated audio, we model high-resolution spectrograms. The temporal resolution of a spectrogram can be increased by decreasing the STFT hop size, and the frequency resolution can be increased by increasing the number of Mel channels.
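A minimal sketch of this log-Mel computation, assuming librosa and illustrative (not the paper's) STFT and filterbank settings, is given below.

```python
import numpy as np
import librosa

def log_mel_spectrogram(y, sr, n_fft=2048, hop_length=256, n_mels=128):
    """Squared-magnitude STFT, Mel filterbank, then an elementwise log.
    Returned time-major to match the x_ij indexing above."""
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
    power = np.abs(stft) ** 2                              # x = |STFT(y)|^2
    mel = librosa.feature.melspectrogram(S=power, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-6).T                            # [time, n_mels]

# Toy usage on a synthetic 4 second, 440 Hz tone.
sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr * 4) / sr).astype(np.float32)
x = log_mel_spectrogram(y, sr)
print(x.shape)
```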
Generated spectrograms are converted back to time-domain signals using classical spectrogram inversion algorithms. We experiment with both Griffin-Lim and a gradient-based inversion algorithm (Decorsière et al., 2015), and ultimately use the latter as it generally produced audio with fewer artifacts. We use an autoregressive model which factorizes the joint distribution over a spectrogram x as a product of conditional distributions. Given an ordering of the dimensions of x, we define the context x <ij as the elements of x that precede x ij. We default to a row-major ordering which proceeds through each frame x i, * from low to high frequency, before progressing to the next frame. The joint density is factorized as p(x) = ∏ ij p(x ij | x <ij ; θ ij), where θ ij parameterizes a univariate density over x ij. We model each factor distribution as a Gaussian mixture model with K components. Thus, θ ij consists of 3K parameters corresponding to means {μ ijk} K k=1, standard deviations {σ ijk} K k=1, and mixture coefficients {π ijk} K k=1. The resulting factor distribution can then be expressed as p(x ij | x <ij) = Σ k π ijk N(x ij ; μ ijk, σ ijk). Following the work on Mixture Density Networks and their application to autoregressive models, θ ij is modelled as the output of a neural network and computed as a function of the context x <ij. Precisely, for some network f with parameters ψ, we have θ ij = f (x <ij ; ψ). A maximum-likelihood estimate for the network parameters is computed by minimizing the negative log-likelihood via gradient descent. To ensure that the network output parameterizes a valid Gaussian mixture model, the network first computes unconstrained parameters {μ̂ ijk, σ̂ ijk, π̂ ijk} K k=1 as a vector θ̂ ij ∈ R 3K, and enforces constraints on θ ij by applying the following transformations: μ ijk = μ̂ ijk, σ ijk = exp(σ̂ ijk), and π ijk = exp(π̂ ijk) / Σ k exp(π̂ ijk). These transformations ensure the standard deviations σ ijk are positive and the mixture coefficients π ijk sum to one. To model the distribution in an autoregressive manner, we design a network which computes the distribution over x ij as a function of the context x <ij. The network architecture draws inspiration from existing autoregressive models for images. In the same way that these models estimate a distribution pixel-by-pixel over the spatial dimensions of an image, our model estimates a distribution element-by-element over the time and frequency dimensions of a spectrogram. A noteworthy distinction is that spectrograms are not invariant to translation along the frequency axis, making 2D convolution less desirable than other 2D network primitives which do not assume invariance. Utilizing multidimensional recurrence instead of 2D convolution has been shown to be beneficial when modelling spectrograms in discriminative settings, which motivates our use of an entirely recurrent architecture.
Figure: The context x <ij (grey) for the element x ij (black) is encoded using 4 RNNs. Three of these are used in the time-delayed stack to extract features from preceding frames. The fourth is used in the frequency-delayed stack to extract features from all preceding elements within the current frame. Each arrow denotes an individual RNN cell and arrows of the same color use shared parameters.
Similar to Gated PixelCNN (van den b), the network has multiple stacks of computation. These stacks extract features from different segments of the input to collectively summarize the full context x <ij: • The time-delayed stack computes features which aggregate information from all previous frames x <i, *. • The frequency-delayed stack utilizes all preceding elements within a frame, x i,<j, as well as the outputs of the time-delayed stack, to summarize the full context x <ij.
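As a concrete illustration of the mixture-density output described above, the following sketch maps unconstrained network outputs to valid Gaussian mixture parameters and evaluates the negative log-likelihood; the ordering of the 3K outputs and the use of PyTorch are assumptions.

```python
import torch
import torch.nn.functional as F

def constrain_gmm_params(theta_hat):
    """Map unconstrained outputs [..., 3K] to valid GMM parameters: identity for the
    means, exp for the standard deviations, softmax for the mixture weights."""
    mu_hat, sigma_hat, pi_hat = theta_hat.chunk(3, dim=-1)   # the chunking order is an assumption
    return mu_hat, torch.exp(sigma_hat), F.softmax(pi_hat, dim=-1)

def gmm_nll(x, theta_hat):
    """Negative log-likelihood of scalar targets x under the per-element mixture."""
    mu, sigma, pi = constrain_gmm_params(theta_hat)
    comp = torch.distributions.Normal(mu, sigma).log_prob(x.unsqueeze(-1))
    return -torch.logsumexp(torch.log(pi) + comp, dim=-1).mean()

# Toy shapes: [batch, elements, 3K] with K = 10 mixture components.
theta_hat = torch.randn(8, 100, 3 * 10)
x = torch.randn(8, 100)
print(gmm_nll(x, theta_hat))
```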
The stacks are connected at each layer of the network, meaning that the features generated by layer l of the time-delayed stack are used as input to layer l of the frequency-delayed stack. To facilitate the training of deeper networks, both stacks use residual connections . The outputs of the final layer of the frequency-delayed stack are used to compute the unconstrained parametersθ. The time-delayed stack utilizes multiple layers of multidimensional RNNs to extract features from x <i, *, the two-dimensional region consisting of all frames preceding x ij. Each multidimensional RNN is composed of three one-dimensional RNNs: one which runs forwards along the frequency axis, one which runs backwards along the frequency axis, and one which runs forwards along the time axis. Each RNN runs along each slice of a given axis, as shown in Figure 2. The output of each layer of the time-delayed stack is the concatenation of the three RNN hidden states. We denote the function computed at layer l of the time-delayed stack (three RNNs followed by concatenation) as F To ensure that h f ij [l] is computed using only elements in the context x <ij, the inputs to the frequencydelayed stack are shifted backwards one step along the frequency axis: h At the final layer, layer L, a linear map is applied to the output of the frequency-delayed stack to produce the unconstrained Gaussian mixture model parameters, i.e. To incorporate conditioning information into the model, conditioning features z are simply projected onto the input layer along with the inputs x: Reshaping, upsampling, and broadcasting can be used as necessary to ensure the conditioning features have the same time and frequency shape as the input spectrogram, e.g. a one-hot vector representation for speaker ID would first be broadcast along both the time and frequency axes. To improve audio fidelity, we generate high-resolution spectrograms which have the same dimensionality as their corresponding time-domain representations. Under this regime, a single training example has several hundreds of thousands of dimensions. Capturing global structure in such high-dimensional distributions is challenging for autoregressive models, which are biased towards capturing local dependencies. To counteract this, we utilize a multiscale approach which effectively permutes the autoregressive ordering so that a spectrogram is generated in a coarse-to-fine order. The elements of a spectrogram x are partitioned into G tiers x 1,..., x G, such that each successive tier contains higher-resolution information. We define x <g as the union of all tiers which precede x g, i.e. x <g = (x 1, . . ., x g−1). The distribution is factorized over tiers: and the distribution of each tier is further factorized element-by-element as described in Section 3. We explicitly include the parameterization by ψ = (ψ 1, . . ., ψ G) to indicate that each tier is modelled by a separate network. Figure 5: Schematic showing how tiers of the multiscale model are interleaved and used to condition the distribution for the subsequent tier. a) The initial tier is generated unconditionally. b) The second tier is generated conditionally given the the initial tier. c) The outputs of tiers 1 and 2 are interleaved along the frequency axis and used to condition the generation of tier 3. d) Tier 3 is interleaved along the time axis with all preceding tiers and used to condition the generation of tier 4. 
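The alternating-row partitioning shown in Figure 5 rests on a pair of split/interleave operations; a minimal NumPy sketch is given below (the recursion that assigns rows to tiers is described next, and the axis argument alternates between time and frequency as in the figure):

```python
import numpy as np

def split(x, axis):
    """Partition x into even- and odd-indexed rows along the given axis.

    Per the text, the even rows form the current tier and the odd rows
    form the context for the preceding tiers.
    """
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return even, odd

def interleave(even, odd, axis):
    """Inverse of split: re-interleave even and odd rows along an axis."""
    shape = list(even.shape)
    shape[axis] = even.shape[axis] + odd.shape[axis]
    out = np.empty(shape, dtype=even.dtype)
    even_idx = [slice(None)] * out.ndim
    odd_idx = [slice(None)] * out.ndim
    even_idx[axis] = slice(0, None, 2)
    odd_idx[axis] = slice(1, None, 2)
    out[tuple(even_idx)] = even
    out[tuple(odd_idx)] = odd
    return out
```

For example, `tier, context = split(x, axis=0)` halves the temporal resolution, and `interleave(tier, context, axis=0)` reassembles the original array.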
During training, the tiers are generated by recursively partitioning a spectrogram into alternating rows along either the time or frequency axis. We define a function split which partitions an input into even and odd rows along a given axis. The initial step of the recursion applies the split function to a spectrogram x, or equivalently x <G+1, so that the even-numbered rows are assigned to x G and the odd-numbered rows are assigned to x <G. Subsequent tiers are defined similarly in a recursive manner: At each step of the recursion, we model the distribution p(x g | x <g ; ψ g). The final step of the recursion models the unconditional distribution over the initial tier p(x 1 ; ψ 1). To model the conditional distribution p(x g | x <g ; ψ g), the network at each tier needs a mechanism to incorporate information from the preceding tiers x <g. To this end, we add a feature extraction network which computes features from x <g which are used condition the generation of x g. We use a multidimensional RNN consisting of four one-dimensional RNNs which run bidirectionally along slices of both axes of the context x <g. A layer of the feature extraction network is similar to a layer of the time-delayed stack, but since the feature extraction network is not causal, we include an RNN which runs backwards along the time axis and do not shift the inputs. The hidden states of the RNNs in the feature extraction network are used to condition the generation of x g. As each tier doubles the resolution, the features extracted from x <g have the same time and frequency shape as x g, allowing the conditioning mechanism described in section 4.3 to be used straightforwardly. To sample from the multiscale model we iteratively sample a value for x g conditioned on x <g using the learned distributions defined by the estimated network parametersψ = (ψ 1, . . .,ψ G). The initial tier, x 1, is generated unconditionally by sampling from p(x 1 ;ψ 1) and subsequent tiers are sampled from p(x g | x <g ;ψ g). At each tier, the sampled x g is interleaved with the context x <g: The interleave function is simply the inverse of the split function. Sampling terminates once a full spectrogram, x <G+1, has been generated. A spectrogram generated by a multiscale model is shown in Figure 4 and the sampling procedure is visualized schematically in Figure 5. To demonstrate the MelNet is broadly applicable as a generative model for audio, we train the model on a diverse set of audio generation tasks (single-speaker speech generation, multi-speaker speech generation, and music generation) using three publicly available datasets. Generated audio samples for each task are available on the accompanying web page https://audio-samples.github.io. We include samples generated using the priming and biasing procedures described by. Biasing lowers the temperature of the predictive distribution and priming seeds the model state with a given sequence of audio prior to sampling. Hyperparameters for all experiments are available in Appendix A. Speech and music have rich hierarchies of latent structure. Speech has complex linguistic structure (phonemes, words, syntax, semantics, etc.) and music has highly compositional musical structure (notes, chords, melody and rhythm, etc.). The presence of these latent structures in generated samples can be used as a proxy for how well a generative model has learned dependencies at various timescales. 
As such, a qualitative analysis of unconditional samples is an insightful method of evaluating generative models of audio. To facilitate such a qualitative evaluation, we train MelNet on each of the three unconditional generation tasks and include samples on the accompanying web page. For completeness, we briefly provide some of our own qualitative observations regarding the generated samples (Sections 6.1, 6.2, and 6.3). In addition to qualitative analysis, we conduct a human evaluation experiment to quantitatively compare how well WaveNet and MelNet capture high-level structure (Section 6.4). Lastly, we ablate the impact of the multiscale generation procedure on MelNet's ability model long-range dependencies (Section 6.5). To test MelNet's ability to model a single speaker in a controlled environment, we utilize the Blizzard 2013 dataset , which consists of audiobook narration performed in a highly animated manner by a professional speaker. We find that MelNet frequently generates samples that contain coherent words and phrases. Even when the model generates incoherent speech, the intonation, prosody, and speaking style remain consistent throughout the duration of the sample. Furthermore, the model learns to produce speech using a variety of character voices and learns to generate samples which contain elements of narration and dialogue. Biased samples tend to contain longer strings of comprehensible words but are read in a less expressive fashion. When primed with a real sequence of audio, MelNet is able to continue sampling speech which has consistent speaking style and intonation. Audiobook data is recorded in a highly controlled environment. To demonstrate MelNet's capacity to model distributions with significantly more variation, we utilize the VoxCeleb2 dataset . The VoxCeleb2 dataset consists of over 2,000 hours of speech data captured with real world noise including laughter, cross-talk, channel effects, music and other sounds. The dataset is also multilingual, with speech from speakers of 145 different nationalities, covering a wide range of accents, ages, ethnicities and languages. When trained on the VoxCeleb2 dataset, we find that MelNet is able to generate unconditional samples with significant variation in both speaker characteristics (accent, language, prosody, speaking style) as well as acoustic conditions ( noise and recording quality). While the generated speech is often not comprehensible, samples can often be identified as belonging to a specific language, indicating that the model has learned distinct modalities for different languages. Furthermore, it is difficult to distinguish real and fake samples which are spoken in foreign languages. For foreign languages, semantic structures are not understood by the listener and cannot be used to discriminate between real and fake. Consequently, the listener must rely largely on phonetic structure, which MelNet is able to realistically model. To show that MelNet can model audio modalities other than speech, we apply the model to the task of unconditional music generation. We utilize the MAESTRO dataset, which consists of over 172 hours of solo piano performances. The samples demonstrate that MelNet learns musical structures such as melody and harmony. Furthermore, generated samples often maintain consistent tempo and contain interesting variation in volume, timbre, and rhythm. 
Making quantitative comparisons with existing generative models such as WaveNet is difficult for various reasons and previous works have ultimately relied on largely empirical evaluations by the reader. To allow the reader to make these judgements for themselves, we provide samples from both WaveNet and MelNet for each of the tasks described in the previous sections. Furthermore, in an effort to provide quantitative metrics to support the claim that MelNet generates samples with improved long-range structure in comparison to WaveNet, we conduct a human experiment whereby participants are presented anonymized samples from both models and asked to select which sample exhibits longer-term structure. We resort to such evaluations since standard metrics for evaluation of generative models such as density estimates cannot be used to compare WaveNet and MelNet as that these models operate on different representations. The methodology for this experiment is as follows. For each of the three unconditional audio generation tasks, we generated 50 samples from WaveNet and 50 samples from MelNet. Participants were shown an anonymized, randomly-drawn sample from each model and instructed to "select the sample which has more coherent long-term structure." We collected 50 evaluations for each task. Results, shown in Table 1a, show that evaluators overwhelmingly agreed that samples generated by MelNet had more coherent long-range structure than samples from WaveNet across all tasks. In addition to comparing MelNet to an unconditional WaveNet model for music generation, we also compare to a two-stage Wave2Midi2Wave model which conditions WaveNet on MIDI generated by a separately-trained Music Transformer. The two-stage Wave2Midi2Wave model has the advantage of directly modelling labelled musical notes which distill much of the salient, high-level structure in music into a compact symbolic representation. Despite this, as shown by the in Table 1b, the two-stage model does not capture long-range structure as well as a MelNet model that is trained without access to any intermediate representations. To isolate the impact of multiscale modelling procedure described in Section 5, we train models with varying numbers of tiers and evaluate the long-term coherence of their respective samples. As noted before, long-term coherence is difficult to quantify and we provide samples on the accompanying web page so that the reader can make their own judgements. We believe the samples clearly demonstrate that increasing the number of tiers in samples with more coherent high-level structure. We note that our experiment varies the number of tiers from two to five. Training a single-tier model on full-resolution spectrograms was prohibitively expensive in terms of memory consumption. This highlights another benefit of multiscale modelling-large, deep networks can be allocated to learning complex distributional structure in the initial tiers while shallower networks can be used for modelling the relatively simple, low-entropy distributions in the upsampling tiers. This allows multiscale models to effectively allocate network capacity in proportion to the complexity of the modelling task. The predominant line of research regarding generative models for audio has been directed towards modelling time-domain waveforms with autoregressive models (van den a; ; . WaveNet is a competitive baseline for audio generation, and as such, is used for comparison in many of our experiments. 
However, we note that the contribution of our work is in many ways complementary to that of WaveNet. MelNet is more proficient at capturing high-level structure, whereas WaveNet is capable of producing higher-fidelity audio. Several works have demonstrated that time-domain models can be used to invert spectral representations to highfidelity audio (; ; Arık et al., 2019), suggesting that MelNet could be used in concert with time-domain models such as WaveNet. and capture long-range dependencies in waveforms by utilizing a hierarchy of autoencoders. This approach requires multiple stages of models which must be trained sequentially, whereas the multiscale approach in this work can be parallelized over tiers. Additionally, these approaches do not directly optimize the data likelihood, nor do they admit tractable marginalization over the latent codes. We also note that the modelling techniques devised in these works can be broadly applied to autoregressive models such as ours, making their contributions largely complementary to ours. Recent works have used generative adversarial networks (GANs) to model both waveforms and spectral representations. As with image generation, it remains unclear whether GANs capture all modes of the data distribution. Furthermore, these approaches are restricted to generating fixed-duration segments of audio, which precludes their usage in many audio generation tasks. Generating spectral representations is common practice for end-to-end text-to-speech models (; ;). However, these models use probabilistic models which are much less expressive than the fine-grained autoregressive model used by MelNet. Consequently, these models are unsuitable for modelling high-entropy, multimodal distributions such as those involved in tasks like unconditional music generation. The network architecture used for MelNet is heavily influenced by recent advancements in deep autoregressive models for images. introduced an LSTM architecture for autoregressive modelling of 2D images and van den Oord et al. (2016c) introduced PixelRNN and PixelCNN and scaled up the models to handle the modelling of natural images. Subsequent works in autoregressive image modelling have steadily improved state-of-the-art for image density estimation (van den b; ; ;). We draw inspiration from many of these models, and ultimately design a recurrent architecture of our own which is suitable for modelling spectrograms rather than images. We note that our choice of architecture is not a fundamental contribution of this work. While we have designed the architecture particularly for modelling spectrograms, we did not experimentally validate whether it outperforms existing architectures and make no such claims to this effect. We use a multidimensional recurrence in both the time-delayed stack and the upsampling tiers to extract features from two-dimensional inputs. Our multidimensional recurrence is effectively'factorized' as it independently applies one-dimensional RNNs across each dimension. This approach differs from the tightly coupled multidimensional recurrences used by MDRNNs and GridLSTMs and more closely resembles the approach taken by ReNet . Our approach allows for efficient training as we can extract features from an M × N grid in max(M, N) sequential recurrent steps rather than the M + N sequential steps required for tightly coupled recurrences. Additionally, our approach enables the use of highly optimized one-dimensional RNN implementations. 
Various approaches to image generation have succeeded in generating high-resolution, globally coherent images with hundreds of thousands of dimensions (; ;). The methods introduced in these works are not directly transferable to waveform generation, as they exploit spatial properties of images which are absent in one-dimensional audio signals. However, these methods are more straightforwardly applicable to two-dimensional representations such as spectrograms. Of particular relevance to our work are approaches which combine autoregressive models with multiscale modelling (van den c; ; ;). Our work demonstrates that the benefits of a multiscale autoregressive model extend beyond the task of image generation, and can be used to generate high-resolution, globally coherent spectrograms. We have introduced MelNet, a generative model for spectral representations of audio. MelNet combines a highly expressive autoregressive model with a multiscale modelling scheme to generate high-resolution spectrograms with realistic structure on both local and global scales. In comparison to previous works which model time-domain signals directly, MelNet is particularly well-suited to model long-range temporal dependencies. Experiments show promising across a diverse set of audio generation tasks. Furthermore, we believe MelNet provides a foundation for various directions of future work. Two particularly promising directions are text-to-speech synthesis and representation learning: • Text-to-Speech Synthesis: MelNet utilizes a more flexible probabilistic model than existing end-to-end text-to-speech models, making it well-suited to model expressive, multi-modal speech data. • Representation Learning: MelNet is able to uncover salient structure from large quantities of unlabelled audio. Large-scale, pre-trained autoregressive models for language modelling have demonstrated significant benefits when fine-tuned for downstream tasks. Likewise, representations learned by MelNet could potentially aid downstream tasks such as speech recognition.
We introduce an autoregressive generative model for spectrograms and demonstrate applications to speech and music generation
Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, we see these failures to generalize more frequently in more modern networks. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans. Deep convolutional neural networks (CNNs) have revolutionized computer vision. Perhaps the most dramatic success is in the area of object recognition, where performance is now described as "superhuman" . A key to the success of any machine learning method is the inductive bias of the method, and clearly the choice of architecture in a neural network significantly affects the inductive bias. In particular, the choice of convolution and pooling in CNNs is motivated by the desire to endow the networks with invariance to irrelevant cues such as image translations, scalings, and other small deformations (; BID33 . This motivation was made explicit in the 1980s by Fukushima in describing the "neocognitron" architecture, which served as inspiration for modern CNNs , "After finishing the process of learning, pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the input patterns." Despite the excellent performance of CNNs on object recognition, the vulnerability to adversarial attacks suggests that superficial changes can in highly non-human shifts in prediction (e.g. BID1 BID27 . In addition, filtering the image in the Fourier domain (in a way that does not change human prediction) also in a substantial drop in prediction accuracy BID13. These and other BID20 indicate that CNNs are not invariant to cues that are irrelevant to the object identity. An argument against adversarial attacks on CNNs is that they often involve highly unnatural transformations to the input images, hence in some sense we would not expect CNNs to be invariant to these transformations. When considering more natural transformations, there is preliminary evidence that AlexNet BID15 ) is robust to some of them BID33. On the other hand, there is also preliminary evidence for lack of robustness in the more modern networks for object classification BID2 and detection BID21 along with studies suggesting that with small CNNs and the MNIST data, data augmentation is the main feature affecting CNN invariance BID14 ). An indirect method to probe for invariances measures the linearity of the learned representations under natural transformations to the input image (Lenc Figure 1 : Examples of jagged predictions of modern deep convolutional neural networks. Top: A negligible vertical shift of the object (Kuvasz) in an abrupt decrease in the network's predicted score of the correct class. 
Middle: A tiny increase in the size of the object (Lotion) produces a dramatic decrease in the network's predicted score of the correct class. Bottom: A very small change in the bear's posture in an abrupt decrease in the network's predicted score of the correct class. Colored dots represent images chosen from interesting x-axis locations of the graphs on the right. These dots illustrate sensitivity of modern neural networks to small, insignificant (to a human), and realistic variations in the image. BID17 BID12; BID6. The recent work of BID10 investigates adversarial attacks that use only rotations and translations. They find that "simple transformations, namely translations and rotations alone, are sufficient to fool neural network-based vision models on a significant fraction of inputs" and show that advanced data augmentation methods can make the networks more robust. In this paper, we directly ask "why are modern CNNs not invariant to natural image transformations despite the architecture being explicitly designed to provide such invariances?". Specifically, we systematically examine the invariances of three modern deep CNNs: VGG-16 BID26, ResNet-50 , and InceptionResNet-V2 BID28. We find that modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets. Figure 1 contains examples of abrupt failures following tiny realistic transformations for the InceptionResNet-V2 CNN. Shifting or scaling the object by just one pixel could in a sharp change in prediction. In the top row, we embed the original image in a larger image and shift it in the image plane (while filling in the rest of the image with a simple inpainting procedure). In the middle row, we repeat this protocol with rescaling. In the bottom row, we show frames from a BBC film in which the ice bear moves almost imperceptibly between frames and the network's output changes dramatically 1. In order to measure how typical these failures are, we randomly chose images from the ImageNet validation set and measured the output of three modern CNNs as we embedded these images in a larger image and systematically varied the vertical translation. As was the case in figure 1, we used a simple inpainting procedure to fill in the rest of the image. Results are shown in figure 2. Each row corresponds to an image under different translations and the color denotes the network's estimate of the probability of the correct class. Thus a row that is all light corresponds to a correct classification that is invariant to translation, while a row that is all dark corresponds to an incorrect classification that is invariant to translation. Surprisingly, many rows show abrupt transitions from light to dark, indicating that the classification changes abruptly as the object is translated. We quantify the lack of invariance by a measure we call "jaggedness": the number of times the network's predictions had the correct class in its top-5 and after just one pixel shift it moved outside of the top-5 (and also the opposite transition from non-top-5 to top5). Using this measure, we find that for approximately 30% of the images, the output is "jagged", i.e the network changes its prediction by a shift of a single pixel. Also, as shown in the right of figure 2, jaggedness is greater for the modern, deeper, networks compared to the less modern VGG16 network. 
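The jaggedness measure described above can be sketched as follows, assuming the softmax outputs for each one-pixel shift of an image have already been collected:

```python
import numpy as np

def jaggedness(probs_per_shift, correct_class, k=5):
    """Count top-k membership flips of the correct class under 1-pixel shifts.

    probs_per_shift: array of shape (num_shifts, num_classes); row i holds the
    network's softmax output for the image translated by i pixels.
    Returns the number of adjacent shifts at which the correct class enters
    or leaves the top-k.
    """
    in_topk = []
    for p in probs_per_shift:
        topk = np.argsort(p)[::-1][:k]
        in_topk.append(correct_class in topk)
    in_topk = np.asarray(in_topk)
    return int(np.sum(in_topk[1:] != in_topk[:-1]))
```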
While the deeper networks have better test accuracy, they are also less invariant. In the appendix we also show an alternative to the "jaggedness" measure, which gives similar . A natural criticism of these is that they are somehow related to the image resizing and inpainting procedures that we used. To test this possibility, we repeated the experiment with a different protocol where we chose different crops of the original ImageNet image while making sure that the object bounding box remained within the crop. This protocol does not require any inpainting while still translating the object location within the new image. Results are shown in the appendix. We still have a large fraction of images for which the prediction is not invariant to translation. We also show in the appendix similar for scaling rather than translation. Overall we find that regardless of the protocol used, modern CNNs often change their output significantly as a of a small translation or scaling. The failure of CNNs to generalize to image translations is particularly puzzling. Intuitively, it would seem that if all layers in a network are convolutional then the representation should simply translate when an image is translated. If the final features for classification are obtained by a global pooling operation on the representation (as is done for example in ResNet50 and InceptionResNetV2) then these features should be invariant to translation. Where does this intuition fail? This intuition ignores the subsampling operation which is prevalent in modern CNNs, also known as "stride". This failure of translation invariance in systems with subsampling was explicitly discussed in Simoncelli et al. BID25 who wrote "We cannot literally expect translation invariance in a system based on convolution and subsampling: translation of the input signal cannot produce simple translations of the transform coefficients, unless the translation is a multiple of each of the subsampling factors in the system". Since deep networks often contain many subsampling operations, the subsampling factor of the deep layers may be very large so that "literal" translation invariance only holds for very special translations. In InceptionResnetV2, for example, the subsampling factor is 60, so we expect exact translation invariance to hold only for 1 60 2 of possible translations. Simoncelli et al. also defined a weaker form of translation invariance, which they called "shiftability" and showed that it can hold for systems with subsampling (this is related to weak translation invariance as defined by BID17, see also BID11 BID6 for related ideas applied to neural networks). Here we extend the basic shiftability to show that when shiftability holds, then global pooling will indeed yield invariant representations. We define r(x) as the response of a feature detector at location x in the image plane. We say that this response is "convolutional" if translating the image by any translation δ yields a translation of the response by the same δ. This definition includes cases when the feature response is obtained by convolving the input image with a fixed filter, but also includes combinations of linear operations and nonlinear operations that do not include any subsampling. We start by a trivial observation:Observation: If r(x) is convolutional then global pooling r = x r(x) is translation invariant. Proof: This follows directly from the definition of a convolutional response. 
If r(x) is the feature response to one image and r 2 (x) is the feature response to the same image translated, then Σ x r(x) = Σ x r 2 (x) since the two responses are shifts of each other.

Definition: A feature detector r(x) with subsampling factor s is called "shiftable" if for any x the detector output at location x can be linearly interpolated from the responses on the sampling grid:

r(x) = Σ i B s (x − x i) r(x i),

where the x i are located on the sampling grid for subsampling factor s and B s (x) is the basis function for reconstructing r(x) from the samples. The classic Shannon-Nyquist theorem tells us that r(x) will be shiftable if and only if the sampling frequency is at least twice the highest frequency in r(x).

Claim: If r(x) is shiftable then global pooling on the sampling grid, r̄ = Σ i r(x i), is translation invariant.

Proof: This follows from the fact that global pooling on the sampling grid is (up to a constant) the same as global pooling for all x:

Σ x r(x) = Σ x Σ i B s (x − x i) r(x i) = Σ i r(x i) Σ x B s (x − x i) = K Σ i r(x i),

where K = Σ x B s (x − x i) does not depend on x i.

While the claim focuses on a global translation, it can also be extended to piecewise constant transformations.

Corollary: Consider a set of transformations T that are constant on a set of given image subareas. If r(x) is shiftable and, for a given image, the support of r(x) and its receptive field is contained in the same subregion for all transformations in T, then global pooling on the sampling grid is invariant to any transformation in T.

Proof: This follows from the fact that applying any transformation in T to an image has the same effect on the feature map r(x) as translating the image.

To illustrate the importance of the sampling theorem in guaranteeing invariance in CNNs, consider a convolutional layer in a deep CNN where each unit acts as a localized "part detector" (this has been reported to be the case for many modern CNNs BID33 BID34). Each such part detector has a spatial tuning function, and the degree of sharpness of this tuning function will determine whether the feature map can be subsampled while preserving shiftability or not. For example, consider a part detector that fires only when the part is exactly at the center of its receptive field. If there is no subsampling, then as we translate the input image, the feature map will translate as well, and the global sum of the feature map is invariant to translation. But if we subsample by two (or equivalently use a stride of two), then there will only be activity in the feature map when the feature is centered on an even pixel, but not when it is centered on an odd pixel. This means that the global sum of the feature map will not be invariant to translation. In the language of Fourier transforms, the problem with a part detector that fires only when the part is exactly at the center of the receptive field is that the feature map contains many high frequencies and hence it cannot be subsampled while preserving shiftability. On the other hand, if we have a part detector whose spatial tuning function is more broad, it can be shiftable and our claim (above) shows that the global sum of activities in a feature map will be preserved for all translations, even though the individual firing rates of units will still be different when the part is centered at an odd pixel or an even pixel.
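This effect can be reproduced with a small one-dimensional example. The sketch below uses a toy "part detector" built from a sharp correlation kernel and is purely illustrative:

```python
import numpy as np

def feature_map(signal, kernel):
    """Toy 'part detector': correlate the signal with a localized template."""
    return np.maximum(np.correlate(signal, kernel, mode="same"), 0.0)

signal = np.zeros(64)
signal[20:25] = 1.0                          # a single 'part'
shifted = np.roll(signal, 1)                 # translate the input by one sample

kernel_sharp = np.array([-1.0, 2.0, -1.0])   # sharp tuning -> high frequencies
stride = 2

for name, sig in [("original", signal), ("shifted ", shifted)]:
    r = feature_map(sig, kernel_sharp)
    print(name, "full sum:", round(r.sum(), 3),
          "strided sum:", round(r[::stride].sum(), 3))

# Blurring before subsampling (average pooling) restores approximate invariance.
blur = np.ones(4) / 4.0
for name, sig in [("original", signal), ("shifted ", shifted)]:
    r = np.correlate(feature_map(sig, kernel_sharp), blur, mode="same")
    print(name, "blur + stride sum:", round(r[::stride].sum(), 3))
```

Running this prints a global sum that is unchanged by the shift when no subsampling is used, changes under stride-2 subsampling, and is again approximately unchanged once the response is blurred before subsampling.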
Our corollary (above), shows the importance of shiftability to other smooth transformations: in this case each "part detector" will translate with a different translation but it is still the case that nonshiftable representations will not preserve the global sum of activities as the image is transformed, while shiftable representations will. FIG1 examines the extent to which the representations learned by modern CNNs are invariant or shiftable. The top row shows an image that is translated vertically, while the bottom three rows show the representations in different layers for the three CNNs we consider. For VGG16 the representation appears to shift along with the object, including the final layer where the blurred pattern of response is not a simple translation of the original response, but seems to preserve the global sum for this particular image. For the two more modern networks, the responses are sharper but lose their shiftability in the later layers. In particular, the final layers show approximate invariance to one special translation but no response at all to another translation, suggesting that the many layers of subsampling yield a final response that is not shiftable. We also performed a more quantitative measure of shiftability by counting for a given image the number of times the global sum of activities in each layer changes significantly (more than 20% of mean) as the input is shifted (for each image, we only considered feature maps where the maximum response was above a threshold). We call this measure "nonshiftability". According to the preceding analysis, in architectures that obey the sampling theorem, the global sum should be invariant to input translation so nonshiftability should be zero in all layers. We find that for all three networks, the initial layers have nonshiftability close to zero but as we go deeper and deeper nonshiftability increases. Furthermore, the deeper, more modern networks, exhibit larger nonshiftability in their deep layers compared to VGG16 (see appendix for graphs).How can we guarantee that representations in CNNs will be shiftable? As explained above, we need to make sure that any feature map that uses stride does not contain frequencies above the Nyquist frequency. If CNNs were purely linear, we could simply blur the input images so that they would not include any frequencies higher than the Nyquist limit determined by the final sampling factor of the network. But since CNNs also include nonlinearities, they can add high frequencies that were not present in the input. An important message of the sampling theorem is that you should always blur before subsampling. Translated to the language of neural networks this means that stride (i.e. subsampling) should always be combined with pooling (i.e. blurring) in the preceding layer. Indeed if we have an arbitrarily deep CNN where all the layers use stride=1 followed by one layer that has a stride greater than one, then by choosing the pooling window appropriately we can guarantee that the final layer will still be shiftable. If we do not use appropriate pooling then there is no guarantee that this layer will be shiftable. Even if we use appropriate pooling that ensures that a given layer is shiftable, the subsequent nonlinearities in a CNN may not preserve the shiftability, as the nonlinearities may again introduce high frequencies. To illustrate the effect of pooling on shiftability in modern CNNs we replaced the 2 × 2 max pooling layers of VGG16 with 6 × 6 average pooling. 
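A minimal sketch of this substitution using torchvision's VGG16 is shown below; the stride and padding are chosen here so that feature-map sizes are preserved, and are assumptions rather than the exact settings used in the experiment:

```python
import torch.nn as nn
from torchvision.models import vgg16

def blur_pool_vgg16():
    """Replace VGG16's 2x2 max-pooling layers with 6x6 average pooling.

    Stride 2 and padding 2 keep the feature-map sizes identical to the
    original network (output size n/2 for even n).
    """
    model = vgg16(weights=None)
    for i, layer in enumerate(model.features):
        if isinstance(layer, nn.MaxPool2d):
            model.features[i] = nn.AvgPool2d(kernel_size=6, stride=2, padding=2)
    return model
```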
This has the effect of reducing high frequencies, but given the nonlinearities, it does not guarantee shiftability. As shown in figure 4, this simple change makes the representations approximately shiftable and, as predicted by our theory, the global sum is now invariant to both translations and rescalings of the input. This invariance of course comes with a price: the feature maps now have less detail, and in preliminary experiments we find that recognition performance decreases somewhat. But the sampling theorem tells us that if we want to use subsampling while avoiding aliasing, we need to ensure that no high frequencies (relative to the Nyquist frequency) are present in the feature maps. As an alternative to pooling, Ruderman et al. BID22 have shown that networks may learn smooth filters that will lead to reduced sensitivity to transformations. Evidently, the filters learned in standard VGG16 are not smooth enough.

While the preceding discussion suggests that the CNN architecture will not yield translation invariance "for free", there is still the possibility that the CNN will learn a translation invariant prediction from the training examples. This requires that the training set will actually be invariant to the irrelevant transformations. We examined the degree of invariance in the ImageNet training set by manually labeling the training images in five categories: Tibetan terrier, elephant, pineapple, bagel and rowing paddle. Consistent with previous results on "dataset bias" BID24 BID19 BID0 BID29 BID30 BID18, we find that the ImageNet dataset is not invariant to translations and rescalings. FIG3 shows the distribution of the distances between the eyes of a "Tibetan terrier" and the positions of the center point between the dog's eyes. Notice that both distributions are far from uniform. Similar results are obtained for the other four categories. To be more quantitative, we used the available bounding-box labels, and extracted the center point of the bounding-box and its height as proxies for the object position and size respectively. We then applied a statistical significance test to ask whether object location and object sizes were uniform for that category. For more than 900 out of the 1000 categories we found that location and size were highly non-uniform (P < 10^-10). Given these strong biases, we cannot expect a learning system to learn to be invariant.

Even if the training set is not invariant, we can make it invariant using data augmentation. Will this make the CNN learn an invariant prediction? First, we note that we used pretrained networks and, according to the authors' description of the training procedure, all three networks were trained using data augmentation. Obviously, not any data augmentation is sufficient for the networks to learn invariances. To understand the failure of data augmentation, it is again instructive to consider the subsampling factor. Since in modern networks the subsampling factor is approximately 60, then for a system to learn complete invariance to translation only, it would need to see 60^2 = 3600 augmented versions of each training example, or it would need to have an inductive bias that allows it to generalize over transformations. If we also add invariance to rotations and scalings, the number grows exponentially with the number of irrelevant transformations. Engstrom et al. BID10 suggest a sophisticated data augmentation method and show that it increases the invariance to translation and rotation.
However, for challenging datasets such as ImageNet the lack of invariance largely persists. Figure 6: The performance of modern CNNs on test images from ImageNet that are embedded in a random location in a larger image is quite poor (less than 50% accuracy). Human performance is not affected. Right: An example of a full sized image and the same image resized to 100x100. Although our show that modern CNNs fail to generalize for small image transformations, their performance on the ImageNet test set is still amazingly good and far better than previous techniques. This is related to the fact that the ImageNet test set contains the same photographer's biases as the training set, so generalization to very different sizes and locations is not required. To highlight this point, we created a new test set in which ImageNet images were embedded in a larger image in a random location (and the missing pixels were filled in using a simple inpainting algorithm). Figure 6 shows that human performance is not affected by the rescaling and random translations, while the performance of modern CNNs deteriorates dramatically. In fact, when images are scaled to half their original size and randomly translated, the accuracy of modern CNNs is less than 50%, typically considered poor performance. One way in which modern systems partially address the lack of invariance is using test time augmentation in which the system output on a given image is computed by a majority vote among many random crops of the image. Clearly this is wasteful in resources and still only provides partial invariance.6 DISCUSSION CNN architectures were designed based on an intuition that the convolutional structure and pooling operations will give invariance to translations and small image deformations "for free". In this paper we have shown that this intuition breaks down once subsampling, or "stride" is used and we have presented empirical evidence that modern CNNs do not display the desired invariances since the architecture ignores the classic sampling theorem. This still leaves open the possibility of a CNN learning invariance from the data but we have shown that the ImageNet training and testing examples include significant photographer's bias so that it is unlikely that a system will learn invariance using these examples. In addition to pointing out these failures, the sampling theorem also suggests a way to impose translation invariance by ensuring that all representations are sufficiently blurred to overcome the subsampling. However, such blurred representations may lead to a decrease in performance, especially in datasets and benchmarks that contain photographer's bias. Alternatively, one could use specially designed features in which invariance is hard coded or neural network architectures that explicitly enforce invariance BID23; BID3 BID20 BID9 BID32 BID31 BID5. Again, as long as the datasets contain significant photographer's bias, such invariant approaches may lead to a decrease in performance. We download this video: https://www.youtube.com/watch?v=0mgnf6t9VEc using an online downloader. We load the video frames and resize them to 299 by 299 as used by the standard Keras applications framework (https://keras.io/applications/). We preprocess the frames using the standard Keras preprocessing function. Finally, we use the predictions of the InceptionV3 model to demonstrate the jagged behavior shown in figure 1. 
Figure 7: We also measure jaggedness using the Mean Absolute Difference (MAD) in the probability of the correct response as the image is shifted by a single pixel. Results are similar to those using the jaggedness measure described in the text.

Figure 10: Nonshiftability as a function of depth in the three networks. Nonshiftability is defined as the number of times the global sum of a feature map changes by more than 20% of the mean as the input is translated. We only consider feature maps where the maximum response was above a threshold. According to our analysis, this measure should be zero if the representation is shiftable. Each line shows the nonshiftability in different layers in response to a randomly selected image.
Modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets.
We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles. The synthesis of realistic human motion has recently seen increased interest BID12 BID39 BID4 BID14 BID1 BID25 with applications beyond animation and video games. The simulation of human looking virtual agents is likely to become mainstream with the dramatic advancement of Artificial Intelligence and the democratization of Virtual Reality. A challenge for human motion synthesis is to automatically generate new variations of motions while preserving a certain style, e.g., generating large numbers of different Bollywood dances for hundreds of characters in an animated scene of an Indian party. Aided by the availability of large human-motion capture databases, many database-driven frameworks have been employed to this end, including motion graphs BID18 BID33 BID27, as well as linear BID34 BID2 BID36 and kernel methods BID29 BID31 BID8 BID28 BID42, which blend key-frame motions from a database. It is hard for these methods, however, to add new variations to existing motions in the database while keeping the style consistent. This is especially true for motions with a complex style such as dancing and martial arts. More recently, with the rapid development in deep learning, people have started to use neural networks to accomplish this task BID13. These works have shown promising , demonstrating the ability of using high-level parameters (such as a walking-path) to synthesize locomotion tasks such as jumping, running, walking, balancing, etc. These networks do not generate new variations of complex motion, however, being instead limited to specific use cases. In contrast, our paper provides a robust framework that can synthesize highly complex human motion variations of arbitrary styles, such as dancing and martial arts, without querying a database. We achieve this by using a novel deep auto-conditioned RNN (acRNN) network architecture. Recurrent neural networks are autoregressive deep learning frameworks which seek to predict sequences of data similar to a training distribution. Such a framework is intuitive to apply to human motion, which can be naturally modeled as a time series of skeletal joint positions. We are not the first to leverage RNNs for this task BID4 BID14 BID1 BID25, and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc. However, these existing methods also have a critical drawback: the motion becomes unrealistic within a couple of seconds and is unable to recover. 
This issue is commonly attributed to error accumulation due to feeding network output back into itself BID13. This is reasonable, as the network during training is given ground-truth input sequences to condition its subsequent guess, but at run time, must condition this guess on its own output. As the output distribution of the network will not be identical to that of the ground-truth, it is in effect encountering a new situation at test-time. The acRNN structure compensates for this by linking the network's own predicted output into its future input streams during training, a similar approach to the technique proposed in BID0. Our method is light-weight and can be used in conjunction with any other RNN based learning scheme. Though straightforward, this technique fixes the issue of error accumulation, and allows the network to output incredibly long sequences without failure, on the order of hundreds of seconds (see Figure 5). Though we are yet as unable to prove the permanent stability of this structure, it seems empirically that motion can be generated without end. In summary, we present a new RNN training method capable for the first time of synthesizing potentially indefinitely long sequences of realistic and complex human motions with respect to different styles. Many approaches have developed over the years in order to generate realistic human motion. In this section, we first review the literature that has dealt with motion synthesis using simulation methods and database-driven methods, then review the more recent deep learning approaches. Simulation-based Methods. Simulation-based techniques are able to produce physically plausible animations BID5 BID3 BID10 BID9 BID24, including realistic balancing, motion on terrain of various heights, and recovery from falling. Many of these methods consider physical constraints on the human skeleton while optimizing an motion objective. For example, in the work of, one task they employ is moving the skeleton in a certain direction without falling over. Similarly, in Ha et al. BID9, given initial fall conditions, they seek to minimize joint stress due to landing impact while ensuring a desired landing pose. Though the output is plausible, they require specific objectives and constraints for each individual task; it is infeasible to apply such explicit priors to highly stylized and varied motion such as dancing or martial arts. There are also some recent works BID32 BID26 which attempt to use less rigid objectives, instead employing adversarial and reinforcement learning. However, these motions often look uncanny and not human-like. Database-driven Methods. Motion graphs BID18 BID33 BID27, which stitch transitions into segments queried from a database, can generate locomotion along arbitrary paths, but are in essence limited to producing memorized sequences. Frameworks based on linear BID34 BID2 BID36 and kernal methods BID29 BID31 BID8 BID28 BID42 have also shown reasonable success. Taylor et al. BID37 use conditional restricted boltzmann machines to model motion styles. Grochow et al. BID8 and BID41 both use Gaussian Process Latent Variable Models (GPLVM) to control motion. apply reinforcement learning in a reduced space to compute optimal motions for a variety of motion tasks, including movement, punching, and kicking. Kee et al. BID20 use a representation of motion data called motion-fields which allows users to interactively control movement of a character. The work of Xia et al. 
BID44 is able to achieve real-time style transfers of unlabeled motion. Taylor et al. BID38 use binary latent variables which are connected between different time steps while Liu et al. BID23 estimate physical parameters from motion capture data. Database methods have limitations in synthesizing new variations of motion, in addition to the high memory costs for storing complex motions. Deep Learning Approaches. The use of recurrent networks is a natural approach to dealing with the problem of human motion. RNNs are trained to generate output sequentially, each output conditioned on the previous elements in the sequence. These networks have shown much success in Natural Language Processing for generating text BID35, hand written characters BID6 BID7, and even captioning images BID40. For the purpose of motion prediction and generation, Fragkiadaki et al. BID4 propose to use encoderrecurrent-decoder (ERD), jointly learning a skeleton embedding along with sequential information, in order to make body positions more realistic. In Jain et al. BID14, the authors employ RNNs to learn spatio-temporal graphs of interaction, a structure which naturally applies to the human skeleton over time-joint positions are related over consecutive frames, and joints also interact with each other spatially (arms and legs interact with each other, the spine interacts with all other joints). Bütepage et al. BID1 attempt to learn dance sequentially, but they are unable to produce varied and realistic output. More recently, Martinez et al. BID25 propose using a sequence-to-sequence architecture along with sampling-based loss. A main problem with these approaches, even in the case of the ERD, is that motion generation tends to converge to a mean position over time. Using our proposed training method, motion neither halts nor becomes unrecognizable for any period of time. Holden et al. BID11 demonstrate that a manifold of human motion can be learned using an autoencoder trained on the CMU dataset. They BID12 extend this work by using this deep convolutional auto-encoder, in addition to a disambiguation network, in order to produce realistic walking motion along a user-defined path. They are also able to use the embedding space of the encoder to perform style transfer across different walks. Using the same autoencoder and a different disambiguation network, they can also perform other user defined tasks such as punching and kicking. Their method does not use an RNN structure and so it can only generate fixed length sequences based on the architecture of the disambiguation network. More recently, Holden et al. BID13 ) use a Phase-Functioned Neural Network which takes the geometry of the scene into account to produce motion along a user-defined path. This method makes use of humans' periodic change of gait to synthesize realistic walking motion but has not demonstrated the ability to generate motions that have more complex step movements. Auto-Conditioned RNN. Recurrent neural networks are well documented in the literature. A good survey and introduction can be found at BID16 BID30. In short, a recurrent network can be represented as a function with a hidden memory unit, x t+1 = f (x t, m t), where m t is the "memory" of the network, and is updated during every forward pass, and initialized at 0. The motivation is that the memory stores important information about a sequence up until that point, which can help with the prediction of the next element. 
In the experiments that follow, we use a special type of RNN called an "LSTM", or "long short-term memory" network. We refer to the network trained with our method as "acLSTM".

As mentioned in the introduction, the major drawback of using LSTM/RNN deep learning methods for motion prediction is the problem of error accumulation. Following the conventional way to train an RNN, the network is recursively given a sequence of ground truth motion data, G 1,k = [g 1, ..., g k], and asked to produce the output G 2,k+1 = [g 2, ..., g k+1]. Specifically, at training time, the recursive module at time step t, M t, is conditioned on the input [g 1, ..., g t−1], and the error is measured between its output and g t. Because of this, the backpropagation algorithm BID43 used for updating the parameters will always optimize w.r.t. the input ground-truth sequences [g 1, ..., g t−1]. The parameters are accustomed to ground truth input -- something the network does not have access to at test time. It is easy to see why problems will emerge: even if the initial input is similar to the ground truth, those slight differences will accumulate as the output is fed back in, producing outputs that become progressively worse until the sequence diverges or freezes. Effectively, the network is encountering a completely novel situation at test time as compared to training time, and so cannot perform well. The issue is so prevalent, in fact, that previous methods fail to produce realistic output after just several seconds BID4 BID14 BID25.

Holden et al. BID11 show that an autoencoder framework can to some degree "fix" such broken input, and some researchers have tried jointly learning such an encoder-decoder network alongside RNNs to better condition subsequent input BID7 BID4. However, in the case of this framework being applied to motion as in ERD BID4, it does not generalize to indefinitely long sequences, as shown in BID14. It seems as though the autoencoder might mitigate error accumulation, but does not eliminate it.

The acRNN, on the other hand, deals with poor network output explicitly by using it during training. Instead of only feeding in ground-truth instances, we use subsequences of the network's own outputs at periodic intervals. For instance, sticking with the example above, instead of conditioning the network on [g 1, ..., g u, g u+1, ..., g u+v, ...], we condition it on [g 1, ..., g u, p u+1, ..., p u+v, g u+v+1, ..., g 2u+v, p 2u+v+1, ...]. The variable p u+1 is the network output conditioned on [g 1, ..., g u], and p u+2 is conditioned on [g 1, ..., g u, p u+1]. In this example, we refer to v as the "condition length" and u as the "ground-truth length". As the network is conditioned on its own output during training, it is able to deal with such input during synthesis. FIG0 details an unrolled Auto-Conditioned RNN with condition length u = v = 4, and FIG0 shows a more detailed view of our network. The method of BID0 also proposes using network output during training, but does so stochastically, without fixing condition lengths. However, we found that changing the condition/ground-truth length while keeping the proportion of ground-truth input fixed affects both the accuracy and variation of the output. See Figure 9 in the appendix.

Auto-conditioning also has the interpretation of training the network to produce longer sequences without further input. Whereas with standard training the network error is measured only against p u+1 when conditioned with [g 1, ..., g u], under auto-conditioning the error is computed on the entire sequence p u+1, ..., p u+v w.r.t. the same input.
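The alternation between ground-truth frames and the network's own outputs can be sketched as follows; this is a schematic illustration in which the RNN hidden state is kept implicit inside model_step and the loss computation is omitted:

```python
def auto_conditioned_inputs(ground_truth, model_step, u=5, v=5):
    """Build the mixed conditioning stream used during acRNN/acLSTM training.

    ground_truth: list of ground-truth frames [g_1, ..., g_T].
    model_step: callable mapping the current input frame to the predicted next
    frame (the RNN's internal state is assumed to be carried inside it).
    For every u ground-truth frames fed to the network, the next v inputs are
    the network's own predictions.
    """
    inputs, outputs = [], []
    t = 0
    while t < len(ground_truth) - 1:
        # u steps conditioned on ground truth
        for _ in range(u):
            if t >= len(ground_truth) - 1:
                break
            inputs.append(ground_truth[t])
            outputs.append(model_step(inputs[-1]))
            t += 1
        # v steps conditioned on the network's own previous output
        for _ in range(v):
            if t >= len(ground_truth) - 1:
                break
            inputs.append(outputs[-1])
            outputs.append(model_step(inputs[-1]))
            t += 1
    return inputs, outputs
```

In actual training, the loss is computed between outputs[t] and ground_truth[t + 1] at every step, regardless of whether the input at that step was ground truth or a previous prediction.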
This effectively forces the network to produce v frames of output simultaneously as opposed to only one. Martinez et al. BID25 ) also use contiguous sequences of network output during training, but unlike us they do not alternate these with ground-truth input at regular intervals. Data Representation. We use the publicly available CMU motion-capture dataset for our experiments. The dataset is given as sequences of 57 skeleton joint positions in 3D-space. First, we define a root joint, whose position at time t is given by r t = (r 1,t, r 2,t, r 3,t). In order to better capture relative motion, we instead use the displacement of the root from the previous frame for input -r t = (r 1,t − r 1,t−1, r 2,t − r 2,t−1, r 3,t − r 3,t−1). For every other joint at time t, with position j t = (j 1,t, j 2,t, j 3,t), we represent it as as the relative distance in the world-coordinate system to the root joint,j t = (j 1,t − r 1,t, j 2,t − r 2,t, j 3,t − r 3,t). All distances are stored in meters. We use a skeleton with height 1.54 meters in neutral pose for all experiments. We found this representation to be desirable for several reasons. Primarily, if there is periodic motion in the dataset, we would like frames at the same point in the repeated activity to have small Euclidean distance. If we instead used absolute positions, even if it were only for the hip, this would certainly not be the case. We note that there are alternative representations which achieve the same property. BID4 express joint positions as rotations relative to a parent joint, and BID12 define them in the body's relative coordinate system along with a relative rotation of the body w.r.t. the previous frame. Training. We train the acLSTM with three fully connected layers with a memory size of 1024, similar to BID4 BID14 BID1. The main difference is that for every u ground-truth inputs of the time series, we connect v instances of the network's own output into its subsequent input streams (see section 2.1). In the main body of the paper, we set u = v = 5. We carry out further experiments with varying u and v in the appendix. We train with a sequence length of 100 for 500000 iterations using the ADAM backpropogation algorithm BID17 on an NVIDIA 1080 GPU for each dataset we experiment on. We use Euclidean loss for our objective function. The initial learning rate is is set to 0.0001. We implement the training using the python caffe framework BID15. We sample sequences at multiple frame-rates as well as rotate the sequence randomly in order to increase the training size. In detail, if at time t we input the network H with ground truth, then the loss is given by: DISPLAYFORM2 where x t and x t+1 are the ground truth motions for time steps t and t + 1.If at time t the network is input with its own previous output, the loss is given by: DISPLAYFORM3 DISPLAYFORM4. k indicates how many times the network has fed itself is its own input since the last injection of ground-truth. It is bounded by the condition length (see previous section). We evaluate our synthesized motion on different networks trained on each of four distinct subsets from the CMU motion capture database: martial arts, Indian dance, Indian/salsa hybrid, and walking. An anonymous video of the can be found here: https://youtu.be/FunMxjmDIQM.Quantitative Results. TAB0 shows the prediction error as Euclidean distance from the ground truth for different motion styles at various time frames. 
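As a side note on the data representation described above, the conversion from absolute joint positions to the root-displacement-plus-relative-joints encoding can be sketched as follows; the array layout, with joint 0 taken as the root (hip) joint, is an assumption made for illustration.

```python
import numpy as np

def to_relative_representation(joints):
    """Convert absolute joint positions into a root-relative representation.

    joints: (T, J, 3) array of world-space joint positions in meters,
            with joints[:, 0] taken to be the root (hip) joint.
    Returns a (T, J, 3) array where channel 0 holds the per-frame root displacement
    and every other joint is expressed relative to the root of the same frame.
    """
    rel = np.empty_like(joints)
    root = joints[:, 0]                               # (T, 3) root trajectory
    rel[0, 0] = 0.0                                   # no displacement for the first frame
    rel[1:, 0] = root[1:] - root[:-1]                 # root displacement from previous frame
    rel[:, 1:] = joints[:, 1:] - root[:, None, :]     # all other joints relative to the root
    return rel
```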
We compare with a 3-layer LSTM (LSTM-3LR), ERD BID4, the seq2seq framework of BID25, as well as scheduled sampling BID0. Though our goal is long-term stable synthesis and not prediction, it can be seen in TAB0 that acLSTM performs reasonably well in this regard, even achieving the best performance at most time-scales for the Indian dance category. The method of BID25 performs the best in the short term, but has the worst error by far past the half-second mark. As noted in BID4, the stochasticity of human motion makes longer-term prediction infeasible. TAB3 in the appendix shows this error difference between versions of the Indian dance network trained with different condition lengths. Figure 2 shows the average change between subsequent frames at different frame times for the acLSTM and the basic scheme. For the more complex motions of Martial Arts and Indian Dance, it is clear from the graph that acLSTM continues producing motion in the long term while the basic training scheme (LSTM-3LR) without auto-conditioning results in stagnated motion, as the network freezes into a converged mean position. Likewise, Figure 9 in the appendix shows this average change for the Indian dance network trained with different condition lengths. We note that while the methods of BID4 BID25 do not simply freeze completely, as with the basic scheme, their motion becomes unrealistic at around the same time (FIG3). This is consistent with the observations of the original authors. Qualitative Results. Figure 4 shows several example frames taken from 50-second synthesized outputs, representing both the extended long-term complexity and plausibility of the output. In comparison, our implementations of ERD and seq2seq are only able to generate motion for a couple of seconds before they become unrealistic (FIG3). Scheduled sampling BID0 performs better than ERD and seq2seq, but also freezes eventually, perhaps because it does not force the network to generate consistently longer sequences during training as our method does. We also demonstrate the possibility of creating hybrid motions by mixing training sets in the third row of Figure 4. It should be noted that the motion in our framework, while never permanently failing, also does not remain perfectly realistic, and short-term freezing does sometimes occur. This, however, does not occur for the network trained just on walking. Perhaps the movement of the feet in conjunction with the absolute movement of the root joint is not as easy to discern when the feet leave the ground aperiodically, or when there are jumping motions. Figure 4: Motion sequences generated by acLSTM, sampled at various frames. Motion style from top to bottom: martial arts, Indian dancing, Indian/salsa hybrid and walking. All the motions are generated at 60 fps, and are initialized with 10 frames of ground truth data randomly picked from the database. The number at the bottom of each image is the frame index. The images are rendered with BVHViewer 1.1 BID39. When motion does stagnate, it recovers relatively quickly, and the motion never diverges or freezes completely (see Figure 6). The short-term freezing could possibly be explained by "dead-times" in the training sequence, where the actor is beginning or ending a sequence which involves a rest position.
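The freezing diagnostic behind Figure 2 and Figure 9 can be sketched as below; the exact smoothing used for the plots is not stated, so the window parameter here is only illustrative.

```python
import numpy as np

def average_frame_change(poses, window=30):
    """Average Euclidean change between subsequent frames, smoothed over a window.

    poses: (T, D) array of synthesized pose vectors.
    A curve that decays towards zero indicates the motion is freezing into a mean
    pose; a curve that stays roughly constant indicates continued motion.
    """
    diffs = np.linalg.norm(poses[1:] - poses[:-1], axis=1)   # per-frame change, shape (T-1,)
    kernel = np.ones(window) / window
    return np.convolve(diffs, kernel, mode="valid")          # smoothed change-over-time curve
```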
Note that when training with two different datasets, as in the case of the Indian/Salsa combined network, motion borrows from both dance styles. We also demonstrate in Figure 5 that our method does not freeze even after 20,000 frames of synthesis, which is approximately 333 seconds of output. One can see a qualitative comparison of acLSTM with a basic LSTM-3LR in Figure 3, both trained on the Indian dance dataset. We find the performance of the vanilla network to be consistent with the results reported in BID4 BID14 BID1 BID25, freezing at around 1000 ms. It never recovers from this frozen state. Our network, on the other hand, continues producing varied motion for the same time frame. We have shown the effectiveness of the acLSTM architecture to produce extended sequences of complex human motion. We believe our work demonstrates qualitative state-of-the-art results in motion generation, as all previous work has focused on synthesizing relatively simple human motion for extremely short time periods. These works demonstrate motion generation up to a couple of seconds at most, while acLSTM does not fail even after over 300 seconds. Though we are as of yet unable to prove indefinite stability, it seems empirically that acLSTM can generate arbitrarily long sequences. Current problems that exist include choppy motion at times, self-collision of the skeleton, and unrealistic sliding of the feet. Further development of GAN methods, such as BID19, could result in increased realism, though these models are notoriously hard to train as they often result in mode collapse. Combining our technique with physically based simulation to ensure realism after synthesis is also a potential next step. Finally, it is important to study the effects of using various condition lengths during training. We begin the exploration of this topic in the appendix, but further analysis is needed. Figure 9 might imply some sort of trade-off between motion change over time and short-term motion prediction error when training with different condition lengths. However, it is also possible that limiting motion magnitude on this particular dataset might correspond to lower error. Further experiments with various condition lengths on several motion styles need to be conducted to say anything meaningful about the effect. C VISUAL DIAGRAM OF AUTO-CONDITIONED LSTM
Synthesize complex and extended human motions using an auto-conditioned LSTM network
1,707
scitldr
{\em Saliency methods} attempt to explain a deep net's decision by assigning a {\em score} to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input. Recently \citet{adebayosan} questioned the validity of many of these methods since they do not pass simple {\em sanity checks}, which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for inputs. % for the inputs. %Surprisingly, the tested methods did not pass these checks: the explanations were relatively unchanged. We propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call {\em competition for pixels}. This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map. Some theoretical justification is provided for it and its performance is empirically demonstrated on several popular methods. Saliency methods attempt to explain a deep net's decision to humans by assigning a score to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input (from now on refered to as just "gradient"). Here we are interested in tasks involving multiclass classification, and for simplicity the exposition will assume the input is an image. Then a saliency method assigns scores to input pixels, which are presented as a heat map. (Extensions of these ideas to higher-level features of the net will not be discussed here.) While gradient alone is often too noisy, it as well as related notions are the basis of other more successful methods. In Gradient Input the pixel score is the product of the corresponding coordinate of gradient vector with the pixel value. Layer-wise Relevance Propagation (LRP) uses a back-propagation technique where every node in the deep net receives a share of the output which it distributes to nodes below it. This happens all the way to the input layer, whereby every pixel gets assigned a share of the output, which is its score. Another rule Deep-Lift does this in a different way and is related to Shapley values of cooperative game theory. is a state of the art method that performs an efficient approximation of the Shapley values. The perceived limitations of these methods in turn motivated a long list of new ones. provides a survey of existing methods, and brief discussion is presented in Section 2. The focus of the current paper is an evaluation of saliency methods called sanity checks in. This involves randomizing the model parameters or the data labels (see Section 2 for details). The authors show that maps produced using corrupted parameters and data are often difficult to visually distinguish from those produced using the original parameters and data. The authors concluded that "...widely deployed saliency methods are independent of both the data the model was trained on, and the model parameters." The current paper shows how to pass sanity checks via a simple modification to existing methods: Competition for pixels. Section 3 motivates this idea by pointing out a significant issue with previous methods: they produce saliency maps for a chosen output (label) node using gradient information only for that node while ignoring the gradient information from the other (non-chosen) outputs. 
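For concreteness, a per-label Gradient ⊙ Input map of the kind discussed above can be computed as in the following PyTorch-style sketch; the model and tensor shapes are assumptions for illustration and this is not tied to any particular implementation in the paper.

```python
import torch

def gradient_times_input_maps(model, image):
    """Compute a Gradient * Input saliency map for every class logit.

    model: a classifier returning logits of shape (1, K)
    image: input tensor of shape (1, C, H, W)
    Returns a tensor of shape (K, C, H, W) with one map per label.
    """
    image = image.clone().requires_grad_(True)
    logits = model(image)                                        # (1, K)
    maps = []
    for k in range(logits.shape[1]):
        grad = torch.autograd.grad(logits[0, k], image, retain_graph=True)[0]
        maps.append((grad * image).detach()[0])                  # elementwise product with the input
    return torch.stack(maps)                                     # (K, C, H, W)
```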
To incorporate information from non-chosen labels/outputs in the multiclass setting we rely on an axiom called completeness satisfied by many saliency methods, according to which the sum of pixel scores in a map is equal to the value of the chosen node (see Section 3). Existing methods design saliency maps for all outputs and the map for each label satisfies completeness. One can then view the various scores assigned to a single pixel as its "votes" for different labels. The competition idea is roughly to zero out any pixel whose vote for the chosen label was lower than for another (nonchosen) label. Section 4 develops theory to explain why this modification helps pass sanity checks in the multi-class setting, and yet produces maps not too different from existing saliency maps. It also introduces a notion called approximate completeness and suggests that it is both a reasonable alternative to completeness in practice, and also allows our analysis of the competition idea to go through. We the present an new empirical finding that saliency methods that were not designed to satisfy completeness in practice seem to satisfy approximate completeness anyway. This may be relevant for future research in this area. Section 5 reports experiments applying the competition idea to three well-regarded methods, Gradient Input, LRP, and DASP, and shows that they produce sensible saliency maps while also passing the sanity checks. List of testbeds and methods is largely borrowed from , except for inclusion of DASP, which draws inspiration from cooperative game theory. and provide surveys of saliency methods. Brief descriptions of some methods used in our experiments appear in Appendix Section 7.1. Here we briefly discuss the issue most relevant to the current paper, which is the interplay between tests/evaluations of saliency methods and principled design of new methods. Controversies here often boil down to interpretations of the word "saliency," which method designers have sought to codify via axioms. (Completeness is a simple example.) Evaluations then suggest that the axioms fail to ensure other desirable properties. To give an example, does the map change significantly if we blank out or modify a portion of the image that humans find insignificant, as depicted in? But in such cases it is sometimes unclear if the failure is due to the method alone, or traces to other unsolved issues in deep learning related to distribution shift, domain adaptation, adversarial examples, etc. ROAR evaluation greys out pixels/features found informative by a saliency method, and retrains the classifier on these modified inputs. The method is considered low quality if the accuracy drop is less than greying out the same fraction of randomly-chosen pixels. Many popular methods (including some tested here) do poorly on this, and a subset of ensemble methods outperform a random assignment of saliency scores. But clearly this evaluation is using a different definition of saliency than the axiomatic approaches. The axiomatic approaches seek to find a set of pixels that are sufficient to justify the output label. Since real-life images have high redundancy, multiple sets of pixels could be sufficient to justify the label. ROAR defines the goal as identifying every pixel that is potentially relevant, and then ensembling would clearly help. (To see this, imagine taking any dataset and duplicating each feature k times.) 
The current paper sidesteps some of these controversies by focusing solely on the sanity checks evaluation of , and their exact testbed and framework. At first sight the problems uncovered by the sanity checks appear quite serious. While the distribution-shift based evaluations were asking "Does the map shift too much upon changes to the input that humans find insignificant?," the sanity checks asks the simpler question "Does the map fail to shift when the model/labels are dramatically changed to become nonsensical?" So it is surprising that most methods already failed the following sanity checks (the authors suggest there could be others). The model parameter randomization test. According to the authors, this "compares the output of a saliency method on a trained model with the output of the saliency method on a randomly initialized untrained network of the same architecture." The saliency method fails if the maps are similar for trained models and randomized models. The randomization can be done in a cascading or layerwise fashion. The data randomization test "compares a given saliency method applied to a model trained on a labeled data set with the method applied to the same model architecture but trained on a copy of the data set in which we randomly permuted all labels." Clearly the model in the second case has learnt no useful relationship between the data and the labels and does not generalize. The saliency method fails if the maps are similar in the two cases on test data. To the best of our knowledge, no subsequent paper has designed reasonable saliency methods that pass the sanity checks. The idea of competition suggests itself naturally when one examines saliency maps produced using all possible labels/logits in a multiclass problem, rather than just the chosen label. Figure 1 shows some Gradient Input maps produced using AlexNet trained on MNIST, where the first layer was modified to accept one color channel instead of 3. Notice: Many pixels found irrelevant by humans receive heat (i.e., positive value) in all the maps, and many relevant pixels receive heat in more than one map. Our experiments showed similar phenomenon on more complicated datasets such as ImageNet. Figure 1 highlights an important point of , which is that many saliency maps pick up a lot of information about the input itself -e.g., presence of sharp edgesthat may be incidental to the final classification. Furthermore, these incidental features can survive during the various randomization checks, leading to failure in the sanity check. Thus it is a natural idea to create a saliency map by combining information from all labels, in the process filtering out or downgrading the importance of incidental features. Suppose the input is x and the net is solving a k-way classification. We assume a standard softmax output layer whose inputs are k logits, one per label. Let x be an input, y be its label and y denote the corresponding logit. The input image is of the number 3, which is clearly visible in all maps. Note how maps computed using logits/labels "7" and " 5" assign red color (resp., blue color) to pixels that would have been expected to be present (resp., absent) in those digits. The last figure shows the map produced using our CGI method. Usually prior methods do not combine information from maps from different labels, but we wish to do so to design a simple competition among labels for pixels. 
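For reference, the model-parameter randomization test described above can be sketched as follows; saliency_fn is a hypothetical interface for whatever saliency method is being checked, and ordering layers by module traversal is only an approximation of the top-down cascade.

```python
import copy
import torch

def cascading_randomization_check(model, saliency_fn, image, label):
    """Re-initialize layers one at a time, from the top of the network down, and
    record the saliency map after each step. A method passes the sanity check if
    the maps change substantially once layers are randomized.

    saliency_fn(model, image, label) -> saliency map tensor    # assumed interface
    """
    model = copy.deepcopy(model)                       # do not disturb the trained model
    maps = {"original": saliency_fn(model, image, label)}
    layers = [m for m in model.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    for idx, layer in reversed(list(enumerate(layers))):       # top-down, cascading
        layer.reset_parameters()                       # re-initialize weights and biases
        maps[f"randomized_down_to_layer_{idx}"] = saliency_fn(model, image, label)
    return maps
```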
A priori it can be unclear how to compare scores across labels, since this could end up being an "apples vs oranges" comparison due to potentially different scaling. However, prior work has identified a property called completeness: this requires that the sum of the pixel scores is exactly the logit value. For example, Gradient Input satisfies completeness exactly for ReLU nets with zero bias 1. (Recall that the ReLU function with bias a is ReLU (z, a) max{z − a, 0}. ) Ensuring completeness for nonzero bias requires more computationally expensive operations. Enter competition. Completeness (and also approximate completeness, a property explained below in Section 4.1) allows an apples-to-apples comparison of saliency scores from different labels, and to view them as "votes" for a label. Now consider the case where y is the label predicted by the net for input x. Suppose pixel i has a positive score for label y and an even more positive score for label y 1. This pixel contributes positively to both logit values. But remember that since label y 1 was not predicted by the net as the label, the logit y1 is less than than logit y, so the contribution of pixel x i's "vote" to y1 is proportionately even higher than its contribution to y. This perhaps should make us realize that this pixel may be less relevant or even irrelevant to label y since it is effectively siding with label y 1 (recall Figure 1). We conclude that looking at saliency maps for non-chosen labels should allow us to fine-tune our estimate of the relevance of a pixel to the chosen label. Now we formalize the competition idea. Note that positive and negative pixel scores should in general be interpreted differently; the former should be viewed as supporting the chosen label, and the latter as opposing that label. Competitive version of saliency method Input: An image, I, an underlying saliency method, S u and some chosen label y Output: A saliency map S of same dimensions as the input, I For each pixel p i in the input 1. Calculate the saliency scores assigned by each label to pixel p i by S u 2. Zero out pixel p i in the saliency map S if either (a) p i has a positive saliency score for label y that is not maximal among its saliency scores for all labels (b) p i has a negative saliency score for label y that is not minimal among its saliency scores for all labels 4 WHY COMPETITION WORKS: SOME THEORY Figure 1 suggests that it is a good idea to zero out some pixels in existing saliency maps, as the existing saliency maps are reflecting information incidental to classification, such as edges. Here we develop a more principled understanding of why adding competition (a) is aggressive enough to zero out enough pixels to help pass sanity checks on randomized nets and (b) not too aggressive so as to retain a reasonable saliency map for properly trained nets. used linear models to explain why methods like Gradient Input fail their randomization tests. These tests turn the gradient into a random vector, and if ξ 1, ξ 2 are random vectors, then x ξ 1 and x ξ 2 are visually quite similar when x is an image. (See Figure 10 in their appendix.) Thus the saliency map retains a strong sense of x after the randomization test, even though the gradient is essentially random. 
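Before continuing the argument, note that the competition rule in the boxed procedure above translates almost directly into code; the sketch below assumes the per-label maps (e.g., produced as in the earlier Gradient ⊙ Input sketch) are stacked into a single tensor.

```python
import torch

def competitive_map(per_label_maps, chosen):
    """Apply competition among labels to per-pixel saliency scores.

    per_label_maps: (K, ...) tensor, one saliency map per label
    chosen: index of the label whose map we want to keep
    A pixel keeps its score only if its positive score for the chosen label is maximal
    across labels, or its negative score is minimal across labels; otherwise it is zeroed.
    """
    chosen_map = per_label_maps[chosen]
    max_across = per_label_maps.max(dim=0).values
    min_across = per_label_maps.min(dim=0).values
    keep_pos = (chosen_map > 0) & (chosen_map >= max_across)
    keep_neg = (chosen_map < 0) & (chosen_map <= min_across)
    return torch.where(keep_pos | keep_neg, chosen_map, torch.zeros_like(chosen_map))
```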
Now it is immediately clear that with k-way competition among the labels, the saliency map would be expected to become almost blank in the randomization tests since each pixel is equally likely to receive its highest score from each label, and thus in each saliency map the pixel becomes zero with probability 1 − 1/k. Thus we would expect that adding competition enables the map to pass the sanity checks in multiclass settings, even for relatively small number of classes. Our experiments later show that the final map is indeed very sparse. But one cannot use the simple model to understand why competition yields a reasonable saliency map for properly trained nets. The reason being that the saliency map is not random and depends on the input -since completeness property requires that the sum of pixel saliencies is the label logit. We give a simple model for thinking about this dependence, which we hope will motivate further theory on the best way to aggregate information from different labels. We rely upon the completeness property, although Section 4.1 shows that a weaker property should suffice. Mental model for saliency maps. One should keep in mind that a saliency map is the outcome of a stochastic process: training a deep net, with stochasticity coming from dataset choice, random initialization, and SGD. Let y and µ x,y be random variables denoting respectively the logit value for label y on input x and the corresponding saliency map. For simplicity we assume the saliency method correlates with a certain ground truth saliency as follows. There exists a subset S x,y of pixels (e.g., the pixels of x that a human may find salient for label y) such that for each pixel in S x,y the corresponding coordinate of µ x,y is distributed as N (c, σ 2 1) and each pixel in S x,y is distributed as N (0, σ 2 2) where c = c(x, y) and σ 1, σ 2 are constants. This modeling assumption -"noisy signal" mixed with "white noise"-makes intuitive sense and has some experimental support; see Figure 2 where distribution of saliency scores does seem similar to mixture of two Gaussians (maybe three?), one centered at zero and other at some positive value. One could consider variants of our model, e.g., allowing means and variances to differ across pixels so the maps can include certain image portions (e.g., sharp edges) with very high probability. These don't qualitatively affect the phenomena discussed below. Now µ x,y · 1 is the sum of pixel saliencies (where 1 is the all-1's vector), which by completeness, is y. By linearity of expectation E[y] = c · |S x,y |. By measure concentration, with probability at least 1 − exp(−t 2 /4) we have Since the sets are fairly large (say, reasonable fraction of all pixels), this concentration should be good 2. After applying competition, the saliency map changes to µ x,y I x,y where I x,y is the vector with 1 in coordinates where label y wins the competition for pixels, and 0 elsewhere. Now we note various plausible conditions under which µ x,y ⊗ I x,y can be seen as a reasonable approximation to µ x,y. For simplicity we consider how well competition preserves completeness, though obviously other properties can be analysed. Theorem 1 (Informal meta-theorem) For many plausible conditions on the S x,y's there is a scale factor γ > 0 (depending on the conditions) such that the vector γµ x,y I x,y satisfies completeness up to a small additive error. Proof (sketch) One uses measure concentration. As an illustrative example suppose S x,y's are disjoint for different y's. 
For each coordinate in S x,y there is a certain probability p = p(c, σ 1, σ 2) that the coordinate is nonzero in I x,y. Thus the expected sum of coordinates in the map µ x,y I x,y is p|S x,y |, and so rescaling coordinates by c/p makes them satisfy completeness, up to an additive error given by measure concentration. Qualitatively similar are obtained when S x,y's are not disjoint but for S x,y ∩ ∪ y =y S x,y is small compared to S x,y. Remarks: Obviously, the additive error will have smaller impact for the larger logit values than for smaller ones. The above proof suggests that adding competition is akin to sampling a subset of the salient pixels. At least for image data, where pixel-level information has a lot of redundancy, the essence of the original map survives, as seen in the experiments later. While some methods satisfy completeness by design , others don't. Can our theory apply to the latter class of methods? While an exhaustive study was not performed, randomly sampling a few methods suggests that the saliency maps in practice satisfy approximate completeness anyway (theory is of course lacking), in sense of the following definition. Figure 3 depicts two saliency methods satisfying approximate completeness. Theorem 3 (Modified meta theorem) If a saliency method satisfies (α, β)-approximate completeness, then under various plausible conditions on the S x,y's similar to Theorem 1 there are constants (α, β) such that after applying competition the maps satisfy (α, β)-approximate completeness. We consider saliency methods Gradient Input, LRP, and DASP, and adding competition to them to get methods CGI, CLRP, and CDASP, respectively. CGI is tested using VGG-19 architecture trained on; LRP using VGG-16 architecture with Imagenet; DASP using a CNN model on MNIST. While these architectures are not the most powerful ones around, this is not a limitation for testing saliency methods, which are supposed to work for all architectures. Figure 4 shows underlying saliency maps versus maps with our modifications for VGG-19 on Imagenet. Applying competition does not visibly degrade the map. Some more examples (labeled "original") also appear in Figure 5. The goal of these experiments is to determine whether our modification may be applied to an underlying saliency method to pass the first sanity check in. We conduct cascading and layerwise randomization as described in from top to bottom. • The top figure in Figure 5 shows the of layerwise randomization on Gradient Input. (Figure 8 in the Appendix shows the full figure).The text underneath each image represents which layer of the model was randomized, with the leftmost label of'original' representing the original saliency map of the fully trained model. The top panel shows the saliency maps produced by CGI, and the bottom panel the maps produces by Gradient Input. We find that the Gradient Input method displays the bird no matter which layer is randomized, and that our method immediately stops revealing the structure of the bird in the saliency maps as soon as any layer is randomized. Figure 10 in the Appendix shows a similar but utilizing absolute value visualization. Notice that CGI's sensitivity to model parameters still holds. • The second figure in Figure 5 shows the for cascading randomization on Gradient Input. The rightmost column represents the original saliency map when all layer weights and biases are set to their fully trained values. 
The leftmost saliency map represents the map produced when only the softmax layer has been randomized. The image to the right of that when everything up to and including conv5 4 has been randomized, and so on. Again we find that CGI is much more sensitive to parameter randomization than Gradient Input. • The bottom figure in Figure 5 shows our for cascading randomization on LRP. We find that our competitive selection process (CLRP) benefits the LRP maps as well. The LRP maps show the structure of the bird even after multiple blocks of randomization, while our maps greatly reduce the prevalence of the bird structure in the images. • Figure 6 shows our for cascading randomization on , a state of the art method that satisfies approximate completeness. DASP still shows the structure of the digit even after randomization, while CDASP eliminates much of this structure. Figure 5: The first two figures depict layerwise and cascading randomization, respectively, for a VGG19 model trained on Imagenet for Gradient Input versus CGI. We find that in CGI, the saliency map is almost blank when any weights are reinitialized. By contrast, we find that the original Gradient Input method displays the structure of the bird, no matter which layer is randomized. The third figure depicts saliency map for cascading randomization on VGG -16 on Imagenet LRP versus CLRP. We notice that LRP shows the structure of the bird even after multiple blocks of randomization. CLRP eliminates much of the structure of the bird. We run experiments to determine whether our saliency method is sensitive to model training. We use a version of adjusted to accept one color channel instead of three On the middle image we find that using the original gradient times input method in an image where the original structure of the number 3 is still visible. On the right hand side image we find that our modification removes the structure of the original input image, as we would expect for a model that had been fitted on randomized data. and train on MNIST. We randomly permute the labels in the training data set and train the model to greater than 98 % accuracy and examine the saliency maps. Figure 7 shows our . On the left hand side is the original image. In the middle is the map produced by Gradient Input. We find that the input structure, the number 3, still shows through with the Gradient Input method. On the other hand, CGI removes the underlying structure of the number. Competition among labels is a simple modification to existing saliency methods that produces saliency maps by combining information from maps from all labels, instead of just the chosen label. Our modification keeps existing methods relevant for visual evaluation (as shown on three wellknown methods Gradient Input, LRP, and DASP) while allowing them to pass sanity checks of , which had called into question the validity of saliency methods. Possibly our modification even improves the quality of the map, by zero-ing out irrelevant features. We gave some theory in Section 4 to justify the competition idea for methods which satisfy approximate completeness. Many methods satisfy completeness by design, and experimentally we find other methods satisfy approximate completeness. We hope the simple analysis of Section 4-modeling the saliency map as "noisy signal" mixed with "white noise"-will inspire design of other new saliency maps. We leave open the question of what is the optimum way to design saliency maps by combining information from all labels 3. 
When pixel values are spatially correlated it is natural to involve that in designing the competition. This is left for future work. The sanity checks of randomize the net in a significant way, either by randomizing a layer or training on corrupted data. It is an interesting research problem to devise sanity checks that are less disruptive. also computes the gradient of the chosen class's logit. However, instead of evaluating this gradient at one fixed data point, integrated gradients consider the path integral of this value as the input varies from a baseline,x, to the actual input, x along a straight line. proposed an approach for propagating importance scores called Layerwise Relevance Propagation (LRP). LRP decomposes the output of the neural network into a sum of the relevances of coordinates of the input. Specifically, if a neural network computes a function f (x) they attempt to find relevance scores R p 4. Taylor decomposition As stated for special classes of piecewise linear functions that satisfy f (tx) = tf (x), including ReLU networks with no biases, one can always find a root point near the origin such that DeepLIFT explanation The DeepLIFT explanation calculates the importance of the input by comparing each neuron's activation to some'reference' activation. Each neuron is assigned an attribution that represents the amount of difference from the baseline that that neuron is responsible for. Reference activations are determined by propagating some reference input,x, through the neural network. Relationships between different methods. and showed that if modifications for numerical stability are not taken into account, the LRP rules are equivalent within a scaling factor to Gradient Input. showed that for ReLU networks (with zero baseline and no biases) the -LRP and DeepLIFT (Rescale) explanation methods are equivalent to the Gradient Input. , like DeepLIFT relies on the Shapley values, but designs an efficient way to approximately compute these values. Layerwise Randomization original Gradient Input conv1_1 conv1_2 conv2_1 conv2_2 conv3_1 conv3_2 conv3_3 conv3_4 conv4_1 conv4_2 conv4_3 conv4_4 conv5_1 conv5_2 conv5_3 conv5_4 fc1 fc2 softmax Figure 8: Saliency map for layer-wise randomization of the learned weights. Diverging visualization where we plot the positive importances in red and the negative importances in blue. We find that with CGI, the saliency map is almost blank when any layer is reinitialized. By contrast, we find that Gradient Input displays the structure of the bird, no matter which layer is randomized. Cascading randomization for CLRP versus LRP original LRP dense_1 fc2 fc1 block5_conv3 block5_conv2 block5_conv1 block4_conv3 block4_conv2 block4_conv1 block3_conv3 block3_conv2 block3_conv1 block2_conv2 block2_conv1 block1_conv2 block1_conv1 conv1_1 conv1_2 conv2_1 conv2_2 conv3_1 conv3_2 conv3_3 conv3_4 conv4_1 conv4_2 conv4_3 conv4_4 conv5_1 conv5_2 conv5_3 conv5_4 fc1 fc2 softmax Figure 10: Saliency map for layer-wise randomization of the learned weights. Absolute value visualization where we plot the absolute value of the saliency map. We find that using CGI, the saliency map is almost blank when any layer is reinitialized. By contrast, we find that Gradient Input displays the structure of the bird, no matter which layer is randomized.
We devise a mechanism called competition among pixels that allows (approximately) complete saliency methods to pass the sanity checks.
1,708
scitldr
Classification systems typically act in isolation, meaning they are required to implicitly memorize the characteristics of all candidate classes in order to classify. The cost of this is increased memory usage and poor sample efficiency. We propose a model which instead verifies using reference images during the classification process, reducing the burden of memorization. The model uses iterative non-differentiable queries in order to classify an image. We demonstrate that such a model is feasible to train and can match baseline accuracy while being more parameter efficient. However, we show that finding the correct balance between image recognition and verification is essential to pushing the model towards desired behavior, suggesting that a pipeline of recognition followed by verification is a more promising approach towards designing more powerful networks with simpler architectures. FIG3: Overview of our hybrid model in contrast with two opposing approaches. (a) Recognition network directly predicts the class given the input. (b) Verification network predicts a binary output indicating the amount of similarity or the likelihood that the two images belong to the same class. The verification network can be used to compare to all reference images from each class to produce the final class prediction. (c) Our approach, RVNN, queries for a reference image from a particular class at each time step, and makes a class prediction at the last time step. BID9. Then to classify, the image is pair-wise compared with a support image from every class and the image with the maximum similarity score is chosen. Matching networks extends the verification-based approach by outputting a prediction based on a weighted sum of similarity across classes. Additionally, the work introduces an episodic-training regime which encourages the model to better learn for the one-shot learning scenario. Prototypical Networks uses Euclidean distance in embedding space as a verification metric rather than a learned metric, while maintaining the same training regime as Matching Networks to encourage different classes to have distant means in embedding space. One recent work outside of the few-shot learning domain is the Retrieval-Augmented Convolutional Neural Network (RaCNN), which combines a CNN recognition network with a retrieval engine for support images to help increase adversarial robustness. For all the above few-shot learning approaches, verification with support images from all classes is required before a classification decision is made. Hence the classification decision is solely derived from verifications. RaCNN is closer to our approach, which uses a hybrid between verification and recognition. However, RaCNN simply retrieves the K closest support image neighbours in the embedding space, whereas our model is required to form a hypothesis as to which class to compare with. In cases in which there are a large number of classes, we expect our approach to excel. As well, this introduces a non-differentiable component in our model not present in previous work. Prior work has also looked at the concept of incorporating external knowledge into the decision making process of neural networks, often with non-differentiable components. Buck order to produce a prediction C for the class. FIG2 illustrates the full process. The subsequent sections detail the implementations of the three components as well as training considerations.
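As a rough sketch of the loop that FIG2 illustrates (and that the following sections implement), one pass of query-and-compare might look as follows; all module names, interfaces, and the way the reference comparison is fed back into the recurrent model are assumptions for illustration only.

```python
import torch

def classify_by_querying(f_embed, f_rnn, reference_bank, image, n_queries=10):
    """Iteratively query reference images and produce a class prediction.

    f_embed(image) -> feature vector                      # CNN feature extractor (assumed)
    f_rnn(features, hidden) -> (class_logits, hidden)     # recurrent query model (assumed)
    reference_bank[c] -> a reference image for class c    # external, non-differentiable lookup
    """
    img_feat = f_embed(image)
    ref_feat = torch.zeros_like(img_feat)        # no reference available before the first query
    hidden, logits = None, None
    for _ in range(n_queries):
        logits, hidden = f_rnn(torch.cat([img_feat, ref_feat], dim=-1), hidden)
        query_class = int(torch.argmax(logits))            # current hypothesis about the class
        ref_feat = f_embed(reference_bank[query_class])     # fetch and embed the queried reference
    return int(torch.argmax(logits))             # class prediction at the last time step
```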
The recurrent querying model f rnn was implemented using a Gated Recurrent Unit (GRU). We also considered passing in additional information to f rnn, such as the query that was used in the previous time steps. We implemented f q as sampling a class based on the categorical probability given by the softmax of the logits from f rnn. Therefore, f q can be written as: DISPLAYFORM0 DISPLAYFORM1 To sample S n+1, we can use the Gumbel-Max trick: DISPLAYFORM2 where g i...g k are i.i.d samples drawn from the Gumbel distribution. However, the arg max operator is not differentiable, so instead we explored two approaches during training. The first is to use the Gumbel-Softmax trick, also known as the Concrete estimator. We relaxed the arg max operator to a differentiable softmax function with temperature parameter τ: DISPLAYFORM0 The τ parameter is annealed exponentially from τ = 1 to τ = 0.5 as training progresses. The second approach is to use the simple Straight-Through estimator BID0. In the forward pass, we apply the Gumbel-Max trick to take discrete query choices. Then on the backward pass, we set the derivative of the query with respect to the softmax probabilities to be the identity, so that the outgoing gradient from the arg max operator is equal to the incoming gradient during backpropagation: DISPLAYFORM0 We highlight that our model differs from several existing networks in various aspects. The performance of the model is assessed by both reduced parameter usage and sample efficiency. Reduced parameter usage is measured relative to a baseline model, in this case the CNN architecture. The performance of the model alone does not indicate whether our approach is functioning as intended. We also experiment with several small modifications to our architecture as well as a few hyper-parameters that are unique to our model. We list them here below. • Architectural Considerations - Query Memory (QM): The query from the past time step is passed to the RNN. The RNN learns to adapt its policy to varying levels of accuracy, and we see whether a better result can be achieved when they are used in tandem. We also experiment with fixing the query policy to assess whether the RNN actually learns intelligent query behavior. From our overall performance metrics we observe that at both smaller and larger sizes of model, our model matches the baseline accuracy with lower parameter usage. This is in agreement with our hypotheses that our model would be more parameter efficient. The RNN's learned policy is compared against a random query policy and the optimal query policy (never repeating a query). From the figure we see that the model is able to conduct a better-than-random query policy, but is not able to achieve optimal performance. We also observed that performance increases with a higher RNN size up to 200. This suggests that the model is in some part able to track previously unsuccessful queries and remember if there was a match. However, its memory is not perfect and it cannot achieve optimal performance. and a real (neural network) comparator. Of the query-based models, informed queries performed the best over random and no-query models. This suggests that the appropriate pipeline for our model is to perform a recognition operation which is then followed by verification, rather than performing them simultaneously.
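The two gradient estimators described above for the non-differentiable query step can be sketched as follows; PyTorch also ships a built-in F.gumbel_softmax, so this is only meant to make the forward/backward behaviour explicit, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def sample_query_gumbel(logits, tau=1.0, straight_through=False):
    """Draw a (relaxed or discrete) query from class logits while keeping a gradient path.

    Relaxed mode: Gumbel-Softmax with temperature tau (annealed, e.g., from 1.0 to 0.5).
    Straight-through mode: discrete arg max (Gumbel-Max) in the forward pass, with the
    gradient w.r.t. the softmax probabilities treated as the identity in the backward pass.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)  # Gumbel(0,1) samples
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)        # relaxed one-hot sample
    if not straight_through:
        return y_soft
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)  # discrete one-hot query
    # forward value: y_hard; backward gradient: flows through y_soft unchanged
    return (y_hard - y_soft).detach() + y_soft
```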
This new pipeline would also imply that our model may be less well suited for the one-shot learning task than initially believed, as a reasonably-well trained recognition module is required as the first stage. Tasks of this form are pre-requisite if neural-network models are considered to be as such BID5.
Image classification via iteratively querying for reference image from a candidate class with a RNN and use CNN to compare to the input image
1,709
scitldr
To reduce memory footprint and run-time latency, techniques such as neural network pruning and binarization have been explored separately. However, it is unclear how to combine the best of the two worlds to get extremely small and efficient models. In this paper, we, for the first time, define the filter-level pruning problem for binary neural networks, which cannot be solved by simply migrating existing structural pruning methods for full-precision models. A novel learning-based approach is proposed to prune filters in our main/subsidiary network framework, where the main network is responsible for learning representative features to optimize the prediction performance, and the subsidiary component works as a filter selector on the main network. To avoid gradient mismatch when training the subsidiary component, we propose a layer-wise and bottom-up scheme. We also provide a theoretical and experimental comparison between our learning-based and greedy rule-based methods. Finally, we empirically demonstrate the effectiveness of our approach applied on several binary models, including binarized NIN, VGG-11, and ResNet-18, on various image classification datasets. For binary ResNet-18 on ImageNet, we use 78.6% of the filters but can achieve a slightly better test error of 49.87% (50.02%-0.15%) than the original model. Deep neural networks (DNN), especially deep convolutional neural networks (DCNN), have made remarkable strides during the last decade. From the first ImageNet Challenge winner network, AlexNet, to the more recent state-of-the-art, ResNet, we observe that DNNs are growing substantially deeper and more complex. These modern deep neural networks have millions of weights, rendering them both memory intensive and computationally expensive. To reduce computational cost, research into network acceleration and compression has emerged as an active field. A family of popular compression methods are the DNN pruning algorithms, which are not only efficient in both memory and speed, but also enjoy a relatively simple procedure and intuition. This line of research is motivated by the theoretical analysis and empirical discovery that redundancy does exist in both human brains and several deep models BID7 BID8. We can categorize existing research according to the level of the pruned object, such as connection (weights)-level pruning, unit/channel/filter-level pruning, and layer-level pruning BID28. Connection-level pruning is the most widely studied approach, which produces sparse networks whose weights are stored as sparse tensors. Although both the memory footprint and the I/O consumption are reduced BID12, such methods are often not helpful towards the goal of computation acceleration unless specifically-designed hardware is leveraged. This is because the dimensions of the weight tensor remain unchanged, though many entries are zeroed-out. As a well-known fact, MAC operations on randomly structured sparse matrices are generally not much faster than the dense ones of the same dimension. In contrast, structural pruning techniques BID28, such as unit/channel/filter-level pruning, are more hardware friendly, since they aim to produce tensors of reduced dimensions or having specific structures. Using these techniques, it is possible to achieve both computation acceleration and memory compression on general hardware, and such support is common in deep learning frameworks. We consider the structural network pruning problem for a specific family of neural networks - binary neural networks.
A binary neural network is a compressed network of a general deep neural network through the quantization strategy. Convolution operations in DCNN 1 inherently involve matrix multiplication and accumulation (MAC). MAC operations become much more energy efficient if we use low-precision (1 bit or more) fixed-point number to approximate weights and activation functions (i.e., to quantify neurons) BID3. To the extreme extent, the MAC operation can even be degenerated to Boolean operations, if both weights and activation are binarized. Such binary networks have been reported to achieve ∼58x computation saving and ∼32x memory saving in practice. However, the binarization operation often introduces noises into DNNs, thus the representation capacity of DNNs will be impacted significantly, especially if we also binarize the activation function. Consequently, binary neural networks inevitably require larger model size (more parameters) to compensate for the loss of representation capacity. Although Boolean operation in binary neural networks is already quite cheap, even smaller models are still highly desired for low-power embedded systems, like smart-phones and wearable devices in virtual reality applications. Even though quantization (e.g., binarization) has significantly reduced the redundancy of each weight/neuron representation, our experiment shows that there is still heavy redundancy in binary neural networks, in terms of network topology. In fact, quantization and pruning are orthogonal strategies to compress neural networks: Quantization reduces the precision of parameters such as weights and activations, while pruning trims the connections in neural networks so as to attain the tightest network topology. However, previous studies on network pruning are all designed for full-precision models and cannot be directly applied for binary neural networks whose both weights and activations are 1-bit numbers. For example, it no longer makes any sense to prune filters by comparing the magnitude or L 1 norm of binary weights, and it is nonsensical to minimize the distance between two binary output tensors. We, for the first time, define the problem of simplifying binary neural networks and try to learn extremely efficient deep learning models by combining pruning and quantization strategies. Our experimental demonstrate that filters in binary neural networks are redundant and learning-based pruning filter selection is constantly better than those existing rule-based greedy pruning criteria (like by weight magnitude or L 1 norm).We propose a learning-based method to simplify binary neural network with a main-subsidiary framework, where the main network is responsible for learning representative features to optimize the prediction performance, whereas the subsidiary component works as a filter selector on the main network to optimize the efficiency. The contributions of this paper are summarized as follows:• We propose a learning-based structural pruning method for binary neural networks to significantly reduce the number of filters/channels but still preserve the prediction performance on large-scale problems like the ImageNet Challenge.• We show that our non-greedy learning-based method is superior to the classical rule-based methods in selecting which objects to prune. We design a main-subsidiary framework to iteratively learn and prune feature maps. Limitations of the rule-based methods and advantages of the learning-based methods are demonstrated by theoretical and experimental . 
In addition, we also provide a mathematical analysis for L 1 -norm based methods.• To avoid gradient mismatch of the subsidiary component, we train this network in a layerwise and bottom-up scheme. Experimentally, the iterative training scheme helps the main network to adopt the pruning of previous layers and find a better local optimal point.2 RELATED WORK 2.1 PRUNING Deep Neural Network pruning has been explored in many different ways for a long time. BID13 proposed Optimal Brain Surgeon (OBS) to measure the weight importance using the second-order derivative information of loss function by Taylor expansion. BID9 further adapts OBS for deep neural networks and has reduced the retraining time. Deep Compression BID12 prunes connections based on weight magnitude and achieved great compression ratio. The idea of dynamic masks BID10 is also used for pruning. Other approaches used Bayesian methods and exploited the diversity of neurons to remove weights BID23 BID22. However, these methods focus on pruning independent connection without considering group information. Even though they harvest sparse connections, it is still hard to attain the desired speedup on hardware. To address the issues in connection-level pruning, researchers proposed to increase the groupsparsity by applying sparse constraints to the channels, filters, and even layers BID28 BID0 BID25 BID1. used LASSO constraints and reconstruction loss to guide network channel selection. introduced L 1 -Norm rank to prune filters, which reduces redundancy and preserves the relatively important filters using a greedy policy. BID21 leverages a scaling factor from batch normalization to prune channels. To encourage the scaling factor to be sparse, a regularization term is added to the loss function. On one hand, methods mentioned above are all designed for full-precision models and cannot be trivially transferred to binary networks. For example, to avoid introducing any non-Boolean operations, batch normalization in binary neural networks (like XNOR-Net) typically doesn't have scaling (γ) and shifting (β) parameters BID3. Since all weights and activation only have two possible values {1, −1}, it is also invalid to apply classical tricks such as ranking filters by their L 1 -Norms, adding a LASSO constraint, or minimizing the reconstruction error between two binary vectors. On the other hand, greedy policies that ignore the correlations between filters cannot preserve all important filters. Recent work shows that full precision computation is not necessary for the training and inference of DNNs BID11. Weights quantization is thus widely investigated, e.g., to explore 16-bit BID11 and 8-bit fixed-point numbers. To achieve higher compression and acceleration ratio, extremely low-bit models like binary weights BID5 BID18 and ternary weights BID30 BID29 BID27 have been studied, which can remove all the multiplication operations during computation. Weight quantization has relatively milder gradient mismatch issue as analyzed in Section 3.1.2, and lots of methods can achieve comparable accuracy with full-precision counterparts on even large-scale tasks. However, the ultimate goal for quantization networks is to replace all MAC operations by Boolean operations, which naturally desires that both activation and weights are quantized, even binarized. The activation function of quantized network has the form of a step function, which is discontinuous and non-differentiable. 
Gradient cannot flow through a quantized activation function during backpropagation. The straight-through estimator (STE) is widely adopted to circumvents this problem, approximating the gradient of step function as 1 in a certain range BID16 BID2. BID4 proposed the Half-wave Gaussian Quantization (HWGQ) to further reduce the mismatch between the forward quantized activation function and the backward ReLU BID24. Binary Neural Networks (BNN) proposed in BID6 and BID3 use only 1 bit for both activation functions and weights, which ends up with an extremely small and faster network. BNNs inherit the drawback of acceleration via quantization strategy and their accuracy also need to be further improved.. Because both weights and activations are binary, we remove the subscripts of F b and W b for clarity. The goal of pruning is to remove certain filters W i n,:,:,:, n ∈ Ω, where Ω is the indices of pruned filters. If a filter is removed, the corresponding output feature map of this layer (which is also the input feature map of next layer) will be removed, too. Furthermore, the input channels of all filters in the next layer would become unnecessary. If all filters in one layer can be removed, the filter-level pruning will upgrade to layerlevel pruning naturally. The goal of our method is to remove as many filters as possible for binary neural networks which are already compact and have inferior numerical properties, thus this task is more challenging compared with pruning a full-precision model. We borrow the ideas from binary network optimization to simplify binary networks. While it sounds tautological, note that the optimization techniques were originally invented to solve the quantization problem, but we will show that it can be crafted to solve the pruning problem for binary networks. A new binary network, called subsidiary component, acts as learnable masks to screen out redundant features in the main network, which is the network to complete classification tasks. Each update of the subsidiary component can be viewed as the exploration in the mask search space. We try to find a (local) optimal mask in that space with the help of the subsidiary component. The process of training subsidiary and main networks is as follows: For layer i, the weights of subsidiary component M i ∈ R Ni+1×Ni×Ki+1×Ki+1 are initialized by the uniform distribution: DISPLAYFORM0 In practice, σ is chosen to be less than 10 −5. To achieve the goal of pruning filters, all elements whose first index is the same share the same value. DISPLAYFORM1 is an output tensor from the subsidiary component. In the first stage, we use the Iden(·) function (identity transforma- DISPLAYFORM2 We apply the filter mask O i to screen main network's weights W i, DISPLAYFORM3, where ⊗ is element-wise product. Ŵ i denotes the weights of the main network after transformation, which is used to be convolved with the input feature maps, F i, to produce the output feature maps F i+1 . Then, weights of the main network, W j, j ∈ [1, I], are set to be trainable while weights of the subsidiary component, M j, j ∈ [1, I], are fixed. Because subsidiary weights are fixed and initialized to be near-zero, it will not function in the Feature Learning stage, thuŝ DISPLAYFORM4 The whole main binary neural network will be trained from scratch. Training Subsidiary Component within a Single Layer i: After training the whole main network from scratch, we use a binary operator to select features in a layer-wise manner. 
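Before the selection stage is detailed, note that the layers trained in the feature-learning stage are ordinary binary convolutions; a generic sketch of such a layer, with sign binarization of weights and activations and a straight-through backward pass, is given below. This is a standard binary-convolution sketch under those assumptions, not the authors' exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, straight-through gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # pass gradient only inside [-1, 1]

class BinaryConv2d(nn.Module):
    """A main-network layer: binary weights convolved with binary input activations."""
    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        xb = BinarizeSTE.apply(x)                  # binarize incoming activations
        wb = BinarizeSTE.apply(self.weight)        # binarize the filters
        return F.conv2d(xb, wb, stride=self.stride, padding=self.padding)
```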
In opposite to the previous Feature Learning stage, the weights of all layers W j, j ∈ [1, I] of the main network and the weights except layer i of the subsidiary component M j, j ∈ [1, I]/[i] are set to be fixed, while the subsidiary component's weights at the current layer M i are trainable when selecting features for Layer i. The transformation function for the filter mask O i is changed from Iden(·) to Bin(·) (sign transformation + linear affine), DISPLAYFORM0 2 By doing this, we project the float-point M i to binarized numbers ranging from 0 to 1. Elements in O i which are equal to 0 indicate that the corresponding filters are removed and the elements of value 1 imply to keep this filter. Since Bin(·) is not differentiable, we use the following function instead of the sign function in back propagation when training the subsidiary component M i BID16 BID2, DISPLAYFORM1 Apart from the transformation, we also need to add regularization terms to prevent all O i from degenerating to zero, which is a trivial solution. So the loss function of training Layer i in the subsidiary component is, arg min DISPLAYFORM2 where L cross entropy is the loss on data and L distill is the distillation loss defined in.Finally, we fix the layers M j, j ∈ [1, I] in the subsidiary component and layers before i in the main network (i.e., W j, j ∈ [1, i − 1]), and retrain the main layers after Layer i (i.e., W j, j ∈ [i, I]).Bottom-up Layer-wise Training for Multiple Layers: We showed how to train a layer in the subsidiary component above. To alleviate the gradient mismatch and keep away from the trivial solution during Features Selection, next, we propose a layer-wise and bottom-up training scheme for the subsidiary component: Layers closer to the input in the subsidiary component will be trained with priority. As Layer i is under training, all previous layers (which should have already been trained) will be fixed and subsequent layers will constantly be the initial near-zero value during training. There are three advantages of this training scheme. First, as in, we use STE as in BID16 BID2 to approximate the gradient of the sign function. By chain rule, for each activation node j in Layer i, we would like to compute an "error term" δ i j = ∂L ∂a i j which measures how much that node is responsible for any errors in the output. For binary neural networks, activation is also binarized by a sign function which need STE for back-propagation. The "Error term" for binary neural networks is given by, DISPLAYFORM3 where and can be obtained by the chain rule, and and are estimated from STE, which will introduce gradient mismatch into back-propagation. We refer as weight gradient mismatch issue and as activation gradient mismatch issue. They are two open problems in the optimization of binary neural networks, both caused by the quantization transform functions like Sign(·). Starting from bottom layers, we can train and fix layers who are harder to train as early as possible for the subsidiary component. In addition, because of the retraining part in Features Selection, bottom-up training scheme allows bottom layers to be fixed earlier, as well. In practice, this scheme in more stable training curves and can find a better local optimal point. Second, the bottom-up layer-wise training scheme helps the main network to better accommodate the feature distribution shift caused by the pruning of previous layers. 
As mentioned before, the main difference in the motivation between our pruning method and rule-based methods is that we have more learnable parameters to fit the data by focusing on the final network output. With the bottom-up and layer-wise scheme, even if the output of Layer i changes, subsequent layers in the main network can accommodate this change by modifying their features. Lastly and most importantly, we achieve higher pruning ratio by this scheme. According to our experiments, a straight-forward global training scheme leads to limited pruning ratio. Some layers are pruned excessively and hence damaged the accuracy, while some layers are barely pruned, which hurts the pruning ratio. The layer-wise scheme would enforce all layer to be out of the comfort zone and allow balancing between accuracy and pruning ratio. The pipeline of our method is as follows:1. Initialize weights of subsidiary component M j, j ∈ [1, I] with near-zero σ's.2. Set M j, j ∈ [1, I] to be fixed, and train the whole main network from scratch.3. Train starting from the first binary kernel. Each layer is the same as in the algorithm shown below:• Change the activation function for M i from Iden(·) to Bin(·). And all other parameters apart from M i are fixed. Train subsidiary component according to.• Fix the subsidiary layers M j, j ∈ [1, I] and main layers before i-th layer W j, j ∈ [1, i − 1], and retrain main layers after i-th layer W j, j ∈ [i, I]. Though pruning network filters is not an explicit transfer learning task, the aim is to guide the thin network to learn more similar output distributions with the original network. The model is supposed to learn a soft distribution but not a hard one as proposed in previous traditional classifier networks. Hence, we add a distillation loss to guide the training subsidiary component to be more stable, as shown in FIG2. DISPLAYFORM0 We set p to be the original binary neural network distribution. Because the distribution is fixed, the H(p) is a constant and can be removed from L distill. It means that the distillation loss can be written as DISPLAYFORM1 ) where z i and t i represent the final output of the pruned and original networks before the softmax layer. T is a temperature parameter for the distillation loss defined in BID17. We set T as 1 in practice. M is the number of classes. Previous methods use rules to rank the importance of each filter and then remove the top k least important filters. The rules can be weight magnitude, e.g., measured by the L 1 norm, or some other well-designed criteria. Studies in this line share the same motivation that individual filters have their own importance indication, and filters with less importance can be removed relatively safely. This assumption ignores interactions among filters. As mentioned before, rule-based pruning algorithms use a greedy way to prune filters, i.e., they assume that individual filters behave independently and their own importance (or function) for representation learning. We give a theoretical analysis in Section 3.3 about this point. In fact, pruning filters independently may cause problems when filter are strongly correlated. For example, if two filters have learned the same features (or concepts), these two filters may be pruned out together by rule-based methods, because their rankings are very close. Clearly, pruning one of them is a better choice. However, almost all these criteria are based on value statistics and are completely unsuitable for the binary scenario with only two discrete values. 
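To make the Features Selection stage concrete, here is a minimal PyTorch-style sketch of the Bin(·) transform with its straight-through estimator and a plausible form of the selection objective. The clipping range of the STE and the exact form of the sparsity regularizer are assumptions (they are not recoverable from the text above); the distillation term follows the softened cross-entropy with T = 1 described earlier.

    import torch
    import torch.nn.functional as F

    class BinSTE(torch.autograd.Function):
        # Bin(.): sign followed by a linear affine map onto {0, 1}; the backward
        # pass uses the straight-through estimator (gradient 1 for |m| <= 1 is the
        # usual choice; the paper's exact range is assumed here).
        @staticmethod
        def forward(ctx, m):
            ctx.save_for_backward(m)
            return (torch.sign(m) + 1.0) / 2.0

        @staticmethod
        def backward(ctx, grad_out):
            (m,) = ctx.saved_tensors
            return grad_out * (m.abs() <= 1.0).float()

    def distillation_loss(student_logits, teacher_logits, T=1.0):
        # Softened cross-entropy between teacher and student outputs (T = 1 in practice);
        # the constant entropy H(p) of the fixed teacher distribution is dropped.
        p = F.softmax(teacher_logits / T, dim=-1)
        log_q = F.log_softmax(student_logits / T, dim=-1)
        return -(p * log_q).sum(dim=-1).mean()

    def selection_loss(student_logits, teacher_logits, labels, mask_bits, alpha):
        # Objective for training layer i of the subsidiary component:
        # data cross-entropy + distillation loss + a regularizer on the mask.
        # The regularizer's exact form is an assumption (an L1-style term on the kept bits).
        ce = F.cross_entropy(student_logits, labels)
        return ce + distillation_loss(student_logits, teacher_logits) + alpha * mask_bits.sum()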
One possible pruning method is to exhaustively search the optimal pruning set, but this is NP-Hard and prohibitive for modern DNNs that have thousands of filters. Our method uses the subsidiary component to "search" the optimal solution. Our soft "search" strategy is gradient-based and batch-based compared to exhaustive search, and it is much more efficient. If our main network is full-precision, the L 1 -Norm based pruning technique would be strongly relevant to our method, except that we target at optimizing the final output of the network, whereas the L 1 -Norm based method greedily controls the perturbation of the feature map in the next layer. Suppose that W = [w 1 ; . . . ; w n] is the original filter blocked by rows, W = [w 1 ; . . . ; w n] is the pruned filter, and x is the input feature map. Let ∆w i ≡ w i − w i. Then, the L 1 -Norm approach minimizes the upper bound of the following problem: max x ∞ <τ W x − W x. To see this, note DISPLAYFORM0 To minimize i ∆w 1 by zeroing-out a single row w i, obviously, the solution is to select the one with the smallest L 1 -Norm. However, note that this strategy cannot be trivially applied for binary networks, because the L 1 -Norm for any filter that is a {−1, +1} tensor of the same shape is always identical. Previous work (He et al.) uses the LASSO regression to minimize the reconstruction error of each layer: DISPLAYFORM1 β 0 ≤ C. Solving this L 0 minimization problem is NPhard, so the L 0 regularization is usually relaxed to L 1. In the binary/quantization scenario, activations only have two/several values and the least reconstruction error is not applicable. Instead of minimizing the reconstruction error of a layer, our method pays attention on the final network output with the help of the learnable subsidiary component. We directly optimize the discrete variables of masks (a.k.a subsidiary component) without the relaxation. To evaluate our method, we conduct several pruning experiments for VGG-11, Net-InNet (NIN), and ResNet-18 on CIFAR-10 and ImageNet. Since our goal is to simplify binary neural networks, whose activation and weights are both 1-bit, all main models and training settings in our experiments inherit from XNOR-Net BID26 ). Since we are, to the best of our knowledge, the first work to define filter-level pruning for binary neural networks, we proposed a rule-based method by ourselves as the baseline. Instead of ranking filters according to the L 1 -Norm, we use the magnitude of each filter's scaling factor (MSF) as our pruning criterion. Inspired by, we test both the "prune once and retrain" scheme 2 and the "prune and retrain iteratively" scheme 3. Figure 3: Learning curve for subsidiary component. We train the subsidiary component with different learning rate. These curves are smoothed for the directly seeing the trend of the learning Subsidiary Component. All the dotted lines represent the learning curve of the large learning rate 10 −3, the normal lines represent the learning curves of the small learning rate 10 −4.As pointed out in BID26 we set weights of the first layer and last layer as full-precision, which also means that we only do pruning for the intermediate binary layers. We measure effectiveness of pruning methods in terms of PFR, the ratio of the number of pruned filters to original filter number, and error rate before and after retraining. For error ratio, smaller is better. For PFR, larger is better. 
For CIFAR-10, when training the main network, learning rate starts from 10 −4, and learningrate-decay is equal to 0.1 for every 20 epochs. Learning rate is fixed with 10 −3 when training the subsidiary component. For ImageNet, we set a constant learning rate of 10 −3 for the subsidiary component and main work. For fair comparison, we control PFR for each layer of these methods to be the same to observe the final Retrain-Error. In FIG3, MSF-Layerwise refers to the "prune once and retrain" scheme, and the MSF-Cascade refers the "prune and retrain iteratively" scheme. The first three figures of experiments were done on the CIFAR-10 dataset. The last figure refers to on Imagenet. 4.1 NIN AND VGG-11 ON CIFAR-10 NIN is a fully convolutional network, using two 1 × 1 convolution layers instead of fully connected layer, and has quite compact architecture. VGG-11 is a high-capacity network for classification. VGG-11 on CIFAR-10 consists of 8 convolutional layers(including 7 binary layers) and 1 fully connected layers. Batch normalization is used between every binary convolution and activation layer, which makes the training process more stable and converge with high performance. For both MSF-Layerwise and MSF-Cascade, with the same PCR, the performance is worse than us. With 30% ∼ 40% of pruning filter ratio, the pruned network error rate only increased 1% ∼ 2%. An interesting phenomenon is observed when training subsidiary components for different models. We try different learning rates in our experiments and observe it impacts final convergent point a lot as shown in Figure 3. The relatively smaller learning rate (10 −4) will converge with lower accuracy and higher pruning number; however, the larger learning rate (10 −3) leads to the opposite . One possible explanation is that the solution space of the high-dimensional manifold for binary neural networks is more discrete compared to full-precision networks, so it is difficult for a subsidiary component to jump out of a locally optimal point to a better one. Moreover, in the binary scenario, larger learning rate will increase the frequency of value changing for weights. Our motivation is to use a learnable subsidiary components to approximate exhaustive search, so using a larger learning rate will enable the subsidiary component to "search" more aggressively. A large learning rate may be unsuitable for normal binary neural networks like the main network in this paper, but it is preferred by the subsidiary component. As mentioned in section 3.1.1, we use the uniform distribution to initialize the mask. According to the expectation of the uniform distribution, E(SP) = 0.5, where SP is the ratio of the number of positive elements in subsidiary weights to size of weights. However, since we use Sign(·), different SP may impact the a lot. We conduct six experiments on different models across different layers and show that initialization with 0.4, 0.6, 1.0 SP will all converge to the same state. However, when SP is 0.2, final performance will be very poor. A possible reason is that the number of filters thrown out by the initialization is too large, and due to the existence of the regularization term, the network's self-adjustment ability is limited and cannot converge to a good state. Hence we recommend the SP to be intialized to greater than 0.4. Compared with NIN and VGG-11, ResNet has identity connections within residual block and much more layers. 
As the depth of the network increases, its capacity also increases, which in turn leads to more redundancy. From our experimental results, we find that when the identity-mapping path of a residual block contains a downsampling layer, the overall sensitivity of that block increases. Overall results for ResNet on CIFAR-10 are shown in the table, and per-layer statistics can be found in the Appendix. We further verify our method with ResNet-18 on ImageNet. α can be set from 10^-7 to 10^-9 depending on the expected PFR, so that accuracy and pruning ratio are balanced before retraining. After 20 epochs of retraining for each layer, the final PFR is 21.4%, and the retrained error decreases from 50.02% to 49.87%. [Figure 5: Gradient flow of binary neural networks during back-propagation. Rectangles represent weight tensors and ellipses represent functional operations; STE introduces the weight and activation gradient mismatches at the corresponding points. In this paper, we use the binary operation as a special quantization function. MAC is short for multiply-accumulate operations, or their equivalent substitution such as XNOR BID3 in BNNs.] For fair comparison, we control the PFR of each layer to be the same across methods and observe the final Retrain-Error. In TAB1, MSF-Layerwise refers to the "prune once and retrain" scheme, and MSF-Cascade refers to the "prune and retrain iteratively" scheme. The first three groups of experiments were done on the CIFAR-10 dataset; the last group reports results on ImageNet.
We define the filter-level pruning problem for binary neural networks for the first time and propose a method to solve it.
1,710
scitldr
Wide adoption of complex RNN based models is hindered by their inference performance, cost and memory requirements. To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired accuracy. AntMan extends knowledge distillation based training to learn the compressed models efficiently. Our evaluation shows that AntMan offers up to 100x computation reduction with less than 1pt accuracy drop for language and machine reading comprehension models. Our evaluation also shows that for a given accuracy target, AntMan produces 5x smaller models than the state-of-art. Lastly, we show that AntMan offers super-linear speed gains compared to theoretical speedup, demonstrating its practical value on commodity hardware. Remarkable advances in deep learning (DL) have produced great models across a wide variety of tasks such as computer vision, machine reading, speech generation and image recognition BID7. However, wide adoption of these models is still limited by their inference performance, cost and memory requirements. On the client side, all pervasive devices like smart-phones, tablets and laptops have limited memory and computational resources to handle large complex DL models. On the server side, intensive computation can render the models too slow to meet responsiveness requirements and too expensive to scale, preventing their deployment in production. Model Compression is a flourishing area that aims to reduce the computational and memory complexity of DL models to address the aforementioned problems without significantly affecting accuracy. Compressing Convolution Neural Networks (CNNs) have already been widely explored in the past few years BID3, while our work focuses on Recurrent Neural Networks (RNNs), which are broadly used among various natural language processing tasks BID17 BID24 BID29. It is well known that large RNN models are computation and memory intensive (Zhang et al.). In particular, their computation increases linearly with sequence length, and their recurrent unit has to be computed sequentially, one step at a time with limited parallelism, both of which makes long execution time a crucial issue for RNN inference computation. Compressing RNNs, however, is challenging, because a recurrent unit is shared across all the time steps in sequence, compressing the unit will aggressively affect all the steps. Inducing sparsity is one of the prominent approaches used for RNN compression. BID18 proposed a pruning approach that deletes up to 90% connections in RNNs. The obtained sparse matrices, however, have an irregular/non-structured pattern of non-zero weights, which is unfriendly for efficient computation in modern hardware systems BID12 BID25. To address this issue, BID19 proposed inducing block-sparsity in RNNs via pruning or group lasso regularization. Similarly, BID26 introduces ISS, intrinsic structured sparsity for LSTMs BID9, a type of RNN, such that a sparse LSTM can be transformed into a dense one but with smaller size. 
ISS conveniently turns sparsity into efficient execution, but as its sparse structure is quite coarse-grained, it is hard to push out high sparsity without degrading accuracy, especially in RNNs where the hidden dimension is smaller than input dimension (elaborated in Section 5.1).Our work explores a new line of structured sparsity on RNNs, using predefined compact structures as opposed to pruning and regularization based approaches. We take inspiration from predefined compact CNN structures such as group convolutions BID11 and depth-wise separable convolutions BID4. Specifically, we replace matrix-vector multiplications (MVs), the dominant part of RNN computations, with localized group projections (LGP).LGP divides the input and output vectors into groups where the elements of the output group is computed as a linear combination of those from the corresponding input group. In addition, to empower the information flow across multiple groups along the steps of RNN computation, we use a permutation matrix or a dense-square matrix to combine outputs across groups, helping the compact structure to retain accuracy. Furthermore, we combine LGP with low-rank matrix decomposition in order to further reduce the computations. This is possible as low rank and sparsity are complimentary to each other. Low-rank decomposition such as SVD approximates a low-rank multiplication Ax as P Qx, where P and Q are dense. By imposing LGP-based sparsity on P and Q, we reduce the computation further. For a given rank reduction factor of r, we reduce the computation cost and model size by O(r 2), compared to O(r) by using low-rank decomposition methods like SVD BID6 alone. We call our compression approach AntMan -'shrink in scale' by synergistically combining structured sparsity and low-rank decomposition, but'increase in strength' by enabling the flow across structured groups along RNN sequence to retain accuracy. To train RNN models with AntMan, we use teacher-student training paradigm BID1 by combining the label loss with teacher-MSE-loss and teacher-KL-divergence-loss. To improve the training efficiency, we develop a new technique to decide proper coefficients to obtain high accuracy efficiently with minimal trials. We evaluate AntMan on multiple RNN based models for machine reading comprehension and language modeling. For a well-known MRC model BID24, we reduce the computational complexity and model size of LSTMs (a particular type of RNN) by up to 25x with less than 1pt drop in F1 score. For PTB BID29 language model, we achieve a computational reduction of 50x with no drop in perplexity, and 100x with just a single point drop in perplexity. We also construct language models for PTB with perplexities ranging from 64 to 70, but with 3x to 5x fewer overall model weights (5x to 25x reduction in RNN weights) than the state-of-art. Last but not least, we develop efficient implementations of inference kernels on CPUs to serve models compressed by AntMan. We show that unlike computation with unstructured sparsity, AntMan offers significant performance improvement for large RNN models even with modest levels of sparsity. Our evaluations show that a 2x to 10x theoretical reduction in computation can in up to 2x to 30x actual speedup, respectively, for moderate to large RNNs, demonstrating attractive practical value of AntMan on commodity hardware. 
Compressing RNNs via Sparsity: Described in Section 1 and empirically compared in Section 5.1.Compressing RNNs via Low-Rank Approximations: BID20 BID13 use SVD to compress LSTM models by 3-4x for acoustic modeling and speech recognition tasks with negligible loss in accuracy. AntMan achieves significantly higher compression rate than SVD based methods for the same rank reduction. BID27 uses Block Tensor Decomposition to compress LSTMs for vision tasks. Their work is specifically designed to exploit redundancies present in the image vector (input to the LSTMs) obtained from upstream CNN layers, while AntMan is designed to compress general RNNs, where the inputs do not exhibit such redundancies in many cases. Teacher-Student training paradigm: Knowledge Distillation (KD) technique developed by BID8 is a popular approach to compress deep and wide networks into sparser ones, where the compressed model mimics the function learned by the complex model. KD usually optimizes a weighted average of two different objective functions. The first objective function can be one of the following three: cross entropy, or mean square error, or Kullerback Leiber divergence, all computed with respect to the soft targets, and the second objective function is the cross entropy with the correct labels. Several similar approaches BID23 BID14 BID2 BID0 BID28 extend the idea of KD.In contrast, AntMan optimally combines three objective functions, MSE loss, KL divergence loss and the cross entropy of the true labels, powered by an efficient method of deciding their coefficients. AntMan compresses RNN computation by combining benefits of structured sparsity and low rank decomposition. It consists of three components: i) localized group projections that sparsify matrix multiplications using block diagonal matrices, ii) group mixing that exchanges information across different local groups along the sequence of RNN computation, and iii) low rank approximation that uses SVD like decomposition to reduce the rank of the projection. By composing them, we construct a few variations of AntMan compression modules that exhibit varying degree of compression rate and accuracy impact. We also analyze the cost complexity of AntMan modules and discuss efficient implementation on commodity hardware such as CPUs using off-the-shelf BLAS libraries. AntMan reduces the computation and size of RNNs by replacing dense matrix-vector product (MV) with sparse but structured MV. It divides the input and output vectors into g local groups such that the elements of an output group is a weighted linear sum of the elements in the corresponding input group. Since output elements of one group only depend on the input elements of the corresponding group, we call it localized group projections (LGP). Mathematically, we replace the matrix vector product Ax with D g x, where D g is a block-diagonal matrix with g blocks. In an RNN cell computation, the hidden-state vector at time-step t − 1 is an input to the MV used to compute the hidden-state vector at time-step t. Therefore, using LGP to replace MV in RNN restricts the information flow within a single local group across multiple time steps of the RNN. This restriction reduces the expressibility of the RNN, potentially degrading accuracy. AntMan uses'group mixing' to address this issue. To facilitate the information flow across multiple localized groups along RNN sequence computation, AntMan multiplies the output (or input) vector of LGP with a square matrix, which we call mixing matrix. 
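Before describing the mixing variants, a minimal NumPy sketch of the localized group projection itself may help. The contiguous, equally sized grouping and the variable names below are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def lgp(blocks, x):
        """Localized group projection: y = D_g x with D_g block-diagonal.

        blocks : list of g arrays, block k has shape (m_k, n_k)
        x      : input vector of size sum(n_k)
        Each output group depends only on its own input group.
        """
        xs = np.split(x, np.cumsum([b.shape[1] for b in blocks])[:-1])
        return np.concatenate([b @ xk for b, xk in zip(blocks, xs)])

    # Example: m = n = 6, g = 3 groups -> three independent 2x2 projections,
    # i.e. 12 multiply-adds instead of 36 for the dense matrix-vector product.
    blocks = [np.random.randn(2, 2) for _ in range(3)]
    x = np.random.randn(6)
    y = lgp(blocks, x)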
We develop two types of mixing with varying memory and computational complexity -shuffle mix and dense mix -inspired by the shuffling layer used with group convolutions, or 1x1 convolutions used with depth-separable convolutions BID4.Shuffle mix: The shuffle-mix matrix is a permutation matrix, which evenly distributes the elements of the same group across the entire output vector across different groups. FIG0 shows the effect of shuffle mix following LGP. Mathematically, shuffle mix is equivalent to a transpose operation. If the output vector v ing from the block diagonal MV has m elements, we can represent the vector as a matrix O of size [g, m/g], where each row represents an output group computed from the corresponding input group. Shuffle mix simply transforms DISPLAYFORM0 Dense mix: This technique uses a dense square matrix for group mixing when the matrix in the MV is non-square. Mathematically, given Ax, where size of A is m x n, we can decompose it into M D g x, when m < n, or D g M x, when n < m, and M is a dense-mixing matrix of size m x m, or n x n, respectively. FIG0 shows an example of dense mix preceding LGP.Dense mix has added cost of the dense matrix vector multiply compared to shuffle mix (quadratic vs linear). However, unlike shuffle mix that simply permutes the elements of the output vector, dense mix takes a weighted linear combination, making it more general. It helps retain accuracy at the expense of additional computation. When combined with low-rank decomposition discussed next, dense mix provides high compression while maintaining accuracy, which we elaborate further in evaluation (Table 4). Low-rank decomposition such as SVD approximates a low-rank matrix-vector product Ax as P Qx, where A, P and Q are dense with shapes m x n, m x n r and n r x n, respectively, and n r is the reduced rank. We combine it with LGP by adding LGP-based sparsity on P and Q, further reducing computation. This combination is likely to obtain more compression than using either of the techniques alone because structured sparsity and low-rank decomposition operate in a complimentary fashion. In particular, low rank reduces the computation by factorizing A into smaller matrices P and Q, while LGP reduces computation by sparsifying these matrices without changing their dimensions. Composed from the three components, LGP, group mixing and low-rank decomposition, we construct variations of AntMan compression modules to address varying efficiency and accuracy demand across DL models. FIG0 shows three of such compression modules: (a) LGP-shuffle -LGP with shuffle mix; (b) LGP-dense -LGP with dense mix; (c) LowRank-LGP -low rank with LGP-dense. We elaborate the compression modules by taking FIG0 (c), LowRank-LGP, as an example. LowRank-LGP combines structured sparsity with low rank decomposition. First, it decomposes an MV into an SVD-like form, i.e., Ax ← P Qx, where A is a matrix of size m x n, P and Q are decomposed matrices of size m x n r and n r x n, and n r represents the reduced rank. Next, we replace P and Q using LGP-Dense, i.e., Ax DISPLAYFORM0 DISPLAYFORM1 This module, combining all three components of AntMan, exhibits the potential of achieving significantly higher cost reduction than using SVD alone, which we quantify shortly. LGP-Shuffle TAB0 discusses the reduction in computation and model size over the original Ax, where A is a matrix of size m x n. The third column reports the total number of multiply-add operations, which is also the size of weight matrix in the case of MV. 
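For concreteness, the shuffle-mix permutation and the LowRank-LGP composition described above can be sketched as follows, reusing the lgp helper from the previous sketch (the cost comparison started above continues after the snippet). The exact placement of the dense mixing matrix is an assumption, chosen to be consistent with the parameter counts discussed in the cost analysis, since the corresponding equation did not survive extraction.

    import numpy as np

    def shuffle_mix(v, g):
        """Shuffle mix: view v as a (g, m/g) matrix of groups, transpose and flatten,
        so that each new group contains one element from every old group."""
        m = v.shape[0]
        return v.reshape(g, m // g).T.reshape(m)

    def lowrank_lgp(P_blocks, M_dense, Q_blocks, x):
        """LowRank-LGP (sketch):  A x  ~  LGP_P( M_dense @ LGP_Q(x) ).

        Q_blocks : g blocks mapping the input (size n) to the reduced rank n_r
        M_dense  : (n_r, n_r) dense mixing matrix acting across the groups
        P_blocks : g blocks mapping the reduced rank n_r to the output (size m)
        """
        return lgp(P_blocks, M_dense @ lgp(Q_blocks, x))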
The final column represents the reduction in computation (that is equal to the reduction in model size) compared to the original MV. DISPLAYFORM2 We highlight two key messages: LGP-Dense reduces the total cost by ≈ max(m,n) min(m,n) when g max(m,n) min(m,n), i.e., the larger difference between m and n, the more reduction it gets. When g out and g in are large enough, LowRank-LGP can enable significantly higher cost reduction over SVD, while maintaining the same reduced rank. To see this, let's assume m = n, and g out = g in = g. In this case, the cost reduction from SVD is r/2, while the cost reduction from LowRank-LGP is r 2 g 2r+g. Now, if g ≥ r, then the cost reduction is at least r 2 /3, and it goes up to r 2 when g r. Therefore, the reduction in computational cost scales as O(r) for SVD, while it scales as O(r 2) for LowRank-LGP assuming g ≥ r. As a concrete example, consider a MV of size 1000x400, where the number of parameters and the number of multiply-add (MADD) operations in 400K. Using LGP-Shuffle with g = 10, we can reduce both the number of parameters and MADD operations to 40K. Using LGP-Dense with g = 10, we can reduce them to 200K (40K from LGP + 160K from dense mix). Using LowRank-LGP with g = 10 and r = 4, we can reduce the parameters and MADD operations to 1000 * 400 4 * 10 + 400 * 400 4 * 4 + 400 * 400 4 * 10, which is 24K. We develop efficient implementation of AntMan modules (LGP-Shuffle, LGP-Dense, LowRank-LGP) on CPUs to empower their usage in practice. The implementation consists of three building blocks: i) Regular matrix-vector multiply for dense mix, ii) shuffle-mix multiply and iii) blockdiagonal MVs. BLAS libraries such as Intel MKL already provides efficient implementation of matrix-vector multiplication. Shuffle mix is implemented efficiently as a matrix transpose operation as described in Section 3.1. The block-diagonal MV is viewed as multiple smaller MVs, each corresponding to one of the blocks in the block-diagonal matrix. With multicores, each of these blocks can be computed in parallel using OpenMP for parallelization and Intel MKL for MV computation. In summary, AntMan modules can be implemented efficiently on commodity hardware, such as CPUs, conveniently applicable to various devices on cloud and on edge. We observe that while training AntMan models directly on target labels alone does not generalize well on test data, using knowledge distillation or teacher-student training helps greatly on retaining accuracy. We use the original uncompressed model as the teacher, and train the compressed model (student) to imitate the output distribution of the teacher, in addition to training on the target labels. We describe how we apply and extend teacher-student training. We define the loss function of the compressed model as a weighted combination of three losses -the raw loss from the target labels, and the MSE and the KL divergence losses of the student's output distribution with respect to the teacher's corresponding output distribution: DISPLAYFORM0 where C target, C mse, C kl are the coefficient values corresponding to the target loss, MSE loss and KL divergence loss, respectively. S o, T o are the output distributions of student and teacher model, respectively, whereas T target is the target distribution. Deciding loss coefficients: The final performance of the compressed model significantly depends on the values of the loss coefficients, C target, C mse and C kl. 
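Before continuing with how these coefficients are chosen (picked up in the next paragraph), here is a minimal PyTorch-style sketch of the three-term objective in Eqn. 1. The reduction modes and the use of the teacher's softmax outputs as soft targets are assumptions for illustration, not the authors' exact implementation.

    import torch.nn.functional as F

    def antman_loss(student_logits, teacher_logits, targets,
                    c_target=1.0, c_mse=1.0, c_kl=1.0):
        """Weighted combination of label loss, teacher-MSE loss and teacher-KL loss.
        In practice the coefficients are scaled so the three terms have comparable
        magnitude (see the selection heuristic below)."""
        t_dist = F.softmax(teacher_logits, dim=-1)
        s_log_dist = F.log_softmax(student_logits, dim=-1)
        loss_target = F.cross_entropy(student_logits, targets)              # vs. true labels
        loss_mse = F.mse_loss(F.softmax(student_logits, dim=-1), t_dist)    # vs. teacher distribution
        loss_kl = F.kl_div(s_log_dist, t_dist, reduction='batchmean')       # KL divergence to the teacher
        return c_target * loss_target + c_mse * loss_mse + c_kl * loss_kl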
Searching for appropriate values for these coefficients via grid or random search is time and resource consuming. We develop an efficient method to decide them with the following intuition. The direction of the gradient updates to the model is dictated by the relative magnitudes of the individual losses. If one is significantly smaller than the other, then it would have minimal impact on the direction of the updates during training. Therefore, we want to scale each of the three losses such that the overall magnitude of each of the three terms in Eqn. 1 is roughly the same. To this end, we initially train the compressed model separately using each of the three losses and record the loss values once the model converges. Then we use these values as reference to identify loss coefficients such that each of the three terms is roughly the same. We use these coefficients to train the compressed model to optimize accuracy. We demonstrate the effectiveness of our approach by training a 10x compressed language model constructed by replacing the LSTMs in BID29 with LGP-Shuffle with g = 10. For this compressed model, the validation loss values at convergence when training separately with the three individual losses were: target = 4.110, MSE = 0.133 and KL = 0.004. Table 3 shows the test perplexity values (lower the better) obtained by training the compressed model with varying C mse and C KL, while fixing C target = 1. Note that the lowest perplexity is achieved by setting coefficient values of C target = 1, C mse = 30 and C KL = 1000. At these values, each term in Eqn. 1 is roughly equal to 4, demonstrating the effectiveness of our method. Table 3 also shows the benefits of combining three losses. Note that when C mse = 0, the best achievable perplexity is 121.97. Similarly, when C KL = 0, the best achievable perplexity is 75.61. However, combining all three gives the lowest perplexity of 74.69. Table 3: Different choices of coefficients vs test perplexities for student model with 10x computation reduction on the PTB dataset. We evaluate AntMan on three aspects. We use AntMan to obtain order(s) of magnitude computation reduction for language modeling and machine reading comprehension tasks while getting similar accuracy. We use AntMan to construct models with several times fewer parameters than the state-of-art models with the same accuracy targets. Not limited by theoretical speedup, we measure real speedup of AntMan on CPUs and observe super-linear computational efficiency on large RNNs, demonstrating attractive practical value of AntMan on commodity hardware. We evaluate the effectiveness of AntMan on reducing model computation: on BID29 model for word level completion task, we obtain 50x reduction without sacrificing any accuracy; on BID24 model for machine reading compression task, we obtain up to 25x reduction with less than 1pt drop on F1 score. Word level completion task predicts the next word given a partial input sequence. We use Penn Tree Bank(PTB) dataset BID15 ) that consists of 929k training words, 73k validation words and 82k test words. As the teacher model, we chose the model in BID29 consisting of 2 layered LSTMs each with hidden dimension of 1500. For the student model, we replace all the MVs in the LSTMs with LGP-Shuffle, and use g = 10 to g = 100 groups. We do not use any low-rank decomposition for this model. TAB2 shows the perplexity values of the compressed models for different levels of computation reductions. 
Matching the perplexity of the original model, AntMan (g = 50) achieves 50x computation reduction. With g = 10, AntMan achieves 10x computation reduction while 3pt better perplexity. With g = 100, AntMan achieves 100x computation reduction with only 1pt loss in perplexity. In addition, comparing with the state-of-art compressed model in BID26 using ISS, AntMan reduces computations further by 5-10x under comparable test perplexities. MRC tasks have gained significant popularity in last few years within NLP and computer vision communities. The models answer a query about a given context paragraph, evaluated based on exact match (EM) and F1 score (higher the better).Dataset: We use Stanford Question Answering Dataset (SQuAD) BID22, which consists of a large set of Wikipedia articles and more than 100,000 questions. The answer to every question is always a small excerpt of the article. Teacher Model: We chose our teacher model as the BiDirectional Attention Flow Model (BiDAF) BID24, which is a hierarchical multi-stage model with 6 layers. We focus on compressing the layers having RNNs, which are also the most computationally expensive ones. Specifically, the modeling layer uses 2 layers of bi-directional LSTMs, denoted by ModFwd1, ModBwd1, ModFwd2, ModBwd2, while the output layer has a single bi-directional LSTM, denoted by OutFwd, OutBwd. Compressed Models: We created three compressed models using AntMan with different levels of compression to replace the LSTMs in the BiDAF model: i) LGP-Shuffle (g im = 10, g hm = 4), Table 4: Comparision of computation reduction between AntMan and ISS for BiDAF ii) LowRank-LGP 1 (g im = 10, g hm = 5, r im = 4, r hm = 2), and iii) LowRank-LGP 2 (g im = 5, g hm = 5, r im = 8, r hm = 4). Here, g im and g hm refers to the number of groups, and r im and r hm refers to the low-rank reduction factors for input and hidden MVs of the LSTMs, respectively. The computation reduction for each LSTM is shown in Table 4.Results: Table 4 shows that both LGP-Shuffle and LowRank-LGP achieve significant computation reduction over the original; their reduction is much higher than the existing work ISS BID26 with better EM and F1 scores. ISS compresses an RNN by reducing hidden dimension. The amount of computation per LSTM step for ISS is proportional to (i + h/r) * (h/r), where i is the input dimension, h is the hidden dimension, and 1/r is fraction of hidden dimension removed by ISS. When i h, the compression is proportional to r. In BiDAF, i h in the first modeling layers (800 vs 100). Therefore, compression in these layers is proportional to the reduction in the hidden dimension. However, h = 100 is already very small. By reducing it further, ISS experiences near 2pt drop in F1 score with less than 2.5x compression on the first modeling layers. LGP-Shuffle uses structured sparsity to compress both the input and hidden MVs without reducing hidden dimension. For a comparable EM and F1 scores to ISS, LGP-shuffle achieves significantly higher reduction on the first modeling layers, while doing modestly better on all other layers. LowRank-LGP improves further upon LGP-Shuffle, increasing accuracy by leveraging dense mix to enrich the connection among multiple localized groups, and reducing computation by combining low-rank decomposition. It achieves significantly higher computation reduction across all layers than both ISS and LGP-Shuffle, while achieving nearly 1pt higher F1 scores. 
Different applications have various accuracy requirements while the devices they are running on also impose different constraints on the model size. For given accuracy targets, smaller models are desired; and for given model sizes, higher accuracy is desired. We show that AntMan improves the Pareto curve of model size against accuracy, providing more compressed models with the same accuracy targets of several recent models at word level completion task. Teacher Model: We use the state-of-art language model as of , AWD-LSTM BID16, consisting of 3-layer LSTMs with 1150 hidden units and an embedding size of 400.Compressed Models: Our compressed models replace all the MVs in the LSTMs of AWD-LSTM with AntMan (LGP-Shuffle with g = 5 to g = 50 groups). FIG3 compares AntMan with other models. LGP-Shuffle (g = 5) achieves perplexity of 63 with 5x fewer LSTM parameters and 3x fewer total parameters than NAS-Cell BID33, the state-of-art model obtaining this range of accuracy. LGP-Shuffle (g = 10) achieves perplexity of 66 with 10x fewer LSTM parameters and 4x fewer total parameters than Var-RHN BID32, and LGP-Shuffle (g = 50) achieves perplexity of 74 with 50x fewer LSTM parameters and 5x fewer total parameters than Var-LSTM-avg1 BID10. These notably improve the Pareto curve of the task by reducing model sizes against different accuracy targets. 1 1 We did not aim to reproduce the state-of-art perplexity (57.3px at 24M parameters) of AWD-LSTM model. AWD-LSTM uses various regularization techniques, each with its own set of hyper-parameters, requiring extensive hyper-parameter tuning to reach its state-of-art perplexity. The AntMan presented in FIG3 was achieved without any regularization. Trying to match AWD-LSTM perplexity using AntMan with regularization could be an exercise in large scale hyper-parameter tuning, which is beyond the scope of this paper. Table 5: Measured speedup on CPU using LGP-Shuffle and LowRank-LGP compared to the theoretical speedup for various input and hidden dimension. For LGP-Shuffle, we use g = 2 and g = 10 to get a theoretical speedup of 2x and 10x. For LowRank-LGP, we use g = 2 and r = 2, and g = 10, and r = 2 to get a speedup of 2.66x and 8x, respectively. By using efficient implementation of AntMan described in Section 3.6, we turn the theoretical speedup (computation reduction) to actual speedup (execution time reduction) in practice. Furthermore, we show that the actual speedup can be significantly higher than the theoretical speedup for large problem sizes. The of our evaluation is shown in Table 5.Problem Configuration: We measure the execution time of LSTMs with and without AntMan varying input and hidden dimensions from 100 to 1600. We use a batch size of 1, which is common in serving scenarios, and a sequence length of 100. We use an efficient implementation as discussed in Elsen: Fuse 4 input MVs across all time steps into a single large matrix multiplication, and fuse 4 hidden MVs within each time step. The experiments are run on a single core of Intel CPU E5-2650 v4 @ 2.20GHz. We use just a single core for two reasons: i) to emulate the limited resource availability in many use cases such as laptops and smart phones, ii) performance of multi-core RNN is highly implementation dependent (Zhang et al.) even for regular RNNs and therefore is difficult to make apple-to-apple comparison. We use Intel MKL library for GEMM implementation. 
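As an illustration of the fusion just described (all four input projections of every time step collapsed into one large GEMM), consider the following NumPy sketch. Shapes and names are assumed for illustration; this is not the benchmarked MKL implementation, and the hidden-state projections must still be computed step by step.

    import numpy as np

    def fused_input_projections(X, W_i, W_f, W_g, W_o):
        """X: (seq_len, input_dim); each W_*: (hidden_dim, input_dim).
        Stacking the four gate matrices turns 4*seq_len small MVs into one GEMM."""
        W = np.concatenate([W_i, W_f, W_g, W_o], axis=0)   # (4*hidden_dim, input_dim)
        Z = X @ W.T                                        # (seq_len, 4*hidden_dim)
        return np.split(Z, 4, axis=1)                      # per-gate pre-activations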
Discussion: Table 5 shows that, for very small problem size, AntMan offers no speedup regardless of the reduction in the computation. This is expected as GEMM performance gets worse as the problem size decreases. However, as the problem size is already very small, memory reduction or performance improvement is less crucial for such problems. For medium sized problems, AntMan offers good actual speedup compared to the theoretical speedup. Notice that unlike unstructured sparsity, where significant levels of sparsity is necessary to see actual performance improvement, with AntMan, even a modest 50% sparsity or 2x computation reduction in significant performance gain at problem size 400 and 800. Furthermore, for large problem sizes the actual speedup is significantly larger than the theoretical speedup. At problem size of 1200 and 1600, the weight matrices in the LSTM are too large to fit in L3 cache (30 MB in this case), thus spilling into memory. These LSTMs have much lower efficiency as the memory bandwidth of a CPU is much lower than the L3 cache bandwidth. By reducing the memory footprint, AntMan-based LSTM fits in L3 cache, leading to an actual speed up that is considerably higher than the theoretical speedup. These demonstrate attractive practical value of AntMan on commodity hardware. We develop AntMan, combining structured sparsity and low-rank decomposition, to reduce the computation, size and execution time of RNN models by order(s) of magnitude while achieving similar accuracy. We hope its compression efficiency and effectiveness would help unblock and enable many great RNN-based models deployed in practice. We discuss and compare AntMan with several compression techniques as below. Quantization: 16 and 8-bit quantization (original 32-bit) can be supported fairly easily on commodity hardware, ing in a maximum compression of 4x. Even more aggressive quantization (e.g., 2-7 bit) hardly provides additional computational benefit because commodity hardware does not support those in their instruction set, while 1-bit quantization does not offer comparable accuracy. In comparison, we demonstrate that AntMan achieves up to 100x reduction in computation without loss in accuracy. Moreover, quantization can be applied to AntMan to further reduce the computation, and vice versa, as quantization and AntMan are complementary techniques. Pruning: Pruning can be used to generate both unstructured and structured sparsity. The former is not computationally efficient while the latter requires specialized implementation for efficient execution. While we did not present pruning in the paper, we did try out techniques on both PTB and BiDAF models to generate random sparsity as well as blocked sparsity. In both cases, we were able to get more that 10x reduction in computation even in the absence of Knowledge distillation. Therefore pruning provides excellent computation reduction. However, as discussed in the paper, those theoretical computational reductions cannot be efficiently converted into practical performance gains: Unstructured sparsity ing from pruning suffers from poor computation efficiency; a 10x theoretical reduction leads to less than 4x improvement in performance while AntMan achieves 30x performance gain with 10x reduction for PTB like models. TAB7 It is possible to achieve structured sparsity such as block sparsity through pruning. However, structured sparsity requires implementing specialized kernels to take advantage of the computation reduction. 
Its efficiency greatly depends on the implementation, and in general is far from the theoretical computation reduction. On the contrary both ISS and AntMan achieve good computation reduction, and can be efficiently executed using readily available BLAS libraries such as Intel MKL ing in super linear speedups as shown in the paper. Direct Design: We compared AntMan with smaller RNN models (with smaller hidden dimension) trained using the larger teacher model. Our show that for the same level of compression AntMan achieves much higher accuracy. TAB6 SVD RNN:We constructed compressed models by replacing matrix-multiplication with SVD of various rank, and trained the SVD based models using knowledge distillation. Once again, we find that for the same level of compression, AntMan achieves much higher accuracy than SVD. TAB6 Block Tensor Decomposition (BTD): BTD is designed to compress RNNs whose inputs are produced by convolution based models, and contain certain redundancies. AntMan, on the other hand, is generic to all RNN based models. Also, BTD is designed to compress only the input vector and not the hidden vectors. This hinders the performance of BTD over a range of RNNs, where the hidden vectors are also large. Here, we compare the performance of AntMan with ISS, without using any knowledge distillation. Please note that knowledge distillation is part of the training process for AntMan, but it is not for ISS. Nevertheless, it is interesting to see how AntMan performs in the absence of a teacher. When trained without knowledge distillation, our experiments show that AntMan and ISS have complimentary strengths. On the PTB dataset, with a 10x compute reduction, AntMan does not generalize well without a teacher, while ISS incurs less than 1pt loss in perplexity compared to the original model. This is demonstrated by the first row and column in Table 3, and the third row in TAB2. On the contrary, for the BiDAF, AntMan incurs less than 1pt reduction in F1 score for nearly 10x compute reduction 2, while ISS incurs nearly 2pt reduction in F1 score with less than 5x compute reduction on average. This is shown in TAB9.AntMan can successfully compress BiDAF, while ISS fails because ISS compresses an LSTM by effectively reducing its hidden dimension, while AntMan preserves the hidden dimension size. The LSTMs in the BiDAF model have large input dimensions making them computationally expensive, but they have very small hidden dimensions. Therefore, reducing the already small hidden dimension in significant loss of accuracy. On the contrary, the PTB model has large input as well as hidden dimensions, allowing ISS to work effectively. Optimizers All the models in the evaluation section were trained using ADAM optimizer. Hyperparameters For both PTB and SQUAD, all the hyper-parameters for training the respective models were set as default from their standard and recommended implementations on github (PTB 3, BiDAF 4).
Reducing computation and memory complexity of RNN models by up to 100x using sparse low-rank compression modules, trained via knowledge distillation.
1,711
scitldr
Graph-structured data such as social networks, functional brain networks, gene regulatory networks, communications networks have brought the interest in generalizing deep learning techniques to graph domains. In this paper, we are interested to design neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks. Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced. In this work, we want to compare rigorously these two fundamental families of architectures to solve graph learning tasks. We review existing graph RNN and ConvNet architectures, and propose natural extension of LSTM and ConvNet to graphs with arbitrary size. Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. Numerical show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than variational (non-learning) techniques. Finally, the most effective graph ConvNet architecture uses gated edges and residuality. Residuality plays an essential role to learn multi-layer architectures as they provide a 10% gain of performance. Convolutional neural networks of BID20 and recurrent neural networks of BID17 are deep learning architectures that have been applied with great success to computer vision (CV) and natural language processing (NLP) tasks. Such models require the data domain to be regular, such as 2D or 3D Euclidean grids for CV and 1D line for NLP. Beyond CV and NLP, data does not usually lie on regular domains but on heterogeneous graph domains. Users on social networks, functional time series on brain structures, gene DNA on regulatory networks, IP packets on telecommunication networks are a a few examples to motivate the development of new neural network techniques that can be applied to graphs. One possible classification of these techniques is to consider neural network architectures with fixed length graphs and variable length graphs. In the case of graphs with fixed length, a family of convolutional neural networks has been developed on spectral graph theory by BID6. The early work of BID5 proposed to formulate graph convolutional operations in the spectral domain with the graph Laplacian, as an analogy of the Euclidean Fourier transform as proposed by BID14. This work was extended by BID16 to smooth spectral filters for spatial localization. BID9 used Chebyshev polynomials to achieve linear complexity for sparse graphs, BID21 applied Cayley polynomials to focus on narrow-band frequencies, and BID27 dealt with multiple (fixed) graphs. Finally, BID19 simplified the spectral convnets architecture using 1-hop filters to solve the semi-supervised clustering task. For related works, see also the works of, and references therein. For graphs with variable length, a generic formulation was proposed by BID12; BID29 based on recurrent neural networks. The authors defined a multilayer perceptron of a vanilla RNN. This work was extended by BID22 using a GRU architecture and a hidden state that captures the average information in local neighborhoods of the graph. 
The work of BID30 introduced a vanilla graph ConvNet and used this new architecture to solve learning communication tasks. BID25 introduced an edge gating mechanism in graph ConvNets for semantic role labeling. Finally, BID4 designed a network to learn nonlinear approximations of the power of graph Laplacian operators, and applied it to the unsupervised graph clustering problem. Other works for drugs design, computer graphics and vision are presented by BID10; BID1; BID26.In this work, we study the two fundamental classes of neural networks, RNNs and ConvNets, in the context of graphs with arbitrary length. Section 2 reviews the existing techniques. Section 3 presents the new graph NN models. Section 4 reports the numerical experiments. Generic formulation. Consider a standard RNN for word prediction in natural language processing. Let h i be the feature vector associated with word i in the sequence. In a regular vanilla RNN, h i is computed with the feature vector h j from the previous step and the current word x i, so we have: DISPLAYFORM0 The notion of neighborhood for regular RNNs is the previous step in the sequence. For graphs, the notion of neighborhood is given by the graph structure. If h i stands for the feature vector of vertex i, then the most generic version of a feature vector for a graph RNN is DISPLAYFORM1 where x i refers to a data vector and {h j : j → i} denotes the set of feature vectors of the neighboring vertices. Observe that the set {h j} is unordered, meaning that h i is intrinsic, i.e. invariant by vertex re-indexing (no vertex matching between graphs is required). Other properties of f G-RNN are locality as only neighbors of vertex i are considered, weight sharing, and such vector is independent of the graph length. In summary, to define a feature vector in a graph RNN, one needs a mapping f that takes as input an unordered set of vectors {h j}, i.e. the feature vectors of all neighboring vertices, and a data vector x i, FIG0 (a).We refer to the mapping f G-RNN as the neighborhood transfer function in graph RNNs. In a regular RNN, each neighbor as a distinct position relatively to the current word (1 position left from the center). In a graph, if the edges are not weighted or annotated, neighbors are not distinguishable. The only vertex which is special is the center vertex around which the neighborhood is built. This explains the generic formulation of Eq.. This type of formalism for deep learning for graphs with variable length is described by BID29 BID11 with slightly different terminology and notations. Graph Neural Networks of BID29. The earliest work of graph RNNs for arbitrary graphs was introduced by BID12; BID29. The authors proposed to use a vanilla RNN with a multilayer perceptron to define the feature vector h i: DISPLAYFORM2 with DISPLAYFORM3 and σ is the sigmoid function, A, B, U, V are the weight parameters to learn. Minimization of Eq. does not hold a closed-form solution as the dependence computational graph of the model is not a directed acyclic graph (DAG). BID29 proposed a fixed-point iterative scheme: for t = 0, 1, 2,... DISPLAYFORM4 The iterative scheme is guaranteed to converge as long as the mapping is contractive, which can be a strong assumption. Besides, a large number of iterations can be computational expensive. Gated Graph Neural Networks of BID22. In this work, the authors use the gated recurrent units (GRU) of BID7: DISPLAYFORM5 As the minimization of Eq. 
does not have an analytical solution, BID22 designed the following iterative scheme: DISPLAYFORM6 DISPLAYFORM7 and DISPLAYFORM8 where is the Hadamard point-wise multiplication operator. This model was used for NLP tasks by BID22 and also in quantum chemistry by BID11 for fast organic molecule properties estimation, for which standard techniques (DFT) require expensive computational time. Tree-Structured LSTM of BID31. The authors extended the original LSTM model of BID17 to a tree-graph structure: DISPLAYFORM9 where C(i) refers the set of children of node i. DISPLAYFORM10 Unlike the works of BID29 BID22, Tree-LSTM does not require an iterative process to update its feature vector h i as the tree structure is also a DAG as original LSTM. Consequently, the feature representation can be updated with a recurrent formula. Nevertheless, a tree is a special case of graphs, and such recurrence formula cannot be directly applied to arbitrary graph structure. A key property of this model is the function f ij which acts as a gate on the edge from neighbor j to vertex i. Given the task, the gate will close to let the information flow from neighbor j to vertex i, or it will open to stop it. It seems to be an essential property for learning systems on graphs as some neighbors can be irrelevant. For example, for the community detection task, the graph neural network should learn which neighbors to communicate (same community) and which neighbors to ignore (different community). In different contexts, BID8 added a gated mechanism inside the regular ConvNets in order to improve language modeling for translation tasks, and van den Oord et al. FORMULA1 considered a gated unit with the convolutional layers after activation, and used it for image generation. Generic formulation. Consider now a classical ConvNet for computer vision. Let h ij denote the feature vector at layer associated with pixel (i, j). In a regular ConvNet, h +1 ij is obtained by applying a non linear transformation to the feature vectors h i j for all pixels (i, j) in a neighborhood of pixel (i, j). For example, with 3 × 3 filters, we would have: DISPLAYFORM0 In the above, the notation {h i j : |i − i | ≤ 1 and |j − j | ≤ 1} denote the concatenation of all feature vectors h i j belonging to the 3 × 3 neighborhood of vertex (i, j). In ConvNets, the notion of neighborhood is given by the euclidian distance. As previously noticed, for graphs, the notion of neighborhood is given by the graph structure. Thus, the most generic version of a feature vector h i at vertex i for a graph ConvNet is DISPLAYFORM1 where {h j : j → i} denotes the set of feature vectors of the neighboring vertices. In other words, to define a graph ConvNet, one needs a mapping f G-CNN taking as input a vector h i (the feature vector of the center vertex) as well as an unordered set of vectors {h j} (the feature vectors of all neighboring vertices), see FIG0 (b). We also refer to the mapping f G-CNN as the neighborhood transfer function. In a regular ConvNet, each neighbor as a distinct position relatively to the center pixel (for example 1 pixel up and 1 pixel left from the center). As for graph RNNs, the only vertex which is special for graph ConvNets is the center vertex around which the neighborhood is built. CommNets of BID30. The authors introduced one of the simplest instantiations of a graph ConvNet with the following neighborhood transfer function: DISPLAYFORM2 where denotes the layer level, and ReLU is the rectified linear unit. 
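Since the equation above was lost in extraction, the following NumPy sketch shows one standard instantiation consistent with the description: a linear transform of the centre vertex plus an aggregation over neighbour transforms, followed by ReLU. The exact form (and whether the neighbour aggregation is a sum or a mean) is an assumption; names and shapes are illustrative.

    import numpy as np

    def vanilla_graph_conv(h, neighbors, U, V):
        """One layer of a vanilla graph ConvNet.

        h         : (num_nodes, d) feature vectors at layer l
        neighbors : list where neighbors[i] holds the indices j with j -> i
        U, V      : (d_out, d) learned weight matrices shared over all vertices
        """
        h_next = np.empty((h.shape[0], U.shape[0]))
        for i, nbrs in enumerate(neighbors):
            agg = h[nbrs].sum(axis=0)                       # a mean is an equally plausible choice
            h_next[i] = np.maximum(0.0, U @ h[i] + V @ agg)  # ReLU
        return h_next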
We will refer to this architecture as the vanilla graph ConvNet. BID30 used this graph neural network to learn the communication between multiple agents to solve multiple tasks like traffic control. Syntactic Graph Convolutional Networks of BID25. The authors proposed the following transfer function: DISPLAYFORM3 where η ij act as edge gates, and are computed by: DISPLAYFORM4 These gated edges are very similar in spirit to the Tree-LSTM proposed in BID31. We believe this mechanism to be important for graphs, as they will be able to learn what edges are important for the graph learning task to be solved. Proposed Graph LSTM. First, we propose to extend the Tree-LSTM of BID31 to arbitrary graphs and multiple layers: DISPLAYFORM5 As there is no recurrent formula is the general case of graphs, we proceed as BID29 and use an iterative process to solve Eq.: At layer, for t = 0, 1,..., T DISPLAYFORM6 DISPLAYFORM7 In other words, the vector h We leverage the vanilla graph ConvNet architecture of BID30, Eq. FORMULA13, and the edge gating mechanism of , Eq., by considering the following model: DISPLAYFORM0 where h =0 i = x i, ∀i, and the edge gates η ij are defined in Eq.. This model is the most generic formulation of a graph ConvNet (because it uses both the feature vector h i of the center vertex and the feature vectors h j of neighboring vertices) with the edge gating property. Residual Gated Graph ConvNets. In addition, we formulate a multi-layer gated graph ConvNet using residual networks (ResNets) introduced by BID15. This boils down to add the identity operator between successive convolutional layers: DISPLAYFORM1 As we will see, such multi-layer strategy work very well for graph neural networks. We consider the subgraph matching problem presented by BID29, see FIG3 (a). The goal is to find the vertices of a given subgraph P in larger graphs G k with variable sizes. Identifying similar localized patterns in different graphs is one of the most basic tasks for graph neural networks. The subgraph P and larger graph G k are generated with the stochastic block model (SBM), see for example BID0. A SBM is a random graph which assigns communities to each node as follows: any two vertices are connected with the probability p if they belong to the same community, or they are connected with the probability q if they belong to different communities. For all experiments, we generate a subgraph P of 20 nodes with a SBM q = 0.5, and the signal on P is generated with a uniform random distribution with a vocabulary of size 3, i.e. {0, 1, 2}. Larger graphs G k are composed of 10 communities with sizes randomly generated between 15 and 25. The SBM of each community is p = 0.5. The value of q, which acts as the noise level, is 0.1, unless otherwise specified. Besides, the signal on G k is also randomly generated between {0, 1, 2}. Inputs of all neural networks are the graphs with variable size, and outputs are vertex classification vectors of input graphs. Finally, the output of neural networks are simple fully connected layers from the hidden states. BID25. The learning schedule is as follows: the maximum number of iterations, or equivalently the number of randomly generated graphs with the attached subgraph is 5,000 and the learning rate is decreased by a factor 1.25 if the loss averaged over 100 iterations does not decrease. The loss is the cross-entropy with 2 classes (the subgraph P class and the class of the larger graph G k) respectively weighted by their sizes. 
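A minimal sketch of how such a subgraph matching instance can be generated is given below. Attaching the pattern P to G_k with the same cross-community probability q, and regenerating P for every instance, are simplifying assumptions of this illustration rather than the exact data pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbm_adjacency(sizes, p, q):
    """Stochastic block model: vertices in the same community are connected
    with probability p, vertices in different communities with probability q."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, p, q)
    A = (rng.random((n, n)) < prob).astype(float)
    A = np.triu(A, 1)              # keep the upper triangle, no self-loops
    return A + A.T, labels

def subgraph_matching_instance(p=0.5, q=0.1, vocab=3):
    """One training graph: a 20-node pattern P attached to a 10-community graph G_k."""
    sizes = [20] + list(rng.integers(15, 26, size=10))   # P first, then G_k's communities
    A, comm = sbm_adjacency(sizes, p, q)
    signal = rng.integers(0, vocab, size=comm.size)      # vertex signal in {0, 1, 2}
    target = (comm == 0).astype(int)                     # 1 on the pattern P, 0 elsewhere
    return A, signal, target

A, x, y = subgraph_matching_instance()
print(A.shape, x.shape, int(y.sum()))   # e.g. (220, 220) (220,) 20
```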
The accuracy is the average of the diagonal of the normalized confusion matrix w.r.t. the cluster sizes (the confusion matrix measures the number of nodes correctly and badly classified for each class). We also report the time for a batch of 100 generated graphs. The choice of the architectures will be given for each experiment. All algorithms are optimized as follow. We fix a budget of parameters of B = 100K and a number of layers L = 6. The number of hidden neurons H for each layer is automatically computed. Then we manually select the optimizer and learning rate for each architecture that best minimize the loss. For this task, BID22; BID30; BID25 and our gated ConvNets work well with Adam and learning rate 0.00075. Graph LSTM uses SGD with learning rate 0.075. Besides, the value of inner iterative steps T for graph is 3.The first experiment focuses on shallow graph neural networks, i.e. with a single layer L = 1. We also vary the level of noise, that is the probability q in the SBM that connects two vertices in two different communities (the higher q the more mixed are the communities). The hyper-parameters are selected as follows. Besides L = 1, the budget is B = 100K and the number of hidden neurons H is automatically computed for each architecture to satisfy the budget. First row of FIG6 FORMULA1 ) performs much better, but they also take more time than the graph ConvNets architectures we propose, as well as BID30; BID25. As expected, all algorithms performances decrease when the noise increases. The second experiment demonstrates the importance of having multiple layers compared to shallow networks. We vary the number of layers L = {1, 2, 4, 6, 10} and we fix the number of hidden neurons to H = 50. Notice that the budget is not the same for all architectures. Second row of FIG6 reports the accuracy and time w.r.t. L (middle figure is a zoom in the left figure). All models clearly benefit with more layers, but RNN-based architectures see their performances decrease for a large number of layers. The ConvNet architectures benefit from large L values, with the proposed graph ConvNet performing slightly better than BID30; BID25. Besides, all ConvNet models are faster than RNN models. In the third experiment, we evaluate the algorithms for different budgets of parameters B = {25K, 50K, 75K, 100K, 150K}. For this experiment, we fix the number of layers L = 6 and the number of neurons H is automatically computed given the budget B. The are reported in the third row of FIG6. For this task, the proposed graph ConvNet best performs for a large budget, while being faster than RNNs. We also show the influence of hyper-parameter T for BID22 and the proposed graph LSTM. We fix H = 50, L = 3 and B = 100K. FIG7 reports the for T = {1, 2, 3, 4, 6}. The T value has an undesirable impact on the performance of graph LSTM. Multi-layer BID22 is not really influenced by T. Finally, the computational time naturally increases with larger T values. In this section, we consider the semi-supervised clustering problem, see FIG3 (b). This is also a standard task in network science. For this work, it consists in finding 10 communities on a graph given 1 single label for each community. This problem is more discriminative w.r.t. to the architectures than the previous single pattern matching problem where there were only 2 clusters to find (i.e. 50% random chance). For clustering, we have 10 clusters (around 10% random chance). 
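Before turning to this second task, the following sketch recaps the residual gated graph ConvNet layer defined above (edge gates plus residual connection). The dense-adjacency form and the square weight matrices are assumptions made for readability, not the reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual_graph_conv(H, Adj, U, V, A, B):
    """Residual gated graph ConvNet layer:
        eta_ij = sigmoid(A h_i + B h_j)                       (edge gates)
        h_i'   = h_i + ReLU(U h_i + sum_{j->i} eta_ij * (V h_j))
    H: (n, d) features, Adj: (n, n) adjacency, U, V, A, B: (d, d) weights
    (square weights so that the residual connection type-checks)."""
    Hi = (H @ A)[:, None, :]          # contribution of the center vertex i
    Hj = (H @ B)[None, :, :]          # contribution of the neighbor j
    eta = sigmoid(Hi + Hj)            # (n, n, d) edge gates
    msgs = eta * (H @ V)[None, :, :]  # gated messages eta_ij * (V h_j)
    agg = (Adj[:, :, None] * msgs).sum(axis=1)   # sum over the neighbors j of i
    return H + np.maximum(H @ U + agg, 0.0)      # residual + ReLU

rng = np.random.default_rng(1)
Adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W = [rng.normal(size=(4, 4)) * 0.1 for _ in range(4)]
print(gated_residual_graph_conv(H, Adj, *W).shape)   # (3, 4)
```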
As in the previous section, we use SBM to generate graphs of communities with variable length. The size for each community is randomly generated between 5 and 25, and the label is randomly selected in each community. Probability p is 0.5, and q depends on the experiment. For this task, BID22; BID30; BID25 and the proposed gated ConvNets work well with Adam and learning rate 0.00075. Graph LSTM uses SGD with learning rate 0.0075. The value of T for graph is 3.The same set of experiments as in the previous task are reported in FIG8. ConvNet architectures get clearly better than RNNs when the number of layers increase (middle row), with the proposed Gated ConvNet outperforming the other architectures. For a fixed number of layers L = 6, our graph ConvNets and BID25 best perform for all budgets, while paying a reasonable computational cost. Next, we report the learning speed of the models. We fix L = 6, B = 100K with H being automatically computed to satisfy the budget. FIG10 reports the accuracy w.r.t. time. The ConvNet architectures converge faster than RNNs, in particular for the semi-supervised task. To close this study, we are interested in comparing learning based approaches to non-learning variational ones. To this aim, we solve the variational Dirichlet problem with labeled and unlabelled data as proposed by BID13. We run 100 experiments and report an average accuracy of 45.37% using the same setting as the learning techniques (one label per class). The performance of the best learning model is 82%. Learning techniques produce better performances with a different paradigm as they use training data with ground truth, while variational techniques do not use such information. The downside is the need to see 2000 training graphs to get to 82%. However, when the training is done, the test complexity of these learning techniques is O(E), where E is the number of edges in the graph. This is an advantage over the variational Dirichlet model that solves a sparse linear system of equations with complexity O(E 3/2), see BID24. This work explores the choice of graph neural network architectures for solving learning tasks with graphs of variable length. We developed analytically controlled experiments for two fundamental graph learning problems, that are subgraph matching and graph clustering. Numerical experiments showed that graph ConvNets had a monotonous increase of accuracy when the network gets deeper, unlike graph RNNs for which performance decreases for a large number of layers. This led us to consider the most generic formulation of gated graph ConvNets, Eq.. We also explored the benefit of residuality for graphs, Eq.. Without residuality, existing graph neural networks are not able to stack more than a few layers. This makes this property essential for graph neural networks, which receive a 10% boost of accuracy when more than 6 layers were stacked. Future work will focus on solving domain-specific problems in chemistry, physics, and neuroscience.
We compare graph RNNs and graph ConvNets, and we consider the most generic class of graph ConvNets with residuality.
Complex-value neural networks are not a new concept, however, the use of real-values has often been favoured over complex-values due to difficulties in training and accuracy of . Existing literature ignores the number of parameters used. We compared complex- and real-valued neural networks using five activation functions. We found that when real and complex neural networks are compared using simple classification tasks, complex neural networks perform equal to or slightly worse than real-value neural networks. However, when specialised architecture is used, complex-valued neural networks outperform real-valued neural networks. Therefore, complex–valued neural networks should be used when the input data is also complex or it can be meaningfully to the complex plane, or when the network architecture uses the structure defined by using complex numbers. In recent years complex numbers in neural networks are increasingly frequently used. ComplexValued neural networks have been sucessfully applied to a variety of tasks specifically in signal processing where the input data has a natural interpretation in the complex domain. In most publications complex-valued neural networks are compared to real-valued architectures. We need to ensure that these architectures are comparable in their ability to approximate functions. A common metric for their capacity are the number of real-valued parameters. The number of parameters of complex-valued neural networks are rarely studied aspects. While complex numbers increase the computational complexity, their introduction also assumes a certain structure between weights and input. Hence, it is not sufficient to increase the number of parameters. Even more important than in real-valued networks is the choice of activation function for each layer. We test 5 functions: identity or no activation function, rectifier linear unit, hyperbolic tangent, magnitude, squared magnitude. This paper explores the performance of complex-valued multi-layer perceptrons (MLP) with varying depth and width in consideration of the number of parameters and choice of activation function on benchmark classification tasks. In section 2 we will give an overview of the past and current developments in the applications of complex-valued neural networks. We shortly present the multi-layer perceptron architecture in section 3 using complex numbers and review the building blocks of complex-valued network. In section 4 we consider the multi-layer perceptron with respect to the number of real-valued parameters in both the complex and real case. We construct complex MLPs with the same number of units in each layer. We propose two methods to define comparable networks: A fixed number of real-valued neurons per layer or a fixed budget of real-valued parameters. In the same section we also consider the structure that is assumed by introducing complex numbers into a neural network. We present the activation function to be used in our experiments in section 5. In section 6 we present our experiments and their settings. Section 7 discuss the of different multi-layer perceptrons on MNIST digit classification, CIFAR-10 image classification, CIFAR-100 image classification, Reuters topic classification and bAbI question answering. We identify a general direction of why and how to use complex-valued neural networks. The idea of artificial neural networks with complex-valued input, complex-valued weights and complex-valued output was proposed in the 1970s BID0. 
A complex-valued backpropogation algorithm to train complex multi-layer networks was proposed in the 1990s by several authors BID2 BID10 BID4. In the 2000s complex neural networks, like real-valued neural networks, have been successfully applied to a variety of tasks. These tasks included the processing and analysis of complex-valued data or data with an intuitive mapping to complex numbers. Particularly, signals in their wave form were used as input data to complex-valued neural networks BID7 ).Another natural application of complex numbers are complex convolutions BID3, since they have an application in both image and signal processing. While real convolutions are widely used in deep neural networks for image processing, complex convolution can replace realvalued convolutions BID14 BID5 BID12 BID6.The properties of complex numbers and matrices introduce constraints into deep learning models. Introduced by BID1 and developed further by recurrent networks, which constrain their weights to be unitary, reduce the impact of the vanishing or exploding gradient problem. More recently complex numbers have been (re)discovered by a wider audience and used in approaches to other tasks like embedding learning BID15 BID13, knowledge base completion BID16 or memory networks BID9.Despite their success in signal processing tasks, complex-valued neural networks have been less popular than their real-valued counter-parts. This may be due to training and reports of varying in related tasks. The training process and architecture design are less intuitive, which stems from difficulties in differentiability of activation functions in the complex plane (; BID8 BID11).An aspect that has received little attention is an appropriate comparison of real-and complex-valued neural networks. Many publications ignore the number of parameters all together , consider only the number of parameters of the entire model or do not distinguish in complex-or real-valued parameters . While the latter is most confusing for the reader, all three problems lead to an inappropriate comparison of the overall performance. There exists a significant body of work on exploring deep learning architectures for real-valued neural networks. Deep complex-valued neural networks are still to be explored. Previous work has also shown the significance of the activation, not only for the training and gradient computation, but also for the accuracy. Therefore, the width, depth and the choice of activation function need to be considered together. We aim to fill this gap by systematically exploring the performance of multi-layered architectures on simple classification taks. Many fundamental building blocks of neural networks can be used in the complex domain by replacing real-valued parameters with complex parameters. However, there are some differences in training complex-valued neural networks. We introduce the building blocks and consider differences in structure and training. While the convolution on the complex plane using complex-valued filters is natural, it has been investigated in related literature (see section 2). In this work we focus on layers consisting of complex-valued neurons as building blocks and their use in multi-layer architecture. We define a complex-valued neuron analogous to its real-valued counter-part. In consequence we can use projection onto a complex weight matrix to realise complex-numbered embeddings. 
The complex valued neuron can be defined as: DISPLAYFORM0 with the (real or complex) input x ∈ ¼ n, complex weight w ∈ ¼ n and complex bias b ∈ ¼. Arranging m neurons into a layer: DISPLAYFORM1 Similarly, we can define the projection onto a complex matrix if the input x is a projector (e.g. one-hot vector).The activation function φ in all of the above definitions can be a real function φ: ¼ → or complex function φ: ¼ → ¼, but the function always acts on a complex variable. We will consider the choice of the non-linear activation function φ in more detail in section 5.The loss function J should be a real function J: ¼ → or J:→. Since there is no total ordering on the field of complex numbers, because the i 2 = −1, a complex-valued function may lead to added difficulties in training. To be able to interpret the output of the last layer as probability one, can use an additional activation function. Thus the activation of the output layer is sigmoid(φ(z)) resp. so f tmax(φ(z)) with φ: ¼ → and is used as a real number in a classical loss function (e.g. cross entropy).Both activation and loss functions are not always complex-differentiable. Hence, the training process in the complex domain differs. Similar to a real function, a complex function f: DISPLAYFORM2 A complex-valued function of one or more complex variables that is entire and complex differentiable is called holomorphic. While in the real-valued case the existence of a limit is sufficient for differentiability, the complex definition in equation 3 implies a stronger property. We map ¼ to 2 to illustrate this point. A complex function f (x + iy) = u(x, y) + iv(x, y) with real-differentiable functions u(x, y) and v(x, y) is complex-differentiable if they satisfy the Cauchy-Riemann equations: DISPLAYFORM3 We simply separate a complex number z ∈ ¼ into two real numbers z = x + iy. For f to be holomorphic, the limit not only needs to exist for the two functions u(x, y) and v(x, y), but they must also satisfy the Cauchy-Riemann equations. That also means that a function can be nonholomorphic (not complex-differentiable) in z, but still be analytic in its parts x, y. That is exactly if the two functions are real-differentiable, but do not satisfy the Cauchy-Riemann equations. To be able to apply the chain rule, the basic principle of backpropagation, to non-holomorphic functions, we exploit the fact that many non-holymorphic functions, are still differentiable in their real and imaginary parts. We consider the complex function f to be a function of z and its complex conjugatez. Effectively, we choose a different basis for our partial derivatives. DISPLAYFORM4 These derivatives are a consequence of the Wirtinger calculus (or CR-calculus). With the new basis we are able allow the application of the chain rule to non-holomorphic functions for multiple complex variables z i: DISPLAYFORM5 4 Real and Complex ParametersIn this section we discuss complex-and real-valued parameters in consideration of the number of parameters per layer and the structure assumed by complex numbers. Any complex number z = x + iy = r * e iϕ can be represented by two real numbers: the real part Re(z) = x and the imaginary part Im(z) = y or as length or magnitude |z| = x 2 + y 2 = r and a phase ϕ. Effectively the number of real parameters of each layer is doubled: DISPLAYFORM6 The number of (real-valued) parameters is a metric of capacity or the ability to approximate functions. 
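As an illustration, a complex dense layer and its real-valued parameter count can be written directly with complex arrays; the shapes below are arbitrary and only serve as an example.

```python
import numpy as np

def complex_dense(z, W, b, activation=lambda t: t):
    """Complex-valued dense layer: y = phi(W z + b), with complex W and b.
    The input z may be real or complex; the activation phi acts on complex values."""
    return activation(W @ z + b)

rng = np.random.default_rng(0)
n_in, m = 8, 4
W = rng.normal(size=(m, n_in)) + 1j * rng.normal(size=(m, n_in))
b = rng.normal(size=m) + 1j * rng.normal(size=m)
z = rng.normal(size=n_in)                       # a real input used as a complex one

y = complex_dense(z, W, b)
# Each complex weight counts as two real parameters (real and imaginary part),
# so this layer has 2 * (m * n_in + m) real-valued parameters.
real_params = 2 * (W.size + b.size)
print(y.shape, y.dtype, real_params)            # (4,) complex128 72
```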
Too many and a neural network tends to overfit the data, too few and the neural network tends to underfit. For a comparison of architectures the real-valued parameters per layer should be equal (or at least as close as possible) in the real architecture and its complex counter part. This ensures that models have the same capacity and a comparison shows the performance difference due to the added structure, not due to varying capacity. Consider the number of parameters in a fully-connected layer in the real case and in the complex case. Let n be the input dimension and m the number of neurons. DISPLAYFORM7 For a multi-layer perceptron with k the number of hidden layers, and output dimension c the number of real-valued parameters without bias is given by: DISPLAYFORM8 At first glance designing comparable multi-layer neural network architectures, i.e. they have the same number of real-valued parameters in each layer, is trivial. Halving the number of neurons in every layer will not achieve a comparison, because the number of neurons define the output dimensions of the layer and the following layer's input dimension. We adressed this problem by choosing MLP architectures with an even number of hidden layers k and choose the number of neurons per layer to be alternating between m and m 2. Thus we receive the same number of real parameters in each layer compared to a real-valued network. As an example, let us consider the dimensions of outputs and weights in k = 4. For the real-valued case: DISPLAYFORM9 where m i is the number of (complex or real) neurons of the i-th layer. The equivalent with m complex-valued neurons would be: DISPLAYFORM10 Another approach to the design of comparable architectures is to work with a parameter budget. Given a fixed budget of real parameters p we can define real or complex multi-layer perceptron with an even number k of hidden layers such that the network's parameters are within that budget. All k + 2 layers have the same number of real-valued neurons m or complex-valued neurons m ¼. DISPLAYFORM11 DISPLAYFORM12 Despite the straight forward use and representation in neural networks, complex numbers define an additional structure compared to real-valued networks. This interaction of the two parts becomes particularly apparant if we consider operations on complex numbers to be composed of the real and imaginary part or magnitude and phase: DISPLAYFORM13 with complex numbers z 1 = a + ib, z 2 = c + id. In an equivalent representation with Euler's constant e iϕ = cos(ϕ) + isin(ϕ) the real parts do not interact.z 1 z 2 = (r 1 e iϕ 1)(r 2 e iϕ 2) = (r 1 r 2 e iϕ 1 +ϕ 2), DISPLAYFORM14 Complex parameters increase the computational complexity of a neural network, since more operations are required. Instead of a single real-valued multiplication operation, up to four real multplication and two real additions are required. Depending on the implementation and the representation, this can be significantly reduced. Nevertheless, it is not sufficient to double the numbers of real parameters per layer to achieve the same effect as in complex-valued neural networks. This is also illustrated expressing a complex number z = a + ib ∈ ¼ as 2 × 2 matrices M in the ring of M 2 : DISPLAYFORM15 An augmented representation of an input x allows the represention of complex matrix multiplication with an weight matrix W as larger real matrix multiplication: DISPLAYFORM16 This added structure, however, also means that architecture design needs to be reconsidered. 
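The equivalence between a complex matrix multiplication and its augmented real block form can be checked numerically, for example:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
W = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
x = rng.normal(size=n) + 1j * rng.normal(size=n)

# Augmented real representation: stack real and imaginary parts and use the
# structured block matrix [[Re W, -Im W], [Im W, Re W]].
W_aug = np.block([[W.real, -W.imag],
                  [W.imag,  W.real]])
x_aug = np.concatenate([x.real, x.imag])

y_aug = W_aug @ x_aug                      # real vector of length 2m
y = W @ x                                  # complex vector of length m
assert np.allclose(y_aug[:m], y.real) and np.allclose(y_aug[m:], y.imag)
print("augmented real matmul reproduces the complex matmul")
```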
A deep learning architecture which performs well with real-valued parameters, may not work for complexvalued parameters and vice versa. In later sections we experimentally investigate what consequences this particular structure has for the overall performance of a model. In any neural network, real or complex, an important decision is the choice of non-linearity. With the same number of parameters in each layer, we are able to study the effects activation functions have on the overall performance. The Liouville Theorem states that any bounded holomorphic function f: ¼to¼ (that is differentiable on the entire complex plane) must be constant. Hence, we need to choose unbounded and/or non-holomorphic activation functions. We chose the identity function to investigate the performance of complex models assuming a function which is linearly separable in the complex parameters by not introducing a non-linearity into the model. The hyperbolic tangents is a well-studied function and defined for both complex and real numbers. The rectifier linear unit is a well-studied function and illustrates the separate application of an activation function. The magnitude and squared magnitude functions are chosen to map complex numbers to real numbers.• Identity (or no activation function): DISPLAYFORM0 • Hyperbolic tangent: DISPLAYFORM1 • Rectifier linear unit (relU): DISPLAYFORM2 • Magnitude squared: DISPLAYFORM3 • Magnitude (or complex absolute): DISPLAYFORM4 In the last layer we apply an activation function φ: ¼ → before using the softmax or sigmoid to use in a receive a real loss. Note that the squared magnitude allows a more efficient implementation than the magnitude. Thus we change the activation function of the last layer to: DISPLAYFORM5 Applying the two functions in the opposite order as in |sigmoid(z)| 2 and |so f tmax(z)| 2 does not return probabilities from the last layer of a network and would take away the direct interpretability of the models output. To compare real and complex-valued neural networks and their architecture we chose various classification tasks and defined experiments. The settings are as follows:• Experiment 1: We tested multi-layer perceptrons (MLP) with with k = 0, 2, 4, 8 hidden layers, fixed width of units in each layer in real-valued architectures and alternating 64, 32 units in complex-valued architectures (see section 4), no fixed budget, applied to Reuters topic classification, MNIST digit classification, CIFAR-10 Image classification, CIFAR-100 image classification. Reuters topic classification and MNIST digit classification use 64 units per layer, CIFAR-10 and CIFAR-100 use 128 units per layer. All tested activation functions are introduced in 5.• Experiment 2: We tested multi-layer perceptrons (MLP) with fixed budget of 500,000 realvalued parameters, no fixed width, applied to MNIST digit classification, CIFAR-10 Image classification, CIFAR-100 image classification and Reuters topic classification. All tested activation functions introduced are in section 5. Used sigmoid(|z| 2) function for the gates.• Experiment 3: We tested the Memory Network architecture introduced by BID17 as complex-valued network in two versions -one below and one above parameter budget of the real-valued network. We used the bAbI question answering tasks with one, two and three supporting facts. Activation functions in each layer were defined by the original publication. The network used a recurrent layer, which defined by replacing the real-valued weight matrices with complex weight matrices. 
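For reference, the activation functions of section 5 and the softmax(|z|^2) output used in these experiments can be sketched as follows; this is an illustration, not the experimental code.

```python
import numpy as np

# Candidate activations for complex feature vectors z (cf. section 5).
identity   = lambda z: z
split_relu = lambda z: np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)  # relU on parts
ctanh      = lambda z: np.tanh(z)                 # complex hyperbolic tangent
mag_sq     = lambda z: np.abs(z) ** 2             # |z|^2, maps C -> R
mag        = lambda z: np.abs(z)                  # |z|,  maps C -> R

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Output layer: softmax(|z|^2) turns complex logits into class probabilities.
z = np.array([0.3 + 1.0j, -0.5 + 0.2j, 1.2 - 0.7j])
print(softmax(mag_sq(z)))
```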
For all of our experiments we used the weight initialisation discussed by BID14. However, to reduce the impact of the initialisation we ran each model at least 10 times. The larger memory networks were initialised 30 times. All models were trained over 100 epochs with an Adam optimisation. We used categorical or binary cross entropy as a loss function for all of our experiments and models. We used sigmoid(|z| 2) or so f tmax(|z| 2) as activation function for the last layer of the complex models.7 Results and Discussion TAB1, 3, 4 show the for experiment 1. Generally, the performance of complex and real neural network in this setting is similar, altough the complex valued neural network tends to perform slightly worse. We found that the best choice for the activation function for the complex neural network is relu applied separatly to the imaginary and real parts. Suprisingly the hyperbolic tangents tanh and squared magnitude |z| 2 perform significantly worse Tables 5, 6, 7, 8 show the for experiment 2. Similar to experiment 1 the show that the best choice for the activation function for the complex neural network is relu applied separatly to the imaginary and real parts. In both experiments with the depth of the architecture the performance of the complex neural network decreases significantly. These experiments illustrate that an increased width per layers outperforms an increased depth in classification tasks. This is true for the real and the complex case. Table 9 shows the for experiment 3. For a single supporting fact in the bABi data set realvalued neural network. In the first bABi task the real-valued version outperforms the two complex version of the memory network. In the more diffcult tasks with two or three supporting facts both, the small and large version, of the complex-valued neural network outperform the real-valued versiondespite the reduce number of parameters. We made the observation that the assumed structure with introducing complex numbers into neural networks has a regularising effect on the training procedure if used in combination with real-valued input. We also found that complex-valued neural networks are more sensitive towards their initialisation than real-valued neural networks. Overall the complex-valued neural networks do not perform as well as expected. This may be due to the nature of the chosen tasks or the simple architecture of a multi-layer perceptron. Complex neural networks should be used if the data is naturally in the complex domain or can be mapped to complex numbers. The architecture should be selected to respect the expected structure complex numbers introduce to the network. In the architecture needs to reflect the interaction of the real and imaginary part. If the structure is ignored, the model will not perform as well as the corresponding real-valued network. Table 5: Test accuaracy of multi-layer perceptron conisting of k + 2 dense layers with an overall budget of 500, 000 real-valued parameters on MNIST digit classification. Selected from best of 10 runs with each run 100 epochs to converge. Table 6: Test accuaracy of multi-layer perceptron conisting of k + 2 dense layers with an overall budget of 500, 000 real-valued parameters on Reuters topic classification. Selected from best of 10 runs with each run 100 epochs to converge. Table 7: Test accuaracy of multi-layer perceptron conisting of k + 2 dense layers with an overall budget of 500, 000 real-valued parameters on CIFAR-10 image classification. 
Selected from best of 10 runs with each run 100 epochs to converge. Table 8: Test accuaracy of multi-layer perceptron conisting of k + 2 dense layers with an overall budget of 500, 000 real-valued parameters on CIFAR-100 image classification. Selected from best of 10 runs with each run 100 epochs to converge. identity 0.0000 0.0000 tanh 0.0000 0.0000 relU 0.0000 0.0000 |z| 2 0.0000 0.0000 |z| 0.0000 0.0000 Table 9: Test accuaracy of Memory Networks BID17 in complex and real version on the first three bAbI tasks. Selected from best of 30 runs with each run 100 epochs to converge.
Comparison of complex- and real-valued multi-layer perceptrons with respect to the number of real-valued parameters.
The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks. In many domains, one has to deal with large-scale data with underlying non-Euclidean structure. Prominent examples of such data are social networks, genetic regulatory networks, functional networks of the brain, and 3D shapes represented as discrete manifolds. The recent success of deep neural networks and, in particular, convolutional neural networks (CNNs) BID19 have raised the interest in geometric deep learning techniques trying to extend these models to data residing on graphs and manifolds. Geometric deep learning approaches have been successfully applied to computer graphics and vision; BID3 a); BID24, brain imaging BID18, and drug design BID10 problems, to mention a few. For a comprehensive presentation of methods and applications of deep learning on graphs and manifolds, we refer the reader to the review paper BID4.Related work. The earliest neural network formulation on graphs was proposed by BID11 and BID27, combining random walks with recurrent neural networks (their paper has recently enjoyed renewed interest in BID20 ; BID30). The first CNN-type architecture on graphs was proposed by BID5. One of the key challenges of extending CNNs to graphs is the lack of vector-space structure and shift-invariance making the classical notion of convolution elusive. Bruna et al. formulated convolution-like operations in the spectral domain, using the graph Laplacian eigenbasis as an analogy of the Fourier transform BID29 ). BID13 used smooth parametric spectral filters in order to achieve localization in the spatial domain and keep the number of filter parameters independent of the input size. BID8 proposed an efficient filtering scheme using recurrent Chebyshev polynomials applied on the Laplacian operator. BID17 simplified this architecture using filters operating on 1-hop neighborhoods of the graph. BID0 proposed a Diffusion CNN architecture based on random walks on graphs. BID24 (and later,) proposed a spatial-domain generalization of CNNs to graphs using local patch operators represented as Gaussian mixture models, showing a significant advantage of such models in generalizing across different graphs. In BID25, spectral graph CNNs were extended to multiple graphs and applied to matrix completion and recommender system problems. Main contribution. In this paper, we construct graph CNNs employing an efficient spectral filtering scheme based on Cayley polynomials that enjoys similar advantages of the Chebyshev filters BID8 ) such as localization and linear complexity. 
The main advantage of our filters over BID8 is their ability to detect narrow frequency bands of importance during training, and to specialize on them while being well-localized on the graph. We demonstrate experimentally that this affords our method greater flexibility, making it perform better on a broad range of graph learning problems. Notation. We use a, a, and A to denote scalars, vectors, and matrices, respectively.z denotes the conjugate of a complex number, Re{z} its real part, and i is the imaginary unit. diag(a 1, . . ., a n) denotes an n×n diagonal matrix with diagonal elements a 1,..., a n. Diag(A) = diag(a 11, . . ., a nn) denotes an n × n diagonal matrix obtained by setting to zero the off-diagonal elements of A. Off(A) = A − Diag(A) denotes the matrix containing only the off-diagonal elements of A. I is the identity matrix and A • B denotes the Hadamard (element-wise) product of matrices A and B. Proofs are given in the appendix. Spectral graph theory. Let G = ({1, . . ., n}, E, W) be an undirected weighted graph, represented by a symmetric adjacency matrix W = (w ij). We define DISPLAYFORM0 We denote by N k,m the k-hop neighborhood of vertex m, containing vertices that are at most k edges away from m. The unnormalized graph Laplacian is an n × n symmetric positive-semidefinite matrix DISPLAYFORM1 In the following, we use the generic notation ∆ to refer to some Laplacian. Since both normalized and unnormalized Laplacian are symmetric and positive semi-definite matrices, they admit an eigendecomposition ∆ = ΦΛΦ, where Φ = (φ 1, . . . φ n) are the orthonormal eigenvectors and Λ = diag(λ 1, . . ., λ n) is the diagonal matrix of corresponding non-negative eigenvalues (spectrum) 0 = λ 1 ≤ λ 2 ≤... ≤ λ n. The eigenvectors play the role of Fourier atoms in classical harmonic analysis and the eigenvalues can be interpreted as (the square of) frequencies. Given a signal f = (f 1, . . ., f n) on the vertices of graph G, its graph Fourier transform is given byf = Φ f. Given two signals f, g on the graph, their spectral convolution can be defined as the element-wise product of the Fourier transforms, f g = Φ (Φ g)•(Φ f) = Φ diag(ĝ 1, . . .,ĝ n)f, which corresponds to the property referred to as the Convolution Theorem in the Euclidean case. Spectral CNNs. BID5 used the spectral definition of convolution to generalize CNNs on graphs, with a spectral convolutional layer of the form DISPLAYFORM2 (Here the n × p and n × q matrices DISPLAYFORM3 represent respectively the p-and q-dimensional input and output signals on the vertices of the graph, (ĝ l,l,1, . . .,ĝ l,l,k) is a k × k diagonal matrix of spectral multipliers representing a learnable filter in the frequency domain, and ξ is a nonlinearity (e.g., ReLU) applied on the vertex-wise function values. Pooling is performed by means of graph coarsening, which, given a graph with n vertices, produces a graph with n < n vertices and transfers signals from the vertices of the fine graph to those of the coarse one. This framework has several major drawbacks. First, the spectral filter coefficients are basis dependent, and consequently, a spectral CNN model learned on one graph cannot be transferred to another graph. Second, the computation of the forward and inverse graph Fourier transforms incur expensive O(n 2) multiplication by the matrices Φ, Φ, as there is no FFT-like algorithms on general graphs. 
Third, there is no guarantee that the filters represented in the spectral domain are localized in the spatial domain (locality property simulates local reception fields, BID7); assuming k = O(n) Laplacian eigenvectors are used, a spectral convolutional layer requires O(pqk) = O(n) parameters to train. DISPLAYFORM4 To address the latter issues, BID13 argued that smooth spectral filter coefficients in spatially-localized filters (an argument similar to vanishing moments). The filter coefficients are represented asĝ i = g(λ i), where g(λ) is a smooth transfer function of frequency λ. Applying such filter to signal f can be expressed as DISPLAYFORM5 where applying a function to a matrix is understood in the operator functional calculus sense (applying the function to the matrix eigenvalues). BID13 used parametric functions of the form g(λ) = r j=1 α j β j (λ), where β 1 (λ),..., β r (λ) are some fixed interpolation kernels such as splines, and α = (α 1, . . ., α r) are the interpolation coefficients used as the optimization variables during the network training. In matrix notation, the filter is expressed as Gf = Φdiag(Bα)Φ f, where B = (b ij) = (β j (λ i)) is a k × r matrix. Such a construction in filters with r = O parameters, independent of the input size. However, the authors explicitly computed the Laplacian eigenvectors Φ, ing in high complexity. ChebNet. BID8 used polynomial filters represented in the Chebyshev basis DISPLAYFORM6 applied to rescaled frequencyλ ∈ [−1, 1]; here, α is the (r + 1)-dimensional vector of polynomial coefficients parametrizing the filter and optimized for during the training, and Such an approach has several important advantages. First, since g α (∆) = r j=0 α j T j (∆) contains only matrix powers, additions, and multiplications by scalar, it can be computed avoiding the explicit expensive O(n 3) computation of the Laplacian eigenvectors. Furthermore, due to the recursive definition of the Chebyshev polynomials, the computation of the filter g α (∆)f entails applying the Laplacian r times, ing in O(rn) operations assuming that the Laplacian is a sparse matrix with O non-zero elements in each row (a valid hypothesis for most real-world graphs that are sparsely connected). Second, the number of parameters is O as r is independent of the graph size n. Third, since the Laplacian is a local operator affecting only 1-hop neighbors of a vertex and a polynomial of degree r of the Laplacian affects only r-hops, the ing filters have guaranteed spatial localization. DISPLAYFORM7 A key disadvantage of Chebyshev filters is the fact that using polynomials makes it hard to produce narrow-band filters, as such filters require very high order r, and produce unwanted non-local filters. This deficiency is especially pronounced when the Laplacian has clusters of eigenvalues concentrated around a few frequencies with large spectral gap (FIG4, middle right). Such a behavior is characteristic of graphs with community structures, which is very common in many real-world graphs, for instance, social networks. To overcome this major drawback, we need a new class of filters, that are both localized in space, and are able to specialize in narrow bands in frequency. A key construction of this paper is a family of complex filters that enjoy the advantages of Chebyshev filters while avoiding some of their drawbacks. A Cayley polynomial of order r is a real-valued function with complex coefficients, DISPLAYFORM0 where c = (c 0, . . 
., c r) is a vector of one real coefficient and r complex coefficients and h > 0 is the spectral zoom parameter, that will be discussed later. A Cayley filter G is a spectral filter defined on real signals f by where the parameters c and h are optimized for during training. Similarly to the Chebyshev filters, Cayley filters involve basic matrix operations such as powers, additions, multiplications by scalars, and also inversions. This implies that application of the filter Gf can be performed without explicit expensive eigendecomposition of the Laplacian operator. In the following, we show that Cayley filters are analytically well behaved; in particular, any smooth spectral filter can be represented as a Cayley polynomial, and low-order filters are localized in the spatial domain. We also discuss numerical implementation and compare Cayley and Chebyshev filters. DISPLAYFORM1 Analytic properties. Cayley filters are best understood through the Cayley transform, from which their name derives. Denote by e iR = {e iθ : θ ∈ R} the unit complex circle. The Cayley transform DISPLAYFORM2 x+i is a smooth bijection between R and e iR \ {1}. The complex matrix C(h∆) DISPLAYFORM3 −1 obtained by applying the Cayley transform to the scaled Laplacian h∆ has its spectrum in e iR and is thus unitary. Since DISPLAYFORM4. Therefore, using 2Re{z} = z + z, any Cayley filter can be written as a conjugateeven Laurent polynomial w.r.t. C(h∆), DISPLAYFORM5 Since the spectrum of C(h∆) is in e iR, the operator C j (h∆) can be thought of as a multiplication by a pure harmonic in the frequency domain e iR for any integer power j, DISPLAYFORM6 A Cayley filter can be thus seen as a multiplication by a finite Fourier expansions in the frequency domain e iR. Since is conjugate-even, it is a (real-valued) trigonometric polynomial. Note that any spectral filter can be formulated as a Cayley filter. Indeed, spectral filters g(∆) are specified by the finite sequence of values g(λ 1),..., g(λ n), which can be interpolated by a trigonometric polynomial. Moreover, since trigonometric polynomials are smooth, we expect low order Cayley filters to be well localized in some sense on the graph, as discussed later. Finally, in definition we use complex coefficients. If c j ∈ R then FORMULA14 is an even cosine polynomial, and if c j ∈ iR then is an odd sine polynomial. Since the spectrum of h∆ is in R +, it is mapped to the lower half-circle by C, on which both cosine and sine polynomials are complete and can represent any spectral filter. However, it is beneficial to use general complex coefficients, since complex Fourier expansions are overcomplete in the lower half-circle, thus describing a larger variety of spectral filters of the same order without increasing the computational complexity of the filter. Spectral zoom. To understand the essential role of the parameter h in the Cayley filter, consider C(h∆). Multiplying ∆ by h dilates its spectrum, and applying C on the maps the non-negative spectrum to the complex half-circle. The greater h is, the more the spectrum of h∆ is spread apart in R +, ing in better spacing of the smaller eigenvalues of C(h∆). On the other hand, the smaller h is, the further away the high frequencies of h∆ are from ∞, the better spread apart are the high frequencies of C(h∆) in e iR (see FIG1). Tuning the parameter h allows thus to'zoom' in to different parts of the spectrum, ing in filters specialized in different frequency bands. Numerical properties. 
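A direct (dense, O(n^3)) sketch of this recursive computation, obtaining each y_j by solving (h∆ + iI) y_j = (h∆ − iI) y_{j−1} exactly, is given below; the toy path-graph Laplacian and the filter coefficients are arbitrary and chosen only for illustration.

```python
import numpy as np

def cayley_filter_exact(L, f, c, h):
    """Apply G f = c0*f + 2*Re( sum_j c_j * C(hL)^j f ) with exact linear solves.
    c = [c0 (real), c1, ..., cr (complex)]; O(n^3), for illustration only."""
    n = L.shape[0]
    numer = h * L - 1j * np.eye(n)
    denom = h * L + 1j * np.eye(n)
    y = f.astype(complex)
    out = c[0] * f.astype(complex)
    for cj in c[1:]:
        y = np.linalg.solve(denom, numer @ y)   # y_j = C(hL) y_{j-1}
        out += 2 * cj * y                       # accumulate 2 * c_j * y_j
    return out.real                             # taking Re{...} at the end

# toy unnormalized Laplacian of a path graph on 5 vertices
W = np.diag(np.ones(4), 1); W = W + W.T
L = np.diag(W.sum(1)) - W
f = np.random.default_rng(0).normal(size=5)
c = [0.5, 0.2 - 0.1j, 0.05 + 0.2j]              # order r = 2
print(cayley_filter_exact(L, f, c, h=1.0))
```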
The numerical core of the Cayley filter is the computation of C j (h∆)f for j = 1,..., r, performed in a sequential manner. Let y 0,..., y r denote the solutions of the following linear recursive system, DISPLAYFORM7 Note that sequentially approximating y j in using the approximation of y j−1 in the rhs is stable, since C(h∆) is unitary and thus has condition number 1.Equations FORMULA16 can be solved with matrix inversion exactly, but it costs O(n 3). An alternative is to use the Jacobi method, 1 which provides approximate solutionsỹ j ≈ y j. Let J = −(Diag(h∆ + iI)) −1 Off(h∆ + iI) be the Jacobi iteration matrix associated with equation. For the unnormalized Laplacian, J = (hD + iI) −1 hW. Jacobi iterations for approximating for a given j have the form DISPLAYFORM8 initialized withỹ DISPLAYFORM9. The application of the approximate Cayley filter is given by Gf = r j=0 c jỹj ≈ Gf, and takes O(rKn) operations under the previous assumption of a sparse Laplacian. The method can be improved by normalizing ỹ j 2 = f 2.Next, we give an error bound for the approximate filter. For the unnormalized Laplacian, let DISPLAYFORM10 < 1. For the normalized Laplacian, we assume that (h∆ n + iI) is dominant diagonal, which gives κ = J ∞ < 1. Proposition 1. Under the above assumptions, Proposition 1 is pessimistic in the general case, while requires strong assumptions in the regular case. We find that in most real life situations the behavior is closer to the regular case. It also follows from Proposition 1 that smaller values of the spectral zoom h in faster convergence, giving this parameter an additional numerical role of accelerating convergence. DISPLAYFORM11 Complexity. In practice, an accurate inversion of (h∆ + iI) is not required, since the approximate inverse is combined with learned coefficients, which "compensate", as necessary, for the inversion inaccuracy. In a CayleyNet for a fixed graph, we fix the number of Jacobi iterations. Since the convergence rate depends on κ, that depends on the graph, different graphs may need different numbers of iterations. The convergence rate also depends on h. Since there is a trade-off between the spectral zoom amount h, and the accuracy of the approximate inversion, and since h is a learnable parameter, the training finds the right balance between the spectral zoom amount and the inversion accuracy. We study the computational complexity of our method, as the number of edges n of the graph tends to infinity. For every constant of a graph, e.g d, κ, we add the subscript n, indicating the number of edges of the graph. For the unnormalized Laplacian, we assume that d n and h n are bounded, which gives κ n < a < 1 for some a independent of n. For the normalized Laplacian, we assume that κ n < a < 1. By Theorem 1, fixing the number of Jacobi iterations K and the order of the filter r, independently of n, keeps the Jacobi error controlled. As a , the number of parameters is O, and for a Laplacian modeled as a sparse matrix, applying a Cayley filter on a signal takes O(n) operations. Localization. Unlike Chebyshev filters that have the small r-hop support, Cayley filters are rational functions supported on the whole graph. However, it is still true that Cayley filters are well localized on the graph. Let G be a Cayley filter and δ m denote a delta-function on the graph, defined as one at vertex m and zero elsewhere. We show that Gδ m decays fast, in the following sense: Definition 2 (Exponential decay on graphs). 
Let f be a signal on the vertices of graph G, 1 ≤ p ≤ ∞, and 0 < < 1. Denote by S ⊆ {1, . . ., n} a subset of the vertices and by S c its complement. We say that the L p -mass of f is supported in S up to if f | S c p ≤ f p, where f | S c = (f l) l∈S c is the restriction of f to S c. We say that f has (graph) exponential decay about vertex m, if there exists some γ ∈ and c > 0 such that for any k, the L p -mass of f is supported in N k,m up to cγ k. Here, N k,m is the k-hop neighborhood of m. 1 We remind that the Jacobi method for solving Ax = b consists in decomposing A = Diag(A) + Off(A) and obtaining the solution iteratively as Remark 3. Note that Definition 2 is analogous to classical exponential decay on Euclidean space: DISPLAYFORM12 DISPLAYFORM13 Theorem 4. Let G be a Cayley filter of order r. Then, Gδ m has exponential decay about m in L 2, with constants c = 2M1 Gδm 2 and γ = κ 1/r (where M and κ are from Proposition 1). Chebyshev as a special case of Cayley. For a regular graph with D = dI, using Jacobi inversion based on zero iterations, we get that any Cayley filter of order r is a polynomial of ∆ in the monomial base h∆−i hd+i j. In this situation, a Chebyshev filter, which is a real valued polynomial of ∆, is a special case of a Cayley filter. Spectral zoom and stability. Generally, both Chebyshev polynomials and trigonometric polynomials give stable approximations, optimal for smooth functions. However, this crude statement is oversimplified. One of the drawbacks in Chebyshev filters is the fact that the spectrum of ∆ is always mapped to [−1, 1] in a linear manner, making it hard to specialize in small frequency bands. In Cayley filters, this problem is mitigated with the help of the spectral zoom parameter h. As an example, consider the community detection problem discussed in the next section. A graph with strong communities has a cluster of small eigenvalues near zero. Ideal filters g(∆) for extracting the community information should be able to focus on this band of frequencies. Approximating such filters with Cayley polynomials, we zoom in to the band of interest by choosing the right h, and then project g onto the space of trigonometric polynomials of order r, getting a good and stable approximation (FIG4, bottom right). However, if we project g onto the space of Chebyshev polynomials of order r, the interesting part of g concentrated on a small band is smoothed out and lost (FIG4, middle right). Thus, projections are not the right way to approximate such filters, and the stability of orthogonal polynomials cannot be invoked. When approximating g on the small band using polynomials, the approximation will be unstable away from this band; small perturbations in g will in big perturbations in the Chebyshev filter away from the band. For this reason, we say that Cayley filters are more stable than Chebyshev filters. Regularity. We found that in practice, low-order Cayley filters are able to model both very concentrated impulse-like filters, and wider Gabor-like filters. Cayley filters are able to achieve a wider range of filter supports with less coefficients than Chebyshev filters FIG3 ), making the Cayley class more regular than Chebyshev. Complexity. Under the assumption of sparse Laplacians, both Cayley and Chebyshev filters incur linear complexity O(n). Besides, the new filters are equally simple to implement as Chebyshev filters; as seen in Eq.7, they boil down to simple sparse matrix-vector multiplications providing a GPU friendly implementation. 
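To make the preceding remark concrete, the following sketch applies a Cayley filter using only matrix-vector products and K Jacobi iterations per power of C(h∆). It mirrors the exact version above and is an illustration rather than the GPU implementation used in the experiments.

```python
import numpy as np

def cayley_filter_jacobi(L, f, c, h, K=10):
    """Approximate Cayley filtering with K Jacobi iterations per power of C(hL),
    using only matrix-vector products (O(r*K*n) for a sparse Laplacian)."""
    n = L.shape[0]
    M = h * L + 1j * np.eye(n)
    diag = np.diag(M)                      # Diag(hL + iI), kept as a vector
    off = M - np.diag(diag)                # Off(hL + iI)
    numer = h * L - 1j * np.eye(n)
    y = f.astype(complex)
    out = c[0] * f.astype(complex)
    for cj in c[1:]:
        b = (numer @ y) / diag             # b_j = Diag^{-1} (hL - iI) y_{j-1}
        y_new = b.copy()                   # initial guess y_j^{(0)} = b_j
        for _ in range(K):
            y_new = b - (off @ y_new) / diag   # Jacobi step: y <- J y + b
        y = y_new
        out += 2 * cj * y
    return out.real

W = np.diag(np.ones(4), 1); W = W + W.T
L = np.diag(W.sum(1)) - W
f = np.random.default_rng(0).normal(size=5)
c = [0.5, 0.2 - 0.1j, 0.05 + 0.2j]
print(cayley_filter_jacobi(L, f, c, h=1.0, K=15))
```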
Experimental settings. We test the proposed CayleyNets reproducing the experiments of Defferrard et al. FORMULA7; BID17; BID24 a) and using ChebNet BID8 ) as our main baseline method. All the methods were implemented in TensorFlow of M. BID21. The experiments were executed on a machine with a 3.5GHz Intel Core i7 CPU, 64GB of RAM, and NVIDIA Titan X GPU with 12GB of RAM. SGD+Momentum and Adam BID16 ) optimization methods were used to train the models in MNIST and the rest of the experiments, respectively. Training and testing were always done on disjoint sets. Community detection. We start with an experiment on a synthetic graph consisting of 15 communities with strong connectivity within each community and sparse connectivity across communities FIG4. Though rather simple, such a dataset allows to study the behavior of different algorithms in controlled settings. On this graph, we generate noisy step signals, defined as f i = 1 + σ i if i belongs to the community, and f i = σ i otherwise, where DISPLAYFORM0 The goal is to classify each such signal according to the community it belongs to. The neural network architecture used for this task consisted of a spectral convolutional layer (based on Chebyshev or Cayley filters) with 32 output features, a mean pooling layer, and a softmax classifier for producing the final classification into one of the 15 classes. The classification accuracy is shown in FIG4 (right, top) along with examples of learned filters (right, bottom). We observe that CayleyNet significantly outperforms ChebNet for smaller filter orders, with an improvement as large as 80%.Studying the filter responses, we note that due to the capability to learn the spectral zoom parameter, CayleyNet allows to generate band-pass filters in the low-frequency band that discriminate well the communities FIG4. Complexity. We experimentally validated the computational complexity of our model applying filters of different order r to synthetic 15-community graphs of different size n using exact matrix inversion and approximation with different number of Jacobi iterations FIG5 center and right, Figure 6 in the appendix). All times have been computed running 30 times the considered models and averaging the final . As expected, approximate inversion guarantees O(n) complexity. We further conclude that typically very few Jacobi iterations are required FIG5, left shows that our model with just one Jacobi iteration outperforms ChebNet for low-order filters on the community detection problem).MNIST. Following BID8; BID24, for a toy example, we approached the classical MNIST digits classification as a learning problem on graphs. Each pixel of an image is a vertex of a graph (regular grid with 8-neighbor connectivity), and pixel color is a signal on the graph. We used a graph CNN architecture with two spectral convolutional layers based on Chebyshev and Cayley filters (producing 32 and 64 output features, respectively), interleaved with pooling layers performing 4-times graph coarsening using the Graclus algorithm (Dhillon et al. FORMULA7 finally a fully-connected layer (this architecture replicates the classical LeNet5, BID19, architecture, which is shown for comparison). MNIST classification are reported in TAB0. CayleyNet (11 Jacobi iterations) achieves the same (near perfect) accuracy as ChebNet with filters of lower order (r = 12 vs 25).Examples of filters learned by ChebNet and CayleyNet are shown in FIG3. 
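The effect of the learned spectral zoom can be visualized by evaluating the filter's spectral response g_{c,h}(λ) on a frequency grid for several values of h; the coefficients below are arbitrary and chosen only for illustration.

```python
import numpy as np

def cayley_response(lam, c, h):
    """Spectral response g_{c,h}(lambda) = c0 + 2*Re( sum_j c_j * C(h*lambda)^j ),
    where C(x) = (x - i) / (x + i) is the Cayley transform."""
    C = (h * lam - 1j) / (h * lam + 1j)
    g = np.full_like(lam, float(np.real(c[0])), dtype=float)
    for j, cj in enumerate(c[1:], start=1):
        g += 2 * np.real(cj * C ** j)
    return g

lam = np.linspace(0, 2, 9)               # frequency grid of a normalized Laplacian
c = [0.3, 0.4 - 0.2j, -0.1 + 0.3j]       # arbitrary order-2 coefficients
for h in (0.5, 2.0, 10.0):               # larger h "zooms" into the low-frequency band
    print(f"h={h:5.1f}:", np.round(cayley_response(lam, c, h), 3))
```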
0.1776 +/-0.06079 sec and 0.0268 +/-0.00841 sec are respectively required by CayleyNet and ChebNet for analyzing a batch of 100 images at test time. FORMULA7 ) 86.64 % 47K ChebNet BID8 ) 87.07 % 46K CayleyNet 88.09 % 46KCitation network. Next, we address the problem of vertex classification on graphs using the popular CORA citation graph, BID28. Each of the 2708 vertices of the CORA graph represents a scientific paper, and an undirected unweighted edge represents a citation (5429 edges in total). For each vertex, a 1433-dimensional binary feature vector representing the content of the paper is given. The task is to classify each vertex into one of the 7 groundtruth classes. We split the graph into training (1,708 vertices), validation (500 vertices) and test (500 vertices) sets, for simulating the labeled and unlabeled information. We train ChebNet and CayleyNet with the architecture presented in BID17; BID24 (two spectral convolutional layers with 16 and 7 outputs), DCNN BID0 ) with 2 diffusion layer (10 hidden features and 2 diffusion hops) and GCN BID17 ) with 3 convolutional layer (32 and 16 hidden features). Figure 5 compares ChebNets and CayleyNets, in a number of different settings. Since ChebNets require Laplacians with spectra bounded in [−1, 1], we consider both the normalized Laplacian (the two left figures), and the scaled unnormalized Laplacian (2∆/λ max − I), where ∆ is the unnormalized Laplacian and λ max is its largest eigenvalue (the two right figures). For fair comparison, we fix the order of the filters (top figures), and fix the overall number of network parameters (bottom figures). In the bottom figure, the Cayley filters are restricted to even cosine polynomials by considering only real filter coefficients. TAB1 shows a comparison of the performance obtained with different methods (all architectures roughly present the same amount of parameters). The best CayleyNets consistently outperform the best competitors. Recommender system. In our final experiment, we applied CayleyNet to recommendation system, formulated as matrix completion problem on user and item graphs, BID24. The task is, given a sparsely sampled matrix of scores assigned by users (columns) to items (rows), to fill in the missing scores. The similarities between users and items are given in the form of column and row graphs, respectively. BID24 approached this problem as learning with a Recurrent Graph CNN (RGCNN) architecture, using an extension of ChebNets to matrices defined on multiple graphs in order to extract spatial features from the score matrix; these features are then fed into an Figure 5: ChebNet (blue) and CayleyNet (orange) test accuracies obtained on the CORA dataset for different polynomial orders. Polynomials with complex coefficients (top) and real coefficients (bottom) have been exploited with CayleyNet in the two analysis. Orders 1 to 6 have been used in both comparisons. The best CayleyNet consistently outperform the best ChebNet requiring at the same time less parameters (CayleyNet with order r and complex coefficients requires a number of parameters equal to ChebNet with order 2r).RNN producing a sequential estimation of the missing scores. Here, we repeated verbatim their experiment on the MovieLens dataset BID23 ), replacing Chebyshev filters with Cayley filters. We used separable RGCNN architecture with two CayleyNets of order r = 4 employing 15 Jacobi iterations. The are reported in TAB3. 
To present a complete comparison we further extended the experiments reported in BID24 by training sRGCNN with ChebNets of order 8, this provides an architecture with same number of parameters as the exploited CayleyNet (23K coefficients). Our version of sRGCNN outperforms all the competing methods, including the previous with Chebyshev filters reported in BID24. sRGCNNs with Chebyshev polynomials of order 4 and 8 respectively require 0.0698 +/-0.00275 sec and 0.0877 +/-0.00362 sec at test time, sRGCNN with Cayley polynomials of order 4 and 15 jacobi iterations requires 0.165 +/-0.00332 sec. BID14; BID31 ) 1.653 GMC BID15 0.996 GRALS BID26 ) 0.945 sRGCNN Cheby,r=4 BID24 0.929 sRGCNN Cheby,r=8 BID24 0.925 sRGCNN Cayley 0.922 In this paper, we introduced a new efficient spectral graph CNN architecture that scales linearly with the dimension of the input data. Our architecture is based on a new class of complex rational Cayley filters that are localized in space, can represent any smooth spectral transfer function, and are highly regular. The key property of our model is its ability to specialize in narrow frequency bands with a small number of filter parameters, while still preserving locality in the spatial domain. We validated these theoretical properties experimentally, demonstrating the superior performance of our model in a broad range of graph learning problems. First note the following classical for the approximation of Ax = b using the Jacobi method: if the initial condition is DISPLAYFORM0 In our case, note that if we start with initial conditionỹ j = 0, the next iteration givesỹ j = b j, which is the initial condition from our construction. Therefore, since we are approximating y j = C(h∆)ỹ j−1 byỹ j =ỹ DISPLAYFORM1 Define the approximation error in C(h∆) j f by DISPLAYFORM2 By the triangle inequality, by the fact that C j (h∆) is unitary, and by DISPLAYFORM3 where the last inequality is due to DISPLAYFORM4 Now, using standard norm bounds, in the general case we have DISPLAYFORM5 The solution of this recurrent sequence is DISPLAYFORM6 If we use the version of the algorithm, in which eachỹ j is normalized, we get by e j ≤ e j−1 + √ nκ K+1. The solution of this recurrent sequence is DISPLAYFORM7 We denote in this case M j = j √ nIn case the graph is regular, we have D = dI. In the non-normalized Laplacian case, DISPLAYFORM8 The spectral radius of ∆ is bounded by 2d. This can be shown as follows. a value λ is not an eigenvalue of ∆ (namely it is a regular value) if and only if (∆ − λI) is invertible. Moreover, the matrix (∆ − λI) is strictly dominant diagonal for any |λ| > 2d. By Levy-Desplanques theorem, any strictly dominant diagonal matrix is invertible, which means that all of the eigenvalues of ∆ are less than 2d in their absolute value. As a , the spectral radius of (dI − ∆) is realized on the smallest eigenvalue of ∆, namely it is |d − 0| = d. This means that the specral radius of J is DISPLAYFORM9. As a DISPLAYFORM10 = κ. We can now continue from to get e j ≤ e j−1 + J K+1 2(1 + e j−1) = e j−1 + κ K+1 (1 + e j−1).As before, we get e j ≤ jκ K+1 + O(κ 2K+2), and e j ≤ jκ K+1 if eachỹ j is normalized. We denote in this case M j = j. In the case of the normalized Laplacian of a regular graph, the spectral radius of ∆ n is bounded by 2, and the diagonal entries are all 1. Equation in this case reads J = h h+i (I − ∆ n), and J has spectral radius h √ h 2 +1. Thus J 2 = h √ h 2 +1= κ and we continue as before to get e j ≤ jκ K+1 and M j = j. 
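For reference, the classical Jacobi convergence statement invoked at the start of this appendix (its displayed form did not survive extraction) can be restated as the standard bound below, under the same splitting A = D + R; this is a textbook restatement, not a verbatim reconstruction of the missing display.

\[
\tilde y^{(k+1)} = J\,\tilde y^{(k)} + \tilde b, \qquad J = -D^{-1}R,\quad \tilde b = D^{-1}b,
\]
\[
\big\|\tilde y^{(K)} - A^{-1}b\big\| \;\le\; \|J\|^{K}\,\big\|\tilde y^{(0)} - A^{-1}b\big\|,
\]

so the iteration converges whenever \(\|J\| < 1\), and with the initialization \(\tilde y^{(0)} = \tilde b\) used above the error after \(K\) iterations is \(O(\kappa^{K+1})\) with \(\kappa = \|J\|\), the quantity bounded case by case in this appendix.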
In all cases, the overall approximation error bound follows by the triangle inequality.
Figure 6: Test (above) and training (below) times with corresponding ratios as a function of filter order r and graph size n on our community detection dataset.
A spectral graph convolutional neural network with spectral zoom properties.
1,714
scitldr
We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to "collapsing" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model’s accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy. Semantic segmentation predicts pixel-level annotations of different semantic categories for an image. Despite its performance breakthrough thanks to the prosperity of convolutional neural networks (CNNs) , as a dense structured prediction task, segmentation models commonly suffer from heavy memory costs and latency, often due to stacking convolutions and aggregating multiple-scale features, as well as the increasing input image resolutions. However, recent years witness the fast-growing demand for real-time usage of semantic segmentation, e.g., autonomous driving. Such has motivated the enthusiasm on designing low-latency, more efficient segmentation networks, without sacrificing accuracy notably (; a). The recent success of neural architecture search (NAS) algorithms has shed light on the new horizon in designing better semantic segmentation models, especially under latency of other resource constraints. Auto-DeepLab (a) first introduced network-level search space to optimize resolutions (in addition to cell structure) for segmentation tasks. and adopted pre-defined network-level patterns of spatial resolution, and searched for operators and decoders with latency constraint. Despite a handful of preliminary successes, we observe that the successful human domain expertise in designing segmentation models appears to be not fully integrated into NAS frameworks yet. For example, human-designed architectures for real-time segmentation (; a) commonly exploit multi-resolution branches with proper depth, width, operators, and downsample rates, and find them contributing vitally to the success: such flexibility has not been unleashed by existing NAS segmentation efforts. Furthermore, the trade-off between two (somewhat conflicting) goals, i.e., high accuracy and low latency, also makes the search process unstable and prone to "bad local minima" architecture options. As the well-said quote goes: "those who do not learn history are doomed to repeat it". Inheriting and inspired by the successful practice in hand-crafted efficient segmentation, we propose a novel NAS framework dubbed FasterSeg, aiming to achieve extremely fast inference speed and competitive accuracy. We designed a special search space capable of supporting optimization over multiple branches of different resolutions, instead of a single backbone. 
These searched branches are adaptively aggregated for the final prediction. To further balance between accuracy versus latency and avoiding collapsing towards either metric (e.g., good latency yet poor accuracy), we design a decoupled and fine-grained latency regularization, that facilitates a more flexible and effective calibration between latency and accuracy. Moreover, our NAS framework can be easily extended to a collaborative search (co-searching), i.e., jointly searching for a complex teacher network and a light-weight student network in a single run, whereas the two models are coupled by feature distillation in order to boost the student's accuracy. We summarize our main contributions as follows: • A novel NAS search space tailored for real-time segmentation, where multi-resolution branches can be flexibility searched and aggregated. • A novel decoupled and fine-grained latency regularization, that successfully alleviates the "architecture collapse" problem in the latency-constrained search. • A novel extension to teacher-student co-searching for the first time, where we distill the teacher to the student for further accuracy boost of the latter. • Extensive experiments demonstrating that FasterSeg achieves extremely fast speed (over 30% faster than the closest manually designed competitor on CityScapes) and maintains competitive accuracy. Human-designed CNN architectures achieve good accuracy performance nowadays (; . However, designing architectures to balance between accuracy and other resource constraints (latency, memory, FLOPs, etc.) requires more human efforts. To free human experts from this challenging trade-off, neural architecture search (NAS) has been recently introduced and drawn a booming interest (; ; ; a; a; ; c; . These works optimize both accuracy and resource utilization, via a combined loss function, or a hybrid reward signal for policy learning), or a constrained optimization formulation. Most existing resource-aware NAS efforts focus on classification tasks, while semantic segmentation has higher requirements for preserving details and rich contexts, therefore posing more dilemmas for efficient network design. Fortunately, previous handcrafted architectures for real-time segmentation have identified several consistent and successful design patterns. ENet adopted early downsampling, and ICNet further incorporated feature maps from multiresolution branches under label guidance. BiSeNet (a) fused a context path with fast downsampling and a spatial path with smaller filter strides. More works target on segmentation efficiency in terms of computation cost ) and memory usage. Their multi-resolution branching and aggregation designs ensure sufficiently large receptive fields (contexts) while preserving high-resolution fine details, providing important clues on how to further optimize the architecture. There have been recent studies that start pointing NAS algorithms to segmentation tasks. AutoDeepLab (a) pioneered in this direction by searching the cells and the networklevel downsample rates, to flexibly control the spatial resolution changes throughout the network. and introduced resource constraints into NAS segmentation. A multi-scale decoder was also automatically searched. However, compared with manually designed architectures, those search models still follow a single-backbone design and did not fully utilize the prior wisdom (e.g., multi-resolution branches) in designing their search spaces. 
Lastly, we briefly review knowledge distillation , that aims to transfer learned knowledge from a sophisticated teacher network to a light-weight student, to improve the (more efficient) student's accuracy. For segmentation, Liu et al. (2019b) and proposed to leverage knowledge distillation to improve the accuracy of the compact model and speed-up convergence. There was no prior work in linking distillation with NAS yet, and we will introduce the extension of FasterSeg by integrating teacher-student model collaborative search for the first time. 3 FASTERSEG: FASTER REAL-TIME SEGMENTATION Our FasterSeg is discovered from an efficient and multi-resolution search space inspired by previous manual design successes. A fine-grained latency regularization is proposed to overcome the challenge of "architecture collapse" . We then extend our FasterSeg to a teacherstudent co-searching framework, further ing in a lighter yet more accurate student network. The core motivation behind our search space is to search multi-resolution branches with overall low latency, which has shown effective in previous manual design works (; a). Our NAS framework automatically selects and aggregates branches of different resolutions, based on efficient cells with searchable superkernels. Figure 1: The multi-resolution branching search space for FasterSeg, where we aim to optimize multiple branches with different output resolutions. These outputs are progressively aggregated together in the head module. Each cell is individually searchable and may have two inputs and two outputs, both of different downsampling rates (s). Inside each cell, we enable searching for expansion ratios within a single superkernel. . Bottom: BiSeNet (a) Inspired by (a), we enable searching for spatial resolutions within the L-layer cells (Figure 1), where each cell takes inputs from two connected predecessors and outputs two feature maps of different resolutions. Hand-crafted networks for real-time segmentation found multi-branches of different resolutions to be effective (; a). However, architectures explored by current NAS algorithms are restricted to a single backbone. Our goal is to select b (b > 1) branches of different resolutions in this L-layer framework. Specifically, we could choose b different final output resolutions for the last layer of cells, and decode each branch via backtrace (section 3.4). This enables our NAS framework to explore b individual branches with different resolutions, which are progressively "learned to be aggregated" by the head module (Figure 1). We follow the convention to increase the number of channels at each time of resolution downsampling. To enlarge the model capacity without incurring much latency, we first downsample the input image to 1 8 original scale with our stem module, and then set our searchable downsample rates s ∈ {8, 16, 32}. Figure 2 shows that our multi-resolution search space is able to cover existing human-designed networks for real-time segmentation. See Appendix B for branch selection details. As we aim to boost the inference latency, the speed of executing an operator is a direct metric (rather than indirect metrics like FLOPs) for selecting operator candidates O. Meanwhile, as we previously discussed, it is also important to ensure sufficiently large receptive field for spatial contexts. 
We analyze typical operators, including their common surrogate latency measures (FLOPs, parameter numbers), and their real-measured latency on an NVIDIA 1080Ti GPU with TensorRT library, and their receptive fields, as summarized in Table 1. Compared with standard convolution, group convolution is often used for reducing FLOPs and number of parameters . Convolving with two groups has the same receptive field with a standard convolution but is 13% faster, while halving the parameter amount (which might not be preferable as it reduces the model learning capacity). Dilated convolution has an enlarged receptive field and is popular in dense predictions ). However, as shown in Table 1 (and as widely acknowledged in engineering practice), dilated convolution (with dilation rate 2) suffers from dramatically higher latency, although that was not directly reflected in FLOPs nor parameter numbers. In view of that, we design a new variant called "zoomed convolution", where the input feature map is sequentially processed with bilinear downsampling, standard convolution, and bilinear upsampling. This special design enjoys 40% lower latency and 2 times larger receptive field compared to standard convolution. Our search space hence consists of the following operators: • skip connection • 3×3 conv. • 3×3 conv. ×2 • "zoomed conv.": bilinear downsampling + 3×3 conv. + bilinear upsampling • "zoomed conv. ×2": bilinear downsampling + 3×3 conv. ×2 + bilinear upsampling As mentioned by , network fragmentation can significantly hamper the degree of parallelism, and therefore practical efficiency. Therefore, we choose a sequential search space (rather than a directed acyclic graph of nodes), i.e., convolutional layers are sequentially stacked in our network. In Figure 1, each cell is differentiable 2019a) and will contain only one operator, once the discrete architecture is derived (section 3.4). It is worth noting that we allow each cell to be individually searchable across the whole search space. We further give each cell the flexibility to choose different channel expansion ratios. In our work, we search for the width of the connection between successive cells. That is however non-trivial due to the exponentially possible combinations of operators and widths. To tackle this problem, we propose a differentiably searchable superkernel, i.e., directly searching for the expansion ratio χ within a single convolutional kernel which supports a set of ratios X ⊆ N +. Inspired by (c) and , from slim to wide our connections incrementally take larger subsets of input/output dimensions from the superkernel. During the architecture search, for each superkernel, only one expansion ratio is sampled, activated, and back-propagated in each step of stochastic gradient descent. This design contributes to a simplified and memory-efficient super network and is implemented via the renowned "Gumbel-Softmax" trick (see Appendix C for details). To follow the convention to increase the number of channels as resolution downsampling, in our search space we consider the width = χ × s, where s ∈ {8, 16, 32}. We allow connections between each pair of successive cells flexibly choose its own expansion ratio, instead of using a unified single expansion ratio across the whole search space. Denote the downsample rate as s and layer index as l. 
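A minimal PyTorch sketch of the zoomed convolution described above (bilinear 2x downsampling, a standard 3x3 convolution, bilinear upsampling back to the input size); the channel counts and the BatchNorm/ReLU placement are our assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ZoomedConv(nn.Module):
    # Zoomed convolution: downsample, convolve at half resolution, upsample.
    # Convolving at half resolution roughly doubles the effective receptive
    # field relative to a standard 3x3 convolution while lowering latency.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        h, w = x.shape[-2:]
        x = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
        x = F.relu(self.bn(self.conv(x)))
        return F.interpolate(x, size=(h, w), mode='bilinear', align_corners=False)

y = ZoomedConv(32, 32)(torch.randn(1, 32, 64, 128))   # output keeps the input spatial size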
To facilitate the search of spatial resolutions, we connect each cell with two possible predecessors' outputs with different downsample rates: Each cell could have at most two outputs with different downsample rates into its successors: The expansion ratio χ j s,l is sampled via "Gumbel-Softmax" trick according to p(χ = χ j s,l) = γ j s,l. Here, α, β, and γ are all normalized scalars, associated with each operator O k ∈ O, each predecessor's output O l−1, and each expansion ratio χ ∈ X, respectively (Appendix D). They encode the architectures to be optimized and derived. Low latency is desirable yet challenging to optimize. Previous works (; observed that during the search procedure, the supernet or search policy often fall into bad "local minimums" where the generated architectures are of extremely low latency but with poor accuracy, especially in the early stage of exploration. In addition, the searched networked tend to use more skip connections instead of choosing low expansion ratios . This problem is termed as "architecture collapse" in our paper. The potential reason is that, finding architectures with extremely low latency (e.g. trivially selecting the most light-weight operators) is significantly easier than discovering meaningful compact architectures of high accuracy. To address this "architecture collapse" problem, we for the first time propose to leverage a fine-grained, decoupled latency regularization. We first achieve the continuous relaxation of latency similar to the cell operations in section 3.1.4, via replacing the operator O in Eqn. 1 and 2 with the corresponding latency. We build a latency lookup table that covers all possible operators to support the estimation of the relaxed latency. Figure 3 demonstrates the high correlation of 0.993 between the real and estimated latencies (see details in appendix E). We argue that the core reason behind the "architecture collapse" problem is the different sensitivities of supernet to operator O, downsample rate s, and expansion ratio χ. Operators like "3×3 conv. ×2" and "zoomed conv." have a huge gap in latency. Similar latency gap (though more moderate) exists between slim and wide expansion ratios. However, downsample rates like "8" and "32" do not differ much, since resolution downsampling also brings doubling of the number of both input and output channels. We quantitatively compared the influence of O, s, and χ towards the supernet latency, by adjusting one of the three aspects and fixing the other two. Taking O as the example, we first uniformly initialize β and γ, and calculate ∆Latency(O) as the gap between the supernet which dominantly takes the slowest operators and the one adopts the fastest. Similar calculations were performed for s and χ. Values of ∆Latency in Table 2 indicate the high sensitivity of the supernet's latency to operators and expansion ratios, while not to resolutions. Figure 4 (a) shows that the unregularized latency optimization will bias the supernet towards light-weight operators and slim expansion ratios to quickly minimize the latency, ending up with problematic architectures with low accuracy. Based on this observation, we propose a regularized latency optimization leveraging different granularities of our search space. We decouple the calculation of supernet's latency into three granularities of our search space (O, s, χ), and regularize each aspect with a different factor: where we by default set w 1 = 0.001, w 2 = 0.997, w 3 = 0.002 1. 
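Before the per-granularity weights are applied, the relaxed latency itself is estimated from the operator lookup table described above; a minimal sketch follows, assuming (as stated for the sequential search space) that cell latencies simply accumulate. The lookup values and variable names are ours, not measurements from the paper.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def relaxed_cell_latency(alpha, op_latency_ms):
    # Continuous relaxation of one cell's latency: each candidate operator is
    # replaced by its looked-up latency and weighted by softmax(alpha),
    # mirroring how operator outputs are mixed during the search.
    return float(softmax(alpha) @ np.asarray(op_latency_ms))

def relaxed_supernet_latency(alphas, lookup):
    # The sequential supernet's latency is the accumulation of the relaxed
    # latencies of its cells; lookup[l] holds per-operator latencies for cell l.
    return sum(relaxed_cell_latency(a, lookup[l]) for l, a in enumerate(alphas))

alphas = [np.zeros(5) for _ in range(16)]          # 16 cells, 5 candidate operators
lookup = [[0.16, 0.41, 0.81, 0.25, 0.49]] * 16     # made-up per-operator latencies (ms)
print(relaxed_supernet_latency(alphas, lookup))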
This decoupled and fine-grained regularization successfully addresses this "architecture collapse" problem, as shown in Figure 4 Knowledge Distillation is an effective approach to transfer the knowledge learned by a large and complex network (teacher T) into a much smaller network (student S). In our NAS framework, we can seamlessly extend to teacher-student cosearching, i.e., collaboratively searching for two networks in a single run (Figure 5). Specifically, we search a complex teacher and light-weight student simultaneously via adopting two sets of architectures in one supernet: (α T, β T) and (α S, β S, γ S). Note that the teacher does not search the expansion ratios and always select the widest one. This extension does not bring any overhead in memory usage or size of supernet since the teacher and student share the same supernet weights W during the search process. Two sets of architectures are iteratively optimized during search (please see details in Appendix F), and we apply the latency constraint only on the student, not on the teacher. Therefore, our searched teacher is a sophisticated network based on the same search space and supernet weights W used by the student. During training from scratch, we apply a distillation loss from teacher T to student S: ) KL denotes the KL divergence. q s i and q t i are predicted logit for pixel i from S and T, respectively. Equal weights (1.0) are assigned to the segmentation loss and this distillation loss. Once the search is completed, we derive our discrete architecture from α, β, and γ: • α, γ: We select the optimum operators and expansion ratios by taking the argmax of α and γ. We shrink the operator "skip connection" to obtain a shallower architecture with less cells. • β: Different from (a), for each cell s,l we consider β 0 and β 1 as probabilities of two outputs from cell s 2,l−1 and cell s,l−1 into cell s,l. Therefore, by taking the l * = argmax l (β 0 s,l), we find the optimum position (cell s,l *) where to downsample the current resolution (It is worth noting that, the multi-resolution branches will share both cell weights and feature maps if their cells are of the same operator type, spatial resolution, and expansion ratio. This design contributes to a faster network. Once cells in branches diverge, the sharing between the branches will be stopped and they become individual branches (See Figure 6). 1 These values are obtained by solving equations derived from Table 2 in order to achieve balanced sensitivities on different granularities: 10.42 × w1 = 0.01 × w2 = 5.54 × w1, s.t. w1 + w2 + w3 = 1. 2 For a branch with two searchable downsampling positions, we consider the argmax over the joint proba- We use the Cityscapes as a testbed for both our architecture search and ablation studies. After that, we report our final accuracy and latency on Cityscapes, CamVid , and BDD (b). In all experiments, the class mIoU (mean Intersection over Union per class) and FPS (frame per second) are used as the metrics for accuracy and speed, respectively. Please see Appendix G for dataset details. In all experiments, we use Nvidia Geforce GTX 1080Ti for benchmarking the computing power. We employ the high-performance inference framework TensorRT v5.1.5 and report the inference speed. During this inference measurement, an image of a batch size of 1 is first loaded into the graphics memory, then the model is warmed up to reach a steady speed, and finally, the inference time is measured by running the model for six seconds. 
All experiments are performed under CUDA 10.0 and CUDNN V7. Our framework is implemented with PyTorch. The search, training, and latency measurement codes are available at https://github.com/TAMU-VITA/FasterSeg. We consider a total of L = 16 layers in the supernet and our downsample rate s ∈ {8, 16, 32}. In our work we use number of branches b = 2 by default, since more branches will suffer from high latency. We consider expansion ratio χ s,l ∈ X = {4, 6, 8, 10, 12} for any "downsample rate" s and layer l. The multi-resolution branches have 1695 unique paths. For cells and expansion ratios, we have (1 + 4 × 5) (15+14+13) + 5 3 ≈ 3.4 × 10 55 unique combinations. This in a search space in the order of 10 58, which is much larger and challenging, compared with preliminary studies. Architecture search is conducted on Cityscapes training dataset. Figure 6 visualizes the best spatial resolution discovered (FasterSeg). Our FasterSeg achieved mutli-resolutions with proper depths. The two branches share the first three operators then diverge, and choose to aggregate outputs with downsample rates of 16 and 32. Operators and expansion ratios are listed in Table 7 in Appendix I, where the zoomed convolution is heavily used, suggesting the importance of low latency and large receptive field. We conduct ablation studies on Cityscapes to evaluate the effectiveness of our NAS framework. More specifically, we examine the impact of operators (O), downsample rate (s), expansion ratios (χ), and also distillation on the accuracy and latency. When we expand from a single backbone (b = 1) to multi-branches (b = 2), our FPS drops but we gain a much improvement on mIoU, indicating the multiresolution design is beneficial for segmentation task. By enabling the search for expansion ratios (χ), we discover a faster network with FPS 163.9 without sacrificing accuracy (70.5%), which proves that the searchable superkernel gets the benefit from eliminating redundant channels while maintaining high accuracy. This is our student network (S) discovered in our co-searching framework (see below). We further evaluate the efficacy of our teacher-student co-searching framework. After the collaboratively searching, we obtain a teacher architecture (T) and a student architecture (S). As mentioned above, S is searched with searchable expansion ratios (χ), achieving an FPS of 163.9 and an mIoU of 70.5%. In contrast, when we directly compress the teacher (channel pruning via selecting the slimmest expansion ratio) and train with distillation from the well-trained original cumbersome teacher, it only achieved mIoU = 66.1% with only FPS = 146.7, indicating that our architecture cosearching surpass the pruning based compression. Finally, when we adopt the knowledge distillation from the well-trained cumbersome teacher to our searched student, we boost the student's accuracy to 73.1%, which is our final network FasterSeg. This demonstrates that both a student discovered by co-searching and training with knowledge distillation from the teacher are vital for obtaining an accurate faster real-time segmentation model. In this section, we compare our FasterSeg with other works for real-time semantic segmentation on three popular scene segmentation datasets. Note that since we target on real-time segmentation, we measure the mIoU without any evaluation tricks like flipping, multi-scale, etc. 
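The search-space count quoted above (its expression is damaged in the text) can be sanity-checked numerically; the grouping below, 1 + 4 x 5 = 21 choices per cell (a skip connection, or one of 4 operators combined with 5 expansion ratios) over 15 + 14 + 13 = 42 searchable cells, combined with the 1695 branch paths, is our reading of the garbled formula.

cells = 15 + 14 + 13             # searchable cells across the three downsample rates
choices_per_cell = 1 + 4 * 5     # skip connection, or 4 operators x 5 expansion ratios
cell_combinations = choices_per_cell ** cells
print(f"{cell_combinations:.2e}")           # ~3.35e55, matching the quoted 3.4e55
print(f"{1695 * cell_combinations:.2e}")    # ~5.7e58, i.e. on the order of 10^58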
Table 4 (Cityscapes; columns: validation mIoU %, test mIoU %, FPS, input resolution): 67.7 / 69.5 / 37.7 at 1024×2048; BiSeNet (a) 69.0 / 68.4 / 105.8 at 768×1536; CAS 71.6 / 70.5 / 108.0 at 768×1536; Fast-SCNN 68.6 / 68.0 / 123.5 at 1024×2048; DF1-Seg-d8; FasterSeg (ours) 73.1 / 71.5 / 163.9 at 1024×2048. Cityscapes: We evaluate FasterSeg on the Cityscapes validation and test sets. We use the original image resolution of 1024×2048 to measure both mIoU and inference speed. In Table 4, we see the superior FPS (163.9) of our FasterSeg, even under the maximum image resolution. This high FPS is over 1.3× faster than human-designed networks. Meanwhile, our FasterSeg still maintains competitive accuracy, which is 73.1% on the validation set and 71.5% on the test set. This accuracy is achieved with only Cityscapes fine-annotated images, without using any extra data (coarse-annotated images, ImageNet, etc.). CamVid: We directly transfer the architecture searched on Cityscapes to train on CamVid. Table 5 reveals that without sacrificing much accuracy, our FasterSeg achieved an FPS of 398.1. This extremely high speed is over 47% faster than the closest competitor in FPS, BiSeNet (a), and is over two times faster than the work with the best mIoU. This impressive result verifies both the high performance of FasterSeg and also the transferability of our NAS framework. BDD: In addition, we also directly transfer the learned architecture to the BDD dataset. In Table 6 we compare our FasterSeg with the baseline provided by Yu et al. (2018b). Since no previous work has considered real-time segmentation on the BDD dataset, we compare against DRN-D-22 and find FasterSeg to be 15 times faster with slightly higher mIoU. Our FasterSeg still preserves its extremely fast speed and competitive accuracy on BDD. Table 5 (CamVid; columns: mIoU %, FPS): ENet 68.3 / 61.2; ICNet 67.1 / 27.8; BiSeNet (a) 65.6 / 269.1; CAS 71.2 / 169.0; FasterSeg (ours) 71.1 / 398.1. We introduced a novel multi-resolution NAS framework, leveraging successful design patterns in handcrafted networks for real-time segmentation. Our NAS framework can automatically discover FasterSeg, which achieved both extremely fast inference speed and competitive accuracy. Our search space is intrinsically low-latency and is much larger and more challenging due to flexible searchable expansion ratios. More importantly, we successfully addressed the "architecture collapse" problem by proposing a novel fine-grained regularized latency optimization. We also demonstrate that by seamlessly extending to teacher-student co-searching, our NAS framework can boost the student's accuracy via effective distillation. A STEM AND HEAD MODULE Stem: Our stem module aims to quickly downsample the input image to 1/8 resolution while increasing the number of channels. The stem module consists of five 3 × 3 convolution layers, where the first, second, and fourth layers are of stride two and double the number of channels. Head: As shown in Figure 1, the feature map of shape (C_2s × H × W) is first reduced in channels by a 1 × 1 convolution layer and bilinearly upsampled to match the shape of the other feature map (C_s × 2H × 2W). Then, the two feature maps are concatenated and fused together with a 3 × 3 convolution layer. Note that we do not necessarily have C_2s = 2C_s because of the searchable expansion ratios. Since our searchable downsample rates s ∈ {8, 16, 32} and the number of selected branches b = 2, our supernet needs to select branches of three possible combinations of resolutions: {8, 16}, {8, 32}, and {16, 32}. For each combination, branches of two resolutions will be aggregated by our head module.
Our supernet selects the best b branches based on the criterion used in: where m is a searched model aggregating b branches, with accuracy ACC(m) and latency LAT (m). w is the weight factor defined as: We empirically set α = β = −0.07 and the target latency T = 8.3 ms in our work. Formally, suppose we have our set of expansion ratios X ⊆ N +, and we want to sample one ratio χ from X. For each χ i we have an associated probability γ i, where helps us approximate differentiable sampling. We first sample a "Gumbel-Noise" ). We set the temperature parameter τ = 1 in our work. D NORMALIZED SCALARS α, β, γ α, β, and γ are all normalized scalars and implemented as softmax. They act as probabilities associating with each operator O k ∈ O, each predecessor's output O l−1, and each expansion ratio χ ∈ X, respectively: where s is downsample rate and l is index of the layer in our supernet. We build a latency lookup table that covers all possible situations and use this lookup table as building blocks to estimate the relaxed latency. To verify the continuous relaxation of latency, we randomly sample networks of different operators/downsample rates/expansion ratios out of the supernet M, and measured both the real and estimated latency. We estimate the network latency by accumulating all latencies of operators consisted in the network. In Figure 3, we can see the high correlation between the two measurements, with a correlation coefficient of 0.993. This accurate estimation of network latency benefits from the sequential design of our search space. Given our supernet M, the overall optimization target (loss) during architecture search is: We adopt cross-entropy with "online-hard-element-mining" as our segmentation loss L seg. Lat(M) is the continuously relaxed latency of supernet, and λ is the balancing factor. We set λ = 0.01 in our work. As the architecture α, β, and γ are now involved in the differentiable computation graph, they can be optimized using gradient descent. Similar in (a), we adopt the first-order approximation , randomly split our training dataset into two disjoint sets trainA and trainB, and alternates the optimization between: H ARCHITECTURE SEARCH IMPLEMENTATIONS As stated in the second line of Eqn. 2, a stride 2 convolution is used for all s → 2s connections, both to reduce spatial size and double the number of filters. Bilinear upsampling is used for all upsampling operations. We conduct architecture search on the Cityscapes dataset. We use 160 × 320 random image crops from half-resolution (512 × 1024) images in the training set. Note that the original validation set or test set is never used for our architecture search. When learning network weights W, we use SGD optimizer with momentum 0.9 and weight decay of 5×10 −4. We used the exponential learning rate decay of power 0.99. When learning the architecture parameters α, β, andγ, we use Adam optimizer with learning rate 3×10 −4. The entire architecture search optimization takes about 2 days on one 1080Ti GPU. In Table 7 we list the operators (O) and expansion ratios (χ) selected by our FasterSeg. The downsample rates s in Table 7 and Figure 6 match. We have the number of output channels c out = s × χ. We observed that the zoomed convolution is heavily used, suggesting the importance of low latency and large receptive field.
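A minimal sketch of the Gumbel-Softmax sampling of an expansion ratio described in this appendix (temperature tau = 1); returning both the soft relaxation and the single hard choice activated in the forward pass reflects our assumption about how the selection is wired, and the logits in the usage line are placeholders.

import numpy as np

def sample_expansion_ratio(gamma_logits, ratios=(4, 6, 8, 10, 12), tau=1.0, rng=np.random):
    # gamma_logits: architecture parameters gamma associated with the ratios.
    g = -np.log(-np.log(rng.uniform(size=len(ratios))))   # Gumbel(0, 1) noise
    y = (np.asarray(gamma_logits) + g) / tau
    soft = np.exp(y - y.max())
    soft = soft / soft.sum()                               # relaxed one-hot over ratios
    return soft, ratios[int(soft.argmax())]                # soft weights, hard choice

soft, chi = sample_expansion_ratio(np.log([0.1, 0.2, 0.4, 0.2, 0.1]))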
We present a real-time segmentation model automatically discovered by a multi-scale NAS framework, achieving 30% faster than state-of-the-art models.
1,715
scitldr
This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration. Reinforcement learning from demonstrations has proven to be an effective strategy for attacking problems that require sample efficiency and involve hard exploration. For example,, and Salimans and Chen (2018b) have shown that RL with demonstrations can address the hard exploration problem in Montezuma's Revenge. Večerík et al., and have demonstrated similar in robotics. Many other works have shown that demonstrations can accelerate learning and address hard-exploration tasks (e.g. see ; ;). In this paper, we attack the problem of learning from demonstrations in hard exploration tasks in partially observable environments with highly variable initial conditions. These three aspects together conspire to make learning challenging: 1. Sparse rewards induce a difficult exploration problem, which is a challenge for many state of the art RL methods. An environment has sparse reward when a non-zero reward is only seen after taking a long sequence of correct actions. Our approach is able to solve tasks where standard methods run for billions of steps without seeing a single non-zero reward. 2. Partial observability forces the use of memory, and also reduces the generality of information provided by a single demonstration, since trajectories cannot be broken into isolated transitions using the Markov property. An environment has partial observability if the agent can only observe a part of the environment at each timestep. 3. Highly variable initial conditions (i.e. changes in the starting configuration of the environment in each episode) are a big challenge for learning from demonstrations, because the demonstrations can not account for all possible configurations. When the initial conditions are fixed it is possible to be extremely efficient through tracking ); however, with a large variety of initial conditions the agent is forced to generalize over environment configurations. Generalizing between different initial conditions is known to be difficult . Our approach to these problems combines demonstrations with off-policy, recurrent Q-learning in a way that allows us to make very efficient use of the available data. In particular, we vastly outperform behavioral cloning using the same set of demonstrations in all of our experiments. Another desirable property of our approach is that our agents are able to learn to outperform the demonstrators, and in some cases even to discover strategies that the demonstrators were not aware of. In one of our tasks the agent is able to discover and exploit a bug in the environment in spite of all the demonstrators completing the task in the intended way. Learning from a small number of demonstrations under highly variable initial conditions is not straight-forward. We identify a key parameter of our algorithm, the demo-ratio, which controls the proportion of expert demonstrations vs agent experience in each training batch. This hyper-parameter has a dramatic effect on the performance of the algorithm. 
Surprisingly, we find that the optimal demo ratio is very small (but non-zero) across a wide variety of tasks. The mechanism our agents use to efficiently extract information from expert demonstrations is to use them in a way that guides (or biases) the agent's own autonomous exploration of the environment. Although this mechanism is not obvious from the algorithm construction, our behavioral analysis confirms the presence of this guided exploration effect. To demonstrate the effectiveness of our approach we introduce a suite of tasks (which we call the Hard-Eight suite) that exhibit our three targeted properties. The tasks are set in a procedurally-generated 3D world, and require complex behavior (e.g. tool use, long-horizon memory) from the agent to succeed. The tasks are designed to be difficult challenges in our targeted setting, and several state of the art methods (themselves ablations of our approach) fail to solve them. The main contributions of this paper are, firstly we design a new agent that makes efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions. Secondly, we provide an analysis of the mechanism our agents use to exploit information from the demonstrations. Lastly, we introduce a suite of eight tasks that support this line of research. Figure 1: The R2D3 distributed system diagram. The learner samples batches that are a mixture of demonstrations and the experiences the agent generates by interacting with the environment over the course of training. The ratio between demos and agent experiences is a key hyper-parameter which must be carefully tuned to achieve good performance. We propose a new agent, which we refer to as Recurrent Replay Distributed DQN from Demonstrations (R2D3). R2D3 is designed to make efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions. This section gives an overview of the agent, and detailed pseudocode can be found in Appendix A. The architecture of the R2D3 agent is shown in Figure 1. There are several actor processes, each running independent copies of the behavior against an instance of the environment. Each actor streams its experience to a shared agent replay buffer, where experience from all actors is aggregated and globally prioritized ) using a mixture of max and mean of the TD-errors with priority exponent η = 1.0 as in. The actors periodically request the latest network weights from the learner process in order to update their behavior. In addition to the agent replay, we maintain a second demo replay buffer, which is populated with expert demonstrations of the task to be solved. Expert trajectories are also prioritized using the scheme of. Maintaining separate replay buffers for agent experience and expert demonstrations allows us to prioritize the sampling of agent and expert data separately. The learner process samples batches of data from both the agent and demo replay buffers simultaneously. A hyperparameter ρ, the demo ratio, controls the proportion of data coming from expert demonstrations versus from the agent's own experience. The demo ratio is implemented at a batch level by randomly choosing whether to sample from the expert replay buffer independently for each element with probability ρ. Using a stochastic demo ratio in this way allows us to target demo ratios that are smaller than the batch size, which we found to be very important for good performance. 
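A minimal sketch of the stochastic demo ratio described above: each of the B elements of a training batch is independently drawn from the demonstration replay with probability rho, so the expected expert fraction per batch equals rho even when rho < 1/B. The replay objects and their sample() method are placeholders, not the actual prioritized implementation.

import numpy as np

def sample_training_batch(demo_replay, agent_replay, batch_size, demo_ratio, rng=np.random):
    # Per-element Bernoulli(rho) choice between the two replay buffers.
    from_demo = rng.uniform(size=batch_size) < demo_ratio
    batch = [demo_replay.sample() if d else agent_replay.sample() for d in from_demo]
    return batch, from_demo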
The objective optimized by the learner makes use of n-step double Q-learning (with n = 5) and a dueling architecture. In addition to performing network updates, the learner is also responsible for pushing updated priorities back to the replay buffers. In each replay buffer, we store fixed-length (m = 80) sequences of (s,a,r) tuples where adjacent sequences overlap by 40 time-steps. The sequences never cross episode boundaries. Given a single batch of trajectories we unroll both online and target networks on the same sequence of states to generate value estimates, with the recurrent state initialized to zero. Proper initialization of the recurrent state would require always replaying episodes from the beginning, which would add significant complexity to our implementation. As an approximation of this we treat the first 40 steps of each sequence as a burn-in phase, and apply the training objective to the final 40 steps only. An alternative approximation would be to store stale recurrent states in replay, but we did not find this to improve performance over zero initialization with burn-in. Figure 2: The Hard-Eight task suite. In each task an agent must interact with objects in its environment in order to gain access to a large apple that provides reward. The 3D environment is also procedurally generated so that every episode the state of the world, including object shapes, colors, and positions, is different. From the point of view of the agent the environment is partially observed. Because it may take hundreds of low-level actions to collect an apple, the reward is sparse, which makes exploration difficult.
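A minimal sketch of the n-step double Q-learning target used by the learner (n = 5), written for a single transition; the discount value is a placeholder, and the dueling decomposition, prioritization and sequence burn-in described above are omitted.

import numpy as np

def n_step_double_q_target(rewards, q_online_next, q_target_next, gamma=0.99, n=5):
    # rewards: r_t ... r_{t+n-1}; q_online_next / q_target_next: action values at
    # s_{t+n} from the online and target networks.  The online network selects
    # the bootstrap action and the target network evaluates it (double Q-learning).
    assert len(rewards) == n
    a_star = int(np.argmax(q_online_next))
    n_step_return = sum(gamma ** k * r for k, r in enumerate(rewards))
    return n_step_return + gamma ** n * q_target_next[a_star]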
Another hallmark of hard-exploration benchmarks is that they tend to be fully-observable and exhibit little variation between episodes. Nevertheless, techniques like random no-ops and "sticky actions" have been proposed to artificially increase episode variance in Atari , an alternative is to instead consider domains with inherent variability. Other recent work on the Obstacle Tower challenge domain is similar to our task suite in this regard. Reliance on determinism of the environment is one of the chief criticisms of imitation leveled by , who offers a valuable critique on, and Salimans and Chen (2018a). In contrast, our approach is able to solve tasks with substantial per-episode variability. GAIL is another imitation learning method, however GAIL has never been successfully applied to complex partially observable environments that require memory. Even the maze task inŻołna et al. has distinguishable rooms, uses a single layout across all episodes, and as a does not require a recurrent policy or discriminator. High-level steps necessary to solve the Baseball task. Each step in this sequence must be completed in order, and must be implemented by the agent as a sequence of low level actions (no option structure is available to the agent). The necessity of completing such a long sequence of high level steps makes it unlikely that the task will ever be solved by random exploration. Note each step involves interaction with physical objects, shown in bold. To address the difficulty of hard exploration in partially observable problems with highly variable initital conditions we introduce a collection of eight tasks, which exhibit these properties. Due to the generated nature of these tasks and the rich form of interaction between the agent and environment, we see greatly increased levels of variability between episodes. From the perspective of the learning process, these tasks are particularly interesting because just memorizing an open loop sequence of actions is unlikely to achieve even partial success on a new episode. The nature of interaction with the environment combined with a limited field of view also necessitates the use of memory in the agent. All of the tasks in the Hard-Eight task suite share important common properties that make them hard exploration problems. First, each task emits sparse rewards-in all but one task the only positive instantaneous reward obtained also ends the episode. The visual observations in each task are also first-person and thus the state of the world is only ever partially observed. Several of the tasks are constructed to ensure that that it is not possible to observe all task relevant information simultaneously. Finally, each task is subject to a highly variable initial conditions. This is accomplished by including several procedural elements, including colors, shapes and configurations of task relevant objects. The procedural generation ensures that simply copying the actions from a demonstration is not sufficient for successful execution, which is a sharp contrast to the the case of Atari . A more detailed discussion of these aspects can be found in Appendix B and videos of agents and humans performing these tasks can be found at https://bit.ly/2mAAUgg. Each task makes use of a standardized avatar with a first-person view of the environment, controlled by the same discretized action space consisting of 46 discrete actions. In all tasks the agent is rewarded for collecting apples and often this is the only reward obtained before the episode ends. 
A depiction of each task is shown in Figure 2. A description of the procedural elements and filmstrip of a successful episode for each task is provided in Appendix B. Each of these tasks requires the agent to complete a sequence of high-level steps to complete the task. An example from the task suite is shown in Figure 3. The agent must: find the bat, pick up the bat, knock the ball off the plinth, pick up the ball, activate the sensor with the ball (opening the door), walk through the door, and collect the large apple. In this section we discuss the baselines and ablations we use to compare against our R2D3 agent in the experiments. We compare to Behavior Cloning (a common baseline for learning from demonstrations) as well as two ablations of our method which individually remove either recurrence or demonstrations from R2D3. The two ablations correspond to two different state of the art methods from the literature. Behavior Cloning BC is a simple and common baseline method for learning policies from demonstrations . This algorithm corresponds to a supervised Frames of size 96x72 are fed into a ResNet, the output is then augmented by concatenating the previous action at−1, previous reward rt−1, and other proprioceptive features ft, such as accelerations, whether the avatar hand is holding an object, and the hand's relative distance to the avatar. learning approach to imitation learning which uses only expert trajectories as its training dataset to fit a parameterized policy mapping states to actions. For discrete actions this corresponds to a classification task, which we fit using the cross-entropy loss. If the rewards of trajectories in the training dataset are consistently high, BC is known to outperform recent batch-RL methods . To enable fair comparison we trained our BC agent using the same recurrent neural network architecture that we used for our R2D3 algorithm (see Figure 4). The first ablation we consider is to remove demonstrations from R2D3. This corresponds to setting the demo ratio (see Figure 1) to ρ = 0. This special case of R2D3 corresponds exactly to the R2D2 agent of , which itself extends DQN to partially observed environments by combining it with recurrence and the distributed training architecture of Ape-X DQN. This ablation is itself state of the art on Atari-57 and DMLab-30, making it an extremely strong baseline. No Recurrence The second ablation we consider is to replace the recurrent value function of R2D3 with a feed-forward reactive network. We do this separately from the no demonstrations ablation, leaving the full system in Figure 1 in tact, with only the structure of the network changed. If we further fix the demo ratio to ρ = 0.25 then this ablation corresponds to the DQfD agent of , which is competitive on hard-exploration Atari environments such as Montezuma's Revenge. However, we do not restrict ourselves to ρ = 0.25, and instead optimize over the demo ratio for the ablation as well as for our main agent. We evaluate the performance of our R2D3 agent alongside state-of-the-art deep RL baselines. As discussed in Section 5, we compare our R2D3 agent to BC (standard LfD baseline) R2D2 (off-policy SOTA), DQfD (LfD SOTA). We use our own implementations for all agents, and we plan to release code for all agents including R2D3. For each task in the Hard-Eight suite, we trained R2D3, R2D2, and DQfD using 256 -greedy CPUbased actors and a single GPU-based learner process. 
Following prior work, the i-th actor was assigned a distinct noise parameter ε_i ∈ [0.4^8, 0.4], where the ε_i are regularly spaced in log 0.4 space. For each of the algorithms their common hyperparameters were held fixed. Additionally, for R2D3 and DQfD the demo ratio was varied to study its effect. For BC we also varied the learning rate independently in a vain attempt to find a successful agent. All agents act in the environment with an action-repeat factor of 2, i.e. the actions received by the environment are repeated twice before passing the observation to the agent. Using an action repeat of 4 is common in other domains like Atari; however, we found that using an action repeat of 4 made the Hard-Eight tasks too difficult for our demonstrators. Using an action repeat of 2 allowed us to strike a compromise between ease of demonstration (high action repeats prohibiting smooth and intuitive motion) and ease of learning for the agents (low action repeats increase the number of steps required to complete the task). Figure 5: Reward vs actor steps curves for R2D3 and baselines on the Hard-Eight task suite. The curves are computed as the mean performance for the same agent across 5 different seeds per task. Error regions show the 95% confidence interval for the mean reward across seeds. Several curves overlap exactly at zero reward for the full range of the plots. R2D3 can perform human-level or better on Baseball, Drawbridge, Navigate Cubes and Wall Sensor. R2D2 could not get any positive rewards on any of the tasks. DQfD and BC agents occasionally see rewards on Drawbridge and Navigate Cubes tasks, but this happens rarely enough that the effect is not visible in the plots. Indicators mark analysis points in Section 6.3. Figure 4 illustrates the neural network architecture of the different agents. As much as possible we use the same network architecture across all agents, deviating only for DQfD, where the recurrent head is replaced with an equally sized feed-forward layer. We briefly outline the training setup below, and give an explicit enumeration of the hyperparameters in Appendix C. For R2D3, R2D2 and DQfD we use the Adam optimizer with a fixed learning rate of 2×10^−4. We use hyperparameters that are shown to work well for similar environments. We use distributed training with 256 parallel actors, trained for at least 10 billion actor steps for all tasks. For the BC agent the training regime is slightly different, since this agent does not interact with the environment during training. For BC we also use the Adam optimizer but we additionally perform a hyperparameter sweep over learning rates {10^−5, 10^−4, 10^−3}. Since there is no notion of actor steps in BC we trained for 500k learner steps instead. During the course of training, an evaluator process periodically queries the learner process for the latest network weights and runs the resulting policy on an episode, logging both the final return and the total number of steps (actor or learner steps, as appropriate) performed at the time of evaluation. We collected a total of 100 demonstrations for each task spread across three different experts (each expert contributed roughly one third of the demonstrations for each task). Demonstrations for the tasks were collected using keyboard and mouse controls mapped to the agent's exact action space, which was necessary to enable both behaviour cloning and learning from demonstrations. We show statistics related to the human demonstration data which we collected from three experts in Table 1.
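A minimal sketch of that per-actor noise schedule, assuming 256 actors and reading the range as ε_i ∈ [0.4^8, 0.4] regularly spaced in log space; the helper name and the exact spacing formula are our assumptions about the garbled text.

import numpy as np

def actor_epsilons(num_actors=256, eps_base=0.4, max_power=8):
    # Exponents are evenly spaced, so the epsilons are regularly spaced in log
    # space: a few actors explore heavily, most act nearly greedily.
    return eps_base ** np.linspace(1.0, max_power, num_actors)

eps = actor_epsilons()
print(eps[0], eps[-1])   # 0.4 and 0.4**8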
In Figure 5, we report the return against the number of actor steps, averaged over five random initializations. We find that none of the baselines succeed in any of the eight environments. Meanwhile, R2D3 learns six out of the eight tasks, and reaches or exceeds human performance in four of them. The fact that R2D3 learns at all in this setting with only 100 demonstrations per task demonstrates the ability of the agent to make very efficient use of the demonstrations. This is in contrast to BC and DQfD which use the same demonstrations, and both fail to learn a single task from the suite. All methods, including R2D3, fail to solve two of the tasks: Remember Sensor and Throw Across. These are the two tasks in the suite that are most demanding in terms of memory requirements for the agent, and it is possible that our zero-initialization with burn-in strategy for handling LSTM states in replay does not give R2D3 sufficient context to complete these tasks successfully. Future work should explore the better handling of recurrent states as a possible avenue towards success on these tasks. R2D3, BC, and DQfD receive some negative returns on Remember Sensor, which indicates that the agents navigate down the hallway and walks over penalty sensors. R2D3 performed better than our average human demonstrator on Baseball, Drawbridge, Navigate Cubes and the Wall Sensor tasks. The behavior on Wall Sensor Stack in particular is quite interesting. On this task R2D3 found a completely different strategy than the human demonstrators by exploiting a bug in the implementation of the environment. The intended strategy for this task is to stack two blocks on top of each other so that one of them can remain in contact with a wall mounted sensor, and this is the strategy employed by the demonstrators. However, due to a bug in the environment the strategy learned by R2D3 was to trick the sensor into remaining active even when it is not in contact with the key by pressing the key against it in a precise way. In light of the uniform failure of our baselines to learn on the Hard-Eight suite we made several attempts at training other models on the task suite; however, these attempts were all unsuccessful. For example, we tried adding randomized prior functions to R2D2, but this approach was still unable to obtain reward on any of the Hard-Eight tasks. We also trained an IMPALA agent with pixel control as auxiliary reward to help with exploration, but this approach also failed to learn on any of the tasks we attempted. We omit these from Figure 5, only keeping the most relevant baselines. In our experiments on Hard-Eight tasks (see Figure 5), we did a hyperparameter search and chose the best hyperparameters for each method independently. In this section, we look more closely at how the demo ratio (ρ) affects learning in R2D3. To do this we look at how the success rate of R2D3 across the entire Hard-Eight task suite varies as a function of the demo ratio. The goal of each task in the Hard-Eight suite is to collect a large apple, which ends the episode and gives a large reward. We consider an episode successful if the large apple is collected. An agent that executes many episodes in the environment will either succeed or fail at each one. We consider an agent successful if, after training, at least 75% of its final 25 episodes are successful. Finally, an individual agent with a fixed set of hyperparameters may still succeed or fail depending on the randomness in the environment and the initialization of the agent. 
We train several R2D3 agents on each tractable task 1 in the Hard-Eight suite, varying only the demo ratio while keeping other hyperparameters fixed at the values used for the learning experiment. We consider four different demo ratios across six tasks, with five seeds for each task (120 trained agents). Figure 6 shows estimates of the success rate for the R2D3 algorithm for each different demo ratio, aggregated across all tasks. We observe that tuning the demo ratio has a strong effect on the success rate across the task suite, and that the best demo ratio is quite small. See Appendix D.3 for further . The typical strategy for exploration in RL is to either use a stochastic policy and sample actions, or to use a deterministic policy and take random actions some small fraction of the time. Given sufficient time both of these approaches will in theory cover the space of possible behaviors, but in practice the amount of time required to achieve this coverage can be prohibitively long. In this experiment, we compare the behavior of R2D3 to the behavior of R2D2 (which is equivalent to R2D3 without demonstrations) on two of the tasks from the Hard-Eight suite. Even very early in training (well before R2D3 is able to reliably complete the tasks) we see many more task-relevant actions from R2D3 than from R2D2, suggesting that the effect of demonstrations is to bias R2D3 towards exploring relevant parts of the environment. In Figure 7 we begin by examining the Push Blocks tasks. The task here is to push a particular block onto a sensor to give access to a large apple, and we examine the behavior of both R2D3 and R2D2 after 5B steps, which is long before R2D3 begins to solve the task with any regularity (see Figure 5). Looking at the distribution of spatial locations for the agents it is clear that R2D2 essentially diffuses randomly around the room, while R2D3 spends much more time in task-relevant parts of the environment (e.g. away from the walls). We also record the total distance traveled by the moveable blocks in the room, and find that R2D3 tends to move the blocks significantly more often than R2D2, even before it has learned to solve the task. In this paper, we introduced the R2D3 agent, which is designed to make efficient use of demonstrations to learn in partially observable environments with sparse rewards and highly variable initial conditions. We showed through several experiments on eight very difficult tasks that our approach is able to outperform multiple state of the art baselines, two of which are themselves ablations of R2D3. We also identified a key parameter of our algorithm, the demo ratio, and showed that careful tuning of this parameter is critical to good performance. Interestingly we found that the optimal demo ratio is surprisingly small but non-zero, which suggests that there may be a risk of overfitting to the demonstrations at the cost of generalization. For future work, we could investigate how this optimal demo ratio changes with the total number of demonstrations and, more generally, the distribution of expert trajectories relative to the task variability. We introduced the Hard-Eight suite of tasks and used them in all of our experiments. These tasks are specifically designed to be partially observable tasks with sparse rewards and highly variable initial conditions, making them an ideal testbed for showcasing the strengths of R2D3 in contrast to existing methods in the literature. 
Our behavioral analysis showed that the mechanism R2D3 uses to efficiently extract information from expert demonstrations is to use them in a way that guides (or biases) the agent's own autonomous exploration of the environment. An in-depth analysis of agent behavior on the Hard-Eight task suite is a promising direction for understanding how different RL algorithms make selective use of information. A R2D3 Below we include pseudocode for the full R2D3 agent. The agent consists first of a single learner process which samples from both demonstration and agent buffers in order to update its policy parameters. Inputs: replay of expert demonstrations D, replay of agent experiences R, batch size B, sequence length m, and number of actors A. Initialize policy weights θ. Initialize target policy weights θ ← θ. Launch A actors and replicate policy weights θ to each actor. for nsteps do Sample transition sequences (st:t+m,at:t+m,rt:t+m) from replay D with probability ρ or from replay R with probability (1−ρ), to construct a mini-batch of size B. Calculate loss using target network. Perform a gradient descent step to update θ. If t mod ttarget = 0, update the target policy weights θ ← θ. If t mod tactor = 0, replicate policy weights to the actors. end for The agent also consists of A parallel actor processes which interact with a copy of the environment in order to obtain data which is then inserted into the agent buffer. The agents periodically update their parameters to match those being updated on the learner. Sample action from behavior policy a ← π(s) Execute a and observe s and r Store (s,a,s,r) in R until learner finishes. Sparse rewards All of the tasks emit sparse rewards, indeed in all but one task the only positive instantaneous reward obtained also ends the episode successfully. In other words, for standard RL algorithms to learn by bootstrapping, the actors must first solve the task inadvertently, and must do so with no intermediate signal to guide them. Partial observability Visual observations are all first-person, which means that some relevant features of the state of the world may be invisible to the agent simply because they are behind it or around a corner. Some tasks (e.g. Remember Sensor, are explicitly designed so that this is the case). Highly Variable Initial Conditions Many of the elements of the tasks are procedurally generated, which leads to significant variability between episodes of the same task. In particular, the starting position and orientation of the agent are randomized and similarly, where they are present, the shapes, colors, and textures of various objects are randomly sampled from a set of available such features. Therefore a single (or small number of) demonstration(s) is not sufficient to guide an agent to solve the task as it is in the case of DQfD on Atari . Observation specification All of the tasks provide the same observation space. In particular, a visual channel consisting of 96 by 72 RGB pixels, as well as accelerations of the avatar, force applied by the avatar hand on the object, whether if the avatar is holding anything or not, and the distance of a held object from the face of the avatar (zero when there is no held object). Action specification The action space consists of four displacement and four rotation actions, duplicated for coarse and fine-grained movement as well as for movement with and without grasping. The avatar also has an invisible "hand" which can be used to manipulate objects in the environment. 
The location of the hand is controlled by the avatar gaze direction, plus an additional two actions that control the distance of the hand from the body. A grasped object can be manipulated by six rotation actions (two for each rotational degree of freedom; 40) as well as four additional actions controlling the distance of the hand from the body at coarse and fine speed. Finally there is an independent grasp action (to hold an object without moving), and a no-op action (total 46). Compared to course actions, fine-grained actions in slower movements, allowing the agent to perform careful manipulations. This section gives addition details on each task in our suite including a sequence frames from a successful task execution (performed by a human) and a list of the procedural elements randomized per episode. Videos of agents and humans performing these tasks can be found at https://bit.ly/2mAAUgg. The agent spawns in a small room with a sensor and a key object resting high atop a plinth. The agent must find a stick and use it to knock the key object of the plinth in order to activate the sensor. Activating the sensor opens a door to an adjoining room with a large apple which ends the episode. • Initial position and orientation of the agent • Wall, floor and object materials and colors • Initial position of the stick • Position of plinth The agent spawns at one end of a network of branching platforms separated by drawbridges, which can be activated by touching a key object to a sensor. Activating a drawbridge with a key object destroys the key. Each platform is connected to several drawbridges, but has only one key object available. Some paths through the level have small apples which give reward. The agent must choose the most rewarding path through the level to obtain a large apple at the end which ends the episode. • Initial position and orientation of the agent • Wall, floor, ceiling and object materials and colors • Positions of the small apples throughout the network of ledges The agent spawns on one side of a large room. On the other side of the room on a raised platform there is a large apple which ends the episode. Across the center of the room there is a wall of movable blocks. The agent must dig through the wall of blocks and find a ramp onto the goal platform in order to collect the large apple. • Initial position and orientation of the agent • Wall, floor and object materials and colors The agent spawns in a medium sized room with a recessed sensor in the floor. There are several objects in the room that can be pushed but not lifted. The agent must push a block whose color matches the sensor into the recess in order to open a door to an adjoining room which contains a large apple which ends the episode. Pushing a wrong object into the recess makes the level impossible to complete. • Initial position and orientation of the agent • Wall, floor, object materials and colors • Positions of the objects • Sensor required color The agent spawns near a sensor of a random color. The agent must travel down a long hallway to a room full of blocks and select one that matches the color of the sensor. Bringing the correct block back to the sensor allows access to a large apple which ends the episode. In addition to being far away, traveling between the hallway and the block room requires the agent to cross penalty sensors which incurs a small negative reward. 
• Initial position and orientation of the agent • Sensor required color • Number of objects in the block room • Position of objects in the block room • Shape and material of the objects in the block room The agent spawns in a U shaped room with empty space between the legs of the U. There are two key objects near the agent spawn point. The agent must throw one of the key objects across the void, and carry the other around the bottom of the U. Both key objects are needed to open two locked doors which then give access to a large apple which ends the episode. • Initial position and orientation of the agent • Wall, floor and object materials and colors • Color and material of the sensors • Initial positions of the two key objects The agent spawns in a small room with a wall mounted sensor and a key object. The agent must pick up the key and touch it to the sensor which opens a door. In the adjoining room there is a large apple which ends the episode. • Initial position and orientation of the agent • Position of the sensor • Position of the key object The agent spawns in a small room with a wall mounted sensor and two key objects. This time one of key objects must be in constant contact with the sensor in in order for the door to remain open. The agent must stack the two objects so one can rest against the sensor, allowing the agent to pass through to an adjoining room with a large apple which ends the episode. An important property of R2D3 is that although the agents are trained from demonstrations, the behaviors they achieve are able to surpass the skill of the demonstrations they were trained from. This can be seen quantitatively from reward curves in Figure 5, where the R2D3 agent surpasses the human baseline performance on four of the eight tasks (e.g. Baseball, Navigate Cubes, Wall Sensor and Wall Sensor Stack). In some of these cases the improved score is simply a matter of executing the optimal strategy more fluently than the demonstrators. For example, this is the case in the Baseball task, where the human demonstrators are handicapped by the fact that the human interface to the agent action space makes it awkward to rotate a held object. This makes picking up the stick and orienting it properly to knock the ball off the plinth into a tricky task for humans, but the agents are able to refine their behavior to be much more efficient (see Figure 8c). The behavior on Wall Sensor is especially interesting, however in this case the agents find a completely different strategy than the human demonstrators by exploiting a bug in the implementation of the environment. The intended strategy for this task is to stack two blocks on top of each other so that one of them can remain in contact with a wall mounted sensor, and this is the strategy employed by the demonstrators. However, due to a bug in the environment it is also possible to trick the sensor into remaining active even when it is not in contact with the key by pressing the key against it in a precise way. The R2D3 agents are able to discover this bug and exploit it, ing in superhuman scores on this task even though this strategy is not present in the demonstrations. We also ran a few additional experiments to get more information about the tasks we did not solve, or solved incorrectly. Videos for these experiments are available at https://bit.ly/2mAAUgg. Remember Sensor This task requires a long memory, and also has the longest episodes length of any task in the Hard Eight suite. 
In an attempt to mitigate these issues, we trained the agent using a higher action repeat which reduces the episode length, and used stale lstm states instead of zero lstm states which provides information from earlier in the episode. This allows R2D3 to learn policies that display reasonable behavior, retrieving a random block and bringing it back to the hallway. Using this method it can occasionally solve the task. Throw Across The demonstrations collected for this task had a very low success rate of 54%. We attempted to compensate for this by collecting an additional 30 demos. When we trained R2D3 with all 130 demos all seeds solved the task. Wall Sensor Stack The original Wall Sensor Stack environment had a bug that the R2D3 agent was able to exploit. We fixed the bug and verified the agent can learn the proper stacking behavior. In Figure 9, we show the performance of the R2D3 agents for each seed separately. On task such as Drawbridge, Navigate Cubes and Wall Sensor, all seeds take off quite rapidly and they have very low variance for the rewards between different seeds. However, on Wall Sensor Stack task while one seed takes off quite rapidly, and the rest of them are just flat. In Figure 10, we elaborate on Figure 6. For Baseball, Navigate Cubes, Push Blocks, and Wall Sensor Stack, a demo ratio of 1/256 works best. On Drawbridge and Wall Sensor all demo ratios are similarly effective. Figure 7 ). (a) Spatial pattern of exploration behavior for the R2D2 agent over the course of ∼12B steps of training. Each row shows a different random seed; the number of training steps increases from the leftmost column to the rightmost column. There is little variation in how the policy manifests as explorative behavior across seeds and training time. (b) As in (a), for R2D3. Given demonstrations, the policies now show substantial variation across seeds and training time.
We introduce R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions.
1,716
scitldr
We investigate the learned dynamical landscape of a recurrent neural network solving a simple task requiring the interaction of two memory mechanisms: long- and short-term. Our show that while long-term memory is implemented by asymptotic attractors, sequential recall is now additionally implemented by oscillatory dynamics in a transverse subspace to the basins of attraction of these stable steady states. Based on our observations, we propose how different types of memory mechanisms can coexist and work together in a single neural network, and discuss possible applications to the fields of artificial intelligence and neuroscience. Recurrent neural networks (RNN) are widely used to carry out tasks that require learning temporal dependencies across several scales. Training RNN's to perform such tasks offers its share of challenges, from well-known exploding and vanishing gradients, to the difficulties of storing, accessing, and forgetting memories BID10 BID1. Viewed as dynamical system, the activity structure of recurrent network state spaces can reveal how networks learn tasks, and can help guide training and architecture design. In this study, we perform a dynamical system analysis of a trained RNN on a simple tasks that requires two types of memory paradigms interact: short-term memory of past inputs and a delayed output during classification. While gating units found in LSTM BID11 and in a variety of other architectures (e.g., BID2 van der) directly aim at addressing these long-scale temporal learning issues, they are always used in conjunction with so-called "vanilla" recurrent units that shoulder the majority of computation. It is not yet well understood how internal network dynamics supported by such circuits combine information from external inputs to solve complex tasks that require remembering information from the past and delaying output changes. On one hand, attractor networks are a known solution to keep finite memories indefinitely BID5. On the other, orthogonal transformations (e.g., identity and rotations) are used to build explicit RNN solutions to recall tasks BID6 BID13 BID0. Indeed, for the well-studied copy task, where a sequence of symbols needs to be outputted after a long delay, it is known that the best solution is to use rotations to store the sequences, much like clocks that align at the time of recall BID3. However, it is unclear how attractor dynamics and orthogonal (rotational) transformations interact when a task requires both long term memory and sequential recall. We explore this situation here. Leveraging slow-point analysis techniques BID12, we uncover how short-term memory tasks with delayed outputs give rise to attractor dynamics with oscillatory transients in low-dimensional activity subspaces. Our uncovers how the boundaries of basins of attractions that are linked to memory attractors interact with transverse oscillatory dynamics to support timed, sequential computations of integrated inputs. This provides novel insights into dynamical strategies to solve complex temporal tasks with randomly connected recurrent units. Moreover, such transient oscillatory dynamics are consistent with periodic activity found throughout the brain BID9, and we discuss the impact of our findings on computations in biological circuits. In this paper we replicate and advance further the study of BID12 on the implementation of memory tasks in artificial RNNs. 
This simple analysis allows for careful examination of the inner workings of RNNs, which we leverage here to study the interaction of two distinct memory mechanisms in it with the introduction of delayed recall in a memory task. While the original study (i.e., BID12 used continuous time, we used a discrete-time network and trained it with standard optimization algorithms. This is in order to verify if the original findings translate well between the different types of network, and to explore network architectures that are more widely used in the machine learning community. Detailed description of our setup is provided in Section 3.We base our analysis on a densely connected single-layer RNN with a hyperbolic tangent activation function. Specifically, let X be the input signal, S the hidden state of the network and Y the output of the network. We use N = 100 neurons with input matrix W in, output matrix W out, and recurrent connection (i.e., used to update the hidden layer over time) matrix W. Then, the update rule for the hidden states and outputs of the network are defined as: DISPLAYFORM0 For training, we unfold the network in time, up to ten timesteps, and use the standard Adam BID7 optimization algorithm. Only the W and W out matrices were trained. See Appendix A for more details. We characterized the behavior of the neural circuit using tools from dynamical system theory, which enable us to identify internal states and transitions over time in the neural circuit. In particular, we use slow point analysis, developed in BID12, to find approximate fixed points. Formally, let S(t) be the state of the network at time t. We define a Lyapunov function q inspired by the physical formula for potential energy. We adapt this original approach to a discrete-time system as DISPLAYFORM1 The minima of this function indicate the fixed points of the system, and their types (e.g., sink or saddle point) can be determined by the number of positive eigenvalues of the Jacobian matrix at that point. We use a delayed 3-bit flip-flop task to illustrate our findings, as it requires the interaction of two types of memory mechanisms and lends itself to easily interpretable network dynamics. The network has three independent channels with an associated input (W in) and output (W out) neuron each. Each input neuron sends short spikes of magnitude 1 or -1 at random time intervals (Poisson distribution with homogeneous rate λ = 50) and is set at zero otherwise. The output channel must maintain the value of the last input spike sent by its associated input neuron. To study the interaction between long term memory and sequential recall, we add a time delay to this task. This Figure 1. Task definition. The neural network is expected to match and maintain the last input for each channel while preventing crosstalk. For the delayed task definition, the neural network must, in addition, delay its response (and therefore remember the input) during a certain number of time steps by ∆t, illustrated in grey.in turn requires the network to delay its change of internal state upon the reception of a novel input by a period ∆t. Representative examples of both tasks are demonstrated in Figure 1. We begin by replicating the observations from BID12 in our discrete-time setting. We tested the trained network during roughly a hundred thousand steps, during which the network receives approximately 2000 inputs, saving the neurons states and the network output at every step. 
We then analyzed the ing dynamics, identifying slow and fixed points with slow-point analysis, and projecting dynamics in the first three principal components (PC), i.e., using PCA.As we can see in figure 2a, we do observe the characteristic "cube" discovered in the original research. Here, each vertex represents a certain output state (since there are three binary channels, this means there are eight possible states), with adjacent vertices varying by a single output. The slow point analysis also revealed attractors at the corners of the cube described by the path of the states, saddle points with one unstable dimension on the edges, saddle points with two unstable dimensions on the faces and a saddle point with three unstable dimensions at the center. The attractors implement long-term memory by "trapping" trajectories in the absence of inputs. Each of the eight attractors encodes a specific output state. Saddle-points channel the dynamics during a switch of output state. Their associated stable subspaces form the separatrices between the basins of attraction of the stable fixed points on each side. The configuration of fixed points observed here is also consistent with the findings of BID12.We next turn to the trajectories produced by the network trained to perform the delayed task (see Figure 2b). The position of the "corner" attractors is fairly similar to that of the zero-delay case. However, the steady-state dynamics (a) (b) Figure 2. (a) Dynamics during 3000 time-steps without delay, projected in the first three principal components. Each point represents the network's state at a different time-step and is color-coded according to the effective output value at that time. In the absence of input, the network will maintain its position in phase-space at one of 8 stable fixed points, encoding the current binary combination of the 3 input channels. Injecting an input in the network induces a switch of the network towards different attracting states, and thus, produces distinct outputs. (b) Same as in (a) for dynamics with delayed output. The network must generate the same output as before but delay its output change for ∆t = 8 time-steps.surrounding these corners have changed considerably. One hint about the hidden dynamics implementing sequential recall is the "loops" protruding from the corners of the initial cube and disappearing with the addition of delay. Since we use a tanh activation function that limits outputs between 1 and -1, overshooting the attractor assures that the correct output is expressed immediately after receiving the input, while undershooting it would give a visible transient period. However, in the delayed task, inputs have to be treated in a subspace orthogonal to the output space since no output change must be expressed before the delay has elapsed. This means that the input pulse is now invisible in the output space. With these observations in mind, we would like to further investigate the different hidden dynamics brought by the added delay. Since standard PCA and slow-point analysis failed to sufficiently inform us about this aspect, we now turn to triggered activation averages described in Section 5. For simplicity, we focus at this point on characterizing a single type of transition in the network. Without loss of generality, we chose the transition from state (−1, −1, −1) to state (−1, −1, 1), and searched every instance during testing when the network had to perform this switch. 
We note that this choice of input switch is arbitrary but fixed, although other switches can also be analyzed in a similar manner. For each instance, we kept a trajectory of RNN states (i.e., hidden-layer activations) starting a few steps before the switch and ending a few steps after it. We then averaged these trajectories over instances, aligned at the switch time, in order to obtain a single short average trajectory (in the hidden-layer activation space) representing the mean activity surrounding the examined switch. PCA projection of this new signal is shown in FIG1.In FIG1, we see a decomposition akin to Fourier modes of a step function, with increasingly rapid oscillations occurring in one forth of a period phase shifted pairs. Here, individual neurons appear to lock into dynamical regimes associated to precise frequencies. From the way the cube is flattened in FIG1, we also conclude that the subspace where the rotational transformations are implemented is mostly orthogonal to the separatrices of the attractors corresponding to the different memorized outputs. A schematic of the transition is presented at figure 3a. We note that such rotating dynamics have indeed been observed previously when implementing sequential recall in artificial neural networks, and it is known that initializing RNNs with orthogonal rotation matrices helps solve sequential recall tasks BID3.The periodic nature of the observed signals motivated us to explore the eigenspectrum of the matrix W (see Figure 4). Indeed, when considering the complex eigenvalues of this matrix, their complex parts correspond to rotating speeds in different subspaces. For comparison, the network without delay has only three eigenvalues that distinguish themselves from the random cloud around zero, which is expected from random Gaussian matrices. These are on or close to the real axis, and slightly higher than one. Considering the hyperbolic tangent activation function applied at every time-step, this indicates three dimensions in which the state compo- The switch from output (-1,-1,-1) to output (-1,-1,1) for the delayed network was studied using a triggered average. The ing signal was then plotted according to the first principal components. For clarity, a cube is used to mark the approximate trajectory of neurons states during the test. Trajectory in the space spanned by the first three principal components. (c) Same trajectory but with the first 9 principal components represented.nents are maintained at about 1. Figure 4. Analysis of the eigenvalues of the recurrent connection matrix W for the studied RNNs. The eigenvalues of the network without delay is displayed to the left and the ones for the 8 timesteps delayed network to the right. In the delayed network, the rotating dynamics are clearly indicated by eigenvalues with large imaginary parts, in addition to the three real eigenvalues corresponding to the long term memory subspace. This additional information is consistent with our interpretation that oscillatory dynamics implement sequential recall, much like orthogonal RNNs do during simple recall tasks BID6 BID13 BID0 BID3. However, in this case they do so in a localized fashion to facilitate interaction with other computation mechanisms implemented by the network. We have seen in this study that long-term memory and sequential recall can be implemented by a simple RNN fairly easily, and in parallel, by acting on different subspaces of the RNN phase space. 
Specifically, sequential recall is achieved by rotational dynamics localized around the origin, which occur in a subspace orthogonal to the separatrices of the basins of attraction that solve the classification task. Our findings suggest that this population-level periodic activity may serve as a general "precision timing" mechanism that can be combined with distinct, learned computations. Indeed, oscillations enable the introduction of small delays, transverse to low dimensional activity of recurrent neural circuits. An interesting line of future work would be to investigate more thoroughly this mechanism in the presence of distinct computational tasks, such as character-level prediction, or arithmetic operations. We believe that learning a delayed recall in conjunction with any task will lead to generic, emergent oscillations that enable transient dynamics transverse to the subspaces used to perform other computations. It may be possible to leverage this geometric understanding for faster training by initializing networks in a way that promotes transverse rotations. Furthermore, this oscillatory mechanism is consistent with observations of oscillatory dynamics in the brain BID9. Together with the known phenomena whereby neurons in the brain perform tasks with low-dimensional activity patterns, and that the same neurons engage in oscillatory activity when viewed at the population-level, our findings are consistent with a general principle of delayed recall in neural networks, either biological or artificial. The network was a one-layered discrete and densely connected recurrent neural network with a N = 100 neurons. The network is composed of three real-valued matrix, the square matrix W representing the recurrent connection between the neurons, the 3 × 100 W in matrix representing the strength of connections between the input channels and the neurons of the network and the 100 × 3 W out matrix representing the strength of the connection between the neurons of the network and the output neurons. At each neuron, the signal goes through an "activation function", in this case the hyperbolic tangent. A schematic of the neural network is shown on figure 5a. The hyperbolic tangent is often used as the activation function in studies about neural networks as it is a good representation of biological spiking neurons. It models the way the number and strength of excitatory and inhibitory signals entering the neuron affects its spiking frequency BID4.A discrete, as opposed to a continuous-time, network was chosen for a few reasons. First, we wanted to see if the findings of BID12 could be transposed to a discrete-time network, as the researchers themselves where wondering in their paper. This was further motivated by the fact that in the field of artificial intelligence, discretetime networks are widely used BID8 as they are better adapted to the computer architecture. Any link between the study of neural networks for neuroscience and for artificial intelligence would be beneficial to both fields. Last but not least is the fact that discrete-time networks are much easier and faster to implement than their continuous counterparts, and a wide array of resources to train them are freely available. The W in, W and W out weights matrices were originally set randomly (using a linear probability function) between − for W and W out where N is the number of neurons. 
This strategy is often used in the literature to get an initial output with a probability distribution approaching a normal law with average 0 and variance 1. Only the W and W out matrices were trained. Since the network is recurrent, the output is potentially the of an infinite number of steps, which complicates training. The solution is to "unfold" the network, i.e., to consider only a certain number of steps back in time during training. In figure 5b, we see the unfolded version of the network. By trial and error we chose a 10 time-steps deep unfolded network for training. 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 Investigation of Memory in RNNs There are now slow points with more than three unstable dimensions. Those are marked by red triangles and the number of unstable dimensions is written above. We posit that this is a of the networks' capacity being stretched to the limit (it could not perform the task for longer delays) and not an actual change in strategy for the long-term memory task.
We investigate how a recurrent neural network successfully learns a task combining long-term memory and sequential recall.
1,717
scitldr
The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known. Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required. Recent promising developments generally depend on problem-specific density models or handcrafted features. In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case but that also give us intuitions for new algorithms applicable to settings where function approximation is required. Our approach and its underlying theory is based on the substochastic successor representation, a concept we develop here. While the traditional successor representation is a representation that defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state (or feature) has been observed. This extension connects two until now disjoint areas of research. We show in traditional tabular domains (RiverSwim and SixArms) that our algorithm empirically performs as well as other sample-efficient algorithms. We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard exploration Atari 2600 games. Reinforcement learning (RL) tackles sequential decision making problems by formulating them as tasks where an agent must learn how to act optimally through trial and error interactions with the environment. The goal in these problems is to maximize the sum of the numerical reward signal observed at each time step. Because the actions taken by the agent influence not just the immediate reward but also the states and associated rewards in the future, sequential decision making problems require agents to deal with the trade-off between immediate and delayed rewards. Here we focus on the problem of exploration in RL, which aims to reduce the number of samples (i.e., interactions) an agent needs in order to learn to perform well in these tasks when the environment is initially unknown. The sample efficiency of RL algorithms is largely dependent on how agents select exploratory actions. In order to learn the proper balance between immediate and delayed rewards agents need to navigate through the state space to learn about the outcome of different transitions. The number of samples an agent requires is related to how quickly it is able to explore the state-space. Surprisingly, the most common approach is to select exploratory actions uniformly at random, even in high-profile success stories of RL (e.g., BID26 BID17 . Nevertheless, random exploration often fails in environments with sparse rewards, that is, environments where the agent observes a reward signal of value zero for the majority of states. In model-based approaches agents explicitly learn a model of the dynamics of the environment which they use to plan future actions. In this setting the problem of exploration is well understood. When all states can be enumerated and uniquely identified (tabular case), we have algorithms with proven sample complexity bounds on the maximum number of suboptimal actions an agent selects before 1 When we refer to environments with sparse rewards we do so for brevity and ease of presentation. 
Actually, any sequential decision making problem has dense rewards. In the RL formulation a reward signal is observed at every time step. By environments with sparse rewards we mean environments where the vast majority of transitions lead to reward signals with the same value.converging to an -optimal policy (e.g., BID4 BID10 BID23 . However, these approaches are not easily extended to large environments where it is intractable to enumerate all of the states. When using function approximation, the concept of state visitation is not helpful and learning useful models is by itself quite challenging. Due to the difficulties in learning good models in large domains, model-free methods are much more popular. Instead of building an explicit model of the environment, they estimate state values directly from transition samples (state, action, reward, next state). Unfortunately, this approach makes systematic exploration much more challenging. Nevertheless, because model-free methods make up the majority of approaches scalable to large domains, practitioners often ignore the exploration challenges these methods pose and accept the high sample complexity of random exploration. Reward bonuses that promote exploration are one alternative to random walks (e.g., BID2 BID15, but none such proposed solutions are widely adopted in the field. In this paper we introduce an algorithm for exploration based on the successor representation (SR). The SR, originally introduced by BID5, is a representation that generalizes between states using the similarity between their successors, i.e., the similarity between the states that follow the current state given the environment's dynamics and the agent's policy. The SR is defined for any problem, it can be learned through temporal-difference learning BID25 and, as we discuss below, it can also be seen as implicitly estimating the transition dynamics of the environment. Our approach is inspired by the substochastic successor representation (SSR), a concept we introduce here. The SSR is defined so that it implicitly counts state visitation, allowing us to use it to encourage exploration. This idea connects representation learning and exploration, two otherwise disjoint areas of research. The SSR allows us to derive an exploration bonus that when applied to model-based RL generates algorithms that perform as well as theoretically sample-efficient algorithms. Importantly, the intuition developed with the SSR assists us in the design of a model-free deep RL algorithm that achieves performance similar to pseudo-count-based methods in hard exploration Atari 2600 games BID2 BID20. We consider an agent interacting with its environment in a sequential manner. Starting from a state S 0 ∈ S, at each step the agent takes an action A t ∈ A, to which the environment responds with a state S t+1 ∈ S according to a transition probability function p(s |s, a) = Pr(S t+1 = s |S t = s, A t = a), and with a reward signal R t+1 ∈ R, where r(s, a) indicates the expected reward for a transition from state s under action a, that is, r(s, a). DISPLAYFORM0 The value of a state s when following a policy π, v π (s), is defined to be the expected sum of discounted rewards from that state: DISPLAYFORM1, with γ being the discount factor. 
When the transition probability function p and the reward function r are known, we can compute v π (s) recursively by solving the system of equations below BID3: DISPLAYFORM2 This equation can also be written in matrix form with v π, r ∈ R |S| and P π ∈ R |S|×|S|: DISPLAYFORM3 where P π is the state to state transition probability function induced by π, that is, P π (s, s) = a π(a|s)p(s |s, a). Traditional model-based algorithms for RL work by learning estimates of the matrix P π and of the vector r and using them to estimate v π, for example by solving Equation 1. We useP π andr to denote empirical estimates of P π and r. Formally, DISPLAYFORM4 wherer(i) denotes the i-th entry in the vectorr, n(s, s) is the number of times the transition s → s was observed, n(s) = s ∈S n(s, s), and C(s, s) is the sum of the rewards associated with the n(s, s) transitions (we drop the action in the discussion to simplify notation).Alternatively, in model-free RL, instead of estimating P π and r we estimate v π (s) directly from samples. We often use temporal-difference (TD) learning BID25 to update our estimates of DISPLAYFORM5 where α is the step-size parameter. Generalization is required in problems with large state spaces, where it is unfeasible to learn an individual value for each state. We do so by parametrizingv(s) with a set of weights θ. We write, given the weights θ,v(s; θ) ≈ v π (s) andq(s, a; θ) ≈ q π (s, a), where q π (s, a) = r(s, a) + γ s p(s |s, a)v π (s). Model-free methods have performed well in problems with large state spaces, mainly due to the use of neural networks as function approximators (e.g., BID17 .Our algorithm is based on the successor representation (SR; BID5 . The successor representation, with respect to a policy π, Ψ π, is defined as DISPLAYFORM6 where we assume the sum is convergent with I denoting the indicator function. BID5 has shown that this expectation can be estimated from samples through TD learning. It also corresponds to the Neumann series of γP : DISPLAYFORM7 Notice that the SR is part of the solution when computing a value function: DISPLAYFORM8 We useΨ π to denote the SR computed throughP π, the approximation of P π .The definition of the SR can also be extended to features. Successor features generalize the SR to the function approximation setting BID0 . We use the definition for the uncontrolled case in this paper. Importantly, the successor features can also be learned with TD learning. Definition 2.1 (Successor Features). For a given 0 ≤ γ < 1, policy π, and for a feature representation φ(s) ∈ R d, the successor features for a state s are: DISPLAYFORM9 Notice that this definition reduces to the SR in the tabular case, where Φ = I. In this section we introduce the concept of the substochastic successor representation (SSR). The SSR is derived from an empirical transition matrix similar to Equation 2, but where each state incorporates a small (1/(n(s) + 1)) probability of terminating at that state, rather than transiting to a next state. As we will show, we can recover the visit counts n(s) through algebraic manipulation on the SSR.While computing the SSR is usually impractical, we use it as inspiration in the design of a new deep reinforcement learning algorithm for exploration (Section 4). In a nutshell, we view the SSR as approximating the process of learning the SR from an uninformative initialization (i.e., the zero vector), and using a stochastic update rule. 
While this approximation is relatively coarse, we believe it gives qualitative justification to our use of the learned SR to guide exploration. To further this claim, we demonstrate that using the SSR in synthetic, tabular settings yields comparable performance to that of theoretically-derived exploration algorithms. Definition 3.1 (Substochastic Successor Representation). LetP π denote the substochastic matrix induced by the environment's dynamics and by the policy π such thatP π (s |s) = n(s,s) n(s)+1. For a given 0 ≤ γ < 1, the substochastic successor representation,Ψ π, is defined as: DISPLAYFORM0 The theorem below formalizes the idea that the 1 norm of the SSR implicitly counts state visitation. Theorem 1. Let n(s) denote the number of times state s has been visited and let χ(s) = (1 + γ) − ||Ψ π (s)|| 1, whereΨ π is the substochastic SR as in Definition 3.1. For a given 0 ≤ γ < 1, DISPLAYFORM1 Proof of Theorem 1. LetP π be the empirical transition matrix. We first rewriteP π in terms ofP π: DISPLAYFORM2 The expression above can also be written in matrix form:P π = (I − N)P π, where N ∈ R |S|×|S| denotes the diagonal matrix of augmented inverse counts. ExpandingΨ π we have: DISPLAYFORM3 The top eigenvector of a stochastic matrix is the all-ones vector, e , and it corresponds to the eigenvalue 1. Using this fact and the definition ofP π with respect toP π we have: DISPLAYFORM4 We can now bound the term γ π e using the fact that e is also the top eigenvector of the successor representation and has eigenvalue DISPLAYFORM0 Plugging FORMULA14 into the definition of χ we have (notice that Ψ(s)e = ||Ψ(s)|| 1 ): DISPLAYFORM1 When we also use the other bound on the quadratic term we conclude that, for any state s, DISPLAYFORM2 In other words, the SSR, obtained after a slight change to the SR, can be used to recover state visitation counts. The intuition behind this is that the phantom transition, represented by the +1 in the denominator of the SSR, serves as a proxy for the uncertainty about that state by underestimating the SR. This is due to the fact that s P π (s, s) gets closer to 1 each time state s is visited. This can now be used to convert the SSR into a reward function in the tabular case. We do so by using the SSR to define an exploration bonus, r int, such that the reward being maximized by the agent becomes r(s, a) + βr int (s), where β is a scaling parameter. Since we want to incentivize agents to visit the least visited states as quickly as possible, we can trivially define r int = −||Ψ π (s)|| 1, where we penalize the agent by visiting the states that lead to commonly visited states. Notice that the shift (1 + γ) in χ(s) has no effect as an exploration bonus because it is the same across all states. BID23. The performance of our algorithm is the average over 100 runs. A 95% confidence interval is reported between parentheses. E 3 R-MAX MBIE ESSR RIVERSWIM 3,000,000 3,000,000 3,250,000 3,088,924 (± 57,584) SIXARMS 1,800,000 2,800,000 9,250,000 7,327,222 (± 1,189,460) DISPLAYFORM3 We evaluated the effectiveness of the proposed exploration bonus in a standard model-based algorithm. In our implementation the agent updates its transition probability model and reward model through Equation 2 and its SSR estimate as in Definition 3.1 (the pseudo-code of this algorithm is available in the Appendix), which is then used for the exploration bonus r int. We used the domains RiverSwim and SixArms BID23 to assess the performance of this algorithm. 
2 These are traditional domains in the PAC-MDP literature BID9 and are often used to evaluate provably sampleefficient algorithms. Details about these environments are also available in the Appendix. We used the same protocol used by BID23. Our are available in TAB0. It is interesting to see that our algorithm performs as well as R-MAX BID4 ) and E 3 on RiverSwim and it clearly outperforms these algorithms on SixArms. In large environments, where enumerating all states is not an option, directly using the SSR as described in the previous section is not viable. Learning the SSR becomes even more challenging when the representation, φ(·), is also being learned and so is non-stationary. In this section we design an algorithm for the function approximation setting inspired by the from the previous section. Since explicitly estimating the transition probability function is not an option, we learn the SR directly using TD learning. In order to capture the SSR we rely on TD's tendency to underestimate values when the estimates are pessimistically initialized, just as the SSR underestimates the true successor representation; with larger underestimates for states (and similarly features) that are rarely observed. This is mainly due to the fact that when the SR is being learned with TD learning, because a reward of 1 is observed at each time step, there is no variance in the target and the predictions slowly approach the true value of the SR. When pessimistically initialized, the predictions approach the target from below. In this sense, what defines how far a prediction is from its final target is indeed how many times it has been updated in a given state. Finally, recent work BID11 BID14 have shown successor features can be learned jointly with the feature representation itself. These ideas are combined together to create our algorithm. The neural network we used to learn the agent's value function while also learning the feature representation and the successor representation is depicted in Figure 1. The layers used to compute the state-action value function,q(S t, ·), are structured as in DQN BID17, but with different numbers of parameters (i..e, filter sizes, stride, and number of nodes). This was done to match Oh et al.'s architecture, which is known to succeed in the auxiliary task we define below. From here on, we will call the part of our architecture that predictsq(S t, ·) DQN e. It is trained to minimize DISPLAYFORM0 DISPLAYFORM1 This loss is known as the mixed Monte-Carlo return (MMC) and it has been used in the past by the algorithms that achieved succesful exploration in deep reinforcement learning BID2 BID20. The distinction between θ and θ − is standard in the field, with θ − denoting the parameters of the target network, which is updated less often for stability purposes BID17. As before, we use r int to denote the exploration bonus obtained from the successor features of the internal representation, φ(·), which will be defined below. Moreover, to ensure all features are in the same range, we normalize the feature vector so that ||φ(·)|| 2 = 1. In Figure 1 we highlight the layer in which we normalize its output with the symbol φ. Notice that the features are always non-negative due to the use of ReLU gates. The successor features are computed by the two bottom layers of the network, which minimize the loss DISPLAYFORM2 Zero is a fixed point for the SR. This is particularly concerning in settings with sparse rewards. The agent might learn to set φ(·) = 0 to achieve zero loss. 
We address this problem by not propagating ∇L SR to φ(·) (this is depicted in Figure 1 as an open circle stopping the gradient), and by creating an auxiliary task BID7 to encourage a representation to be learned before a non-zero reward is observed. As Machado et al. FORMULA3, we use the auxiliary task of predicting the next observation, learned through the architecture proposed by BID18, which is depicted as the top layers in Figure 1. The loss we minimize for this last part of the network is L Recons = Ŝ t+1 − S t+1 2. The last step in describing our algorithm is to define r int (S t ; θ −), the intrinsic reward we use to encourage exploration. We choose the exploration bonus to be the inverse of the 2 -norm of the vector of successor features of the current state, that is, DISPLAYFORM0 where ψ(S t ; θ −) denotes the successor features of state S t parametrized by θ −. The exploration bonus comes from the same intuition presented in the previous section, but instead of penalizing the agent with the norm of the SR we make r int (S t ; θ −) into a bonus (we observed in preliminary experiments not discussed here that DQN performs better when dealing with positive rewards). Moreover, instead of using the 1 -norm we use the 2 -norm of the SR since our features have unit length in 2 (whereas the successor probabilities in the tabular-case have unit length in 1).Finally, we initialize our network the same way BID18 does. We use Xavier initialization BID6 in all layers except the fully connected layers around the element-wise multiplication denoted by ⊗, which are initialized uniformly with values between −0.1 and 0.1. We followed the evaluation protocol proposed by BID13. We used MONTEZUMA'S REVENGE to tune our parameters (training set). The reported are the average over 10 seeds after 100 million frames. We evaluated our agents in the stochastic setting (sticky actions, ς = 0.25) using a frame skip of 5 with the full action set (|A| = 18). The agent learns from raw pixels, that is, it uses the game screen as input. Our were obtained with the algorithm described in Section 4. We set β = 0.025 after a rough sweep over values in the game MONTEZUMA'S REVENGE. We annealed in DQN's -greedy exploration over the first million steps, starting at 1.0 and stopping at 0.1 as done by BID2. We trained the network with RMSprop with a step-size of 0.00025, an value of 0.01, and a decay of 0.95, which are the standard parameters for training DQN BID17. The discount factor, γ, is set to 0.99 and w TD = 1, w SR = 1000, w Recons = 0.001. The weights w TD, w SR, and w Recons were set so that the loss functions would be roughly the same scale. All other parameters are the same as those used by BID17. TAB2 summarizes the after 100 million frames. The performance of other algorithms is also provided for reference. Notice we are reporting learning performance for all algorithms instead of the maximum scores achieved by the algorithm. We use the superscript MMC to distinguish between the algorithms that use MMC from those that do not. When comparing our algorithm, DQN MMC e +SR, to DQN we can see how much our approach improves over the most traditional baseline. By comparing our algorithm's performance to DQN MMC +CTS BID2 and DQN MMC +PixelCNN BID20 we compare our algorithm to established baselines for exploration. 
As highlighted in Section 4, the parameters of the network we used are different from those used in the traditional DQN network, so we also compared the performance of our algorithm to the performance of the same network our algorithm uses but without the additional modules (next state prediction and successor representation) by setting w SR = w Recons = 0 and without the intrinsic reward bonus by setting β = 0.0. The column labeled DQN MMC e contains the for this baseline. This comparison allows us to explicitly quantify the improvement provided by the proposed exploration bonus. The learning curves of these algorithms, their performance after different amounts of experience, and additional analyzing, for example, the impact of the introduced auxiliary task, are available in the Appendix. We can clearly see that our algorithm achieves scores much higher than those achieved by DQN, which struggles in games that pose hard exploration problems. Moreover, by comparing DQN MMC e +SR to DQN MMC e we can see that the provided exploration bonus has a big impact in the game MONTEZUMA'S REVENGE, which is probably known as the hardest game among those we used in our evaluation. Interestingly, the change in architecture and the use of MMC leads to a big improvement in games such as GRAVITAR and VENTURE, which we cannot fully explain. However, notice that the change in architecture does not have any effect in MONTEZUMA'S REVENGE. The proposed exploration bonus seems to be essential in this game. Finally, we also compared our algorithm to DQN MMC +CTS and DQN MMC +PixelCNN. We can observe that, on average, it performs as well as these algorithms, but instead of requiring a density model it requires the SR, which is already defined for every problem since it is a component of the value function estimates, as discussed in Section 2. There are multiple algorithms in the tabular, model-based case with guarantees about their performance (e.g., BID4 BID10 BID23 BID19 . RiverSwim and SixArms are domains traditionally used when evaluating these algorithms. In this paper we have given evidence that our algorithm performs as well as some of these algorithms with theoretical guarantees. Among these algorithms, R-MAX seems the closest approach to ours. As with R-MAX, the algorithm we presented in Section 3 augments the state-space with an imaginary state and encourages the agent to visit that state, implicitly reducing the algorithm's uncertainty in the state-space. However, R-MAX deletes the transition to this imaginary state once a state has been visited a given number of times. Ours lets the probability of visiting this imaginary state vanish with additional visitations. Moreover, notice that it is not clear how to apply these traditional algorithms such as R-MAX and E 3 to large domains where function approximation is required. Conversely, there are not many model-free approaches with proven sample-complexity bounds (e.g., BID24, but there are multiple model-free algorithms for exploration that actually work in large domains (e.g., BID22 BID2 BID20 BID21 . Among these algorithms, the use of pseudo-counts through density models is the closest to ours BID2 BID20 . Inspired by those papers we used the mixed Monte-Carlo return as a target in the update rule. In Section 5 we have shown that our algorithm performs generally as well as these approaches without requiring a density model. 
Importantly, BID15 had already shown that counting activations of fixed, handcrafted features in Atari 2600 games leads to good exploration behavior. Nevertheless, by using the SSR we are not only counting learned features but we are also implicitly capturing the induced transition dynamics. Finally, the SR has already been used in the context of exploration. However, it was used to help the agent learn how to act in a higher level of abstraction in order to navigate through the state space faster BID12 BID14 . Such an approach has led to promising in the tabular case but only anecdotal evidence about its scalability has been provided when the idea was applied to large domains such as Atari 2600 games. Importantly, the work developed by BID14, BID11 and BID18 are the main motivation for the neural network architecture presented here. BID18 have shown how one can predict the next screen given the current observation and action (our auxiliary task), while BID14 and BID11 have proposed different architectures for learning the successor representation from raw pixels. RL algorithms tend to have high sample complexity, which often prevents them from being used in the real-world. Poor exploration strategies is one of the main reasons for this high sample-complexity. Despite all of its shortcomings, uniform random exploration is, to date, the most commonly used approach for exploration. This is mainly due to the fact that most approaches for tackling the exploration problem still rely on domain-specific knowledge (e.g., density models, handcrafted features), or on having an agent learn a perfect model of the environment. In this paper we introduced a general method for exploration in RL that implicitly counts state (or feature) visitation in order to guide the exploration process. It is compatible to representation learning and the idea can also be adapted to be applied to large domains. This opens up multiple possibilities for future work. Based on the presented in Section 3, for example, we conjecture that the substochastic successor representation can be actually used to generate algorithms with PAC-MDP bounds. Investigating to what extent different auxiliary tasks impact the algorithm's performance, and whether simpler tasks such as predicting feature activations or parts of the input BID7 are effective is also worth studying. Finally, it might be interesting to further investigate the connection between representation learning and exploration, since it is also known that better representations can lead to faster exploration BID8. This supplementary material contains details omitted from the main text due to space constraints. The list of contents is below:• Pseudo-code of the model-based algorithm discussed in Section 3;• Description of RiverSwim and SixArms, the tabular domains we used in our evaluation;• Learning curves of DQN e and DQN MMC e +SR and their performance after different amounts of experience in the Atari 2600 games used for evaluation;• Results of additional experiments designed to evaluate the role of the auxiliary task in the reported in the paper for ESSR. In the main paper we described our algorithm as a standard model-based algorithm where the agent updates its transition probability model and reward model through Equation 2 and its SSR estimate as in Definition 3.1. The pseudo-code with details about the implementation is presented in Algorithm 1. 
Algorithm 1 (tabular model-based agent with the SSR exploration bonus):
n(s, s′) ← 0 ∀ s, s′ ∈ S
t(s, a, s′) ← 1 ∀ s, s′ ∈ S, ∀ a ∈ A
r(s, a) ← 0 ∀ s ∈ S, ∀ a ∈ A
P(· | s, a) ← 1/|S| ∀ s ∈ S, ∀ a ∈ A
P(s, s′) ← 0 ∀ s, s′ ∈ S
π ← random policy over A
while episode is not over do
  Observe s ∈ S, take action a ∈ A selected according to π(s), and observe a reward R and a next state s′ ∈ S
  n(s, s′) ← n(s, s′) + 1
  Update the transition and reward models and the SSR estimate Ψ as in Equation 2 and Definition 3.1
  r_int ← −Ψe
  π ← POLICYITERATION(P, r + β·r_int)
end while
The two domains we used as a testbed to evaluate the proposed model-based algorithm with the exploration bonus generated by the substochastic successor representation are shown in FIG2. These domains are the same used by BID23. For SixArms, the agent starts in state 0. For RiverSwim, the agent starts in either state 1 or 2 with equal probability. The algorithm we introduced in the paper, ESSR, relies on a network that estimates the state-action value function, the successor representation, and the next observation to be seen given the agent's current observation and action. While the results depicted in TAB2 allow us to clearly see the benefit of using an exploration bonus derived from the successor representation, they do not inform us about the impact of the auxiliary task on the results. The experiments in this section aim at addressing this issue. We focus on MONTEZUMA'S REVENGE because it is the game where the problem of exploration is most pronounced, with most algorithms not being able to do anything without an exploration bonus. The first question we asked was whether the auxiliary task was necessary in our algorithm. We evaluated this by dropping the reconstruction module from the network to test whether the initial random noise generated by the successor representation is enough to drive representation learning. It is not. When dropping the auxiliary task, the average performance of this baseline over 4 seeds in MONTEZUMA'S REVENGE after 100 million frames was 100.0 points (σ² = 200.0; min: 0.0, max: 400.0). As a comparison, our algorithm obtains 1778.6 points (σ² = 903.6, min: 400.0, max: 2500.0). These results suggest that the auxiliary task seems to be necessary for our method to perform well. We also evaluated whether the auxiliary task was sufficient to generate the results we observed. To do so we dropped the SR module and set β = 0.0 to evaluate whether our exploration bonus was actually improving the agent's performance or whether the auxiliary task was doing it. The exploration bonus seems to be essential in our algorithm. When dropping the exploration bonus and the successor representation module, the average performance of this baseline over 4 seeds in MONTEZUMA'S REVENGE after 100 million frames was 398.5 points (σ² = 230.1; min: 0.0, max: 400.0). Again, clearly, the auxiliary task is not a sufficient condition for the performance we report. The reported results use the same parameters as those reported in the main paper. Learning curves for each individual run are depicted in Figure 3, and results after different amounts of experience (10, 50, and 100 million frames) are reported in TAB5. Finally, Figure 4 depicts the learning curves obtained with the evaluated algorithms in each game. Lighter lines represent individual runs, while the solid lines encode the average over the multiple runs; the curves are smoothed with a running average computed using a window of size 100.
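A minimal sketch of the tabular computation is given below: it builds the substochastic matrix from pairwise visit counts (the +1 in the denominator is our reading of Definition 3.1), approximates Ψ with a truncated series, and returns the intrinsic reward −Ψe. It is illustrative only and omits the action-conditioned model and the policy-iteration step:

```python
import numpy as np

def substochastic_sr(visit_counts, gamma=0.95, horizon=100):
    """Substochastic successor representation from pairwise visit counts.

    visit_counts[s, s2] = number of observed transitions s -> s2.  The +1 in
    the denominator keeps rows of the matrix summing to less than one for
    under-visited states, which is what produces the exploration signal.
    """
    n_states = visit_counts.shape[0]
    p_tilde = visit_counts / (visit_counts.sum(axis=1, keepdims=True) + 1.0)
    psi = np.eye(n_states)
    power = np.eye(n_states)
    for _ in range(horizon):              # truncated series for (I - gamma * P)^-1
        power = gamma * power @ p_tilde
        psi += power
    return psi

def intrinsic_reward(psi):
    """r_int = -Psi e: negative row sums; less-visited states get a larger value."""
    return -psi.sum(axis=1)

counts = np.zeros((5, 5))
counts[0, 1] = 20.0                       # one heavily visited transition
print(intrinsic_reward(substochastic_sr(counts)))
```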
We propose the idea of using the norm of the successor representation as an exploration bonus in reinforcement learning. In hard-exploration Atari games, our deep RL algorithm matches the performance of recent pseudo-count-based methods.
1,718
scitldr
Deep generative modeling using flows has gained popularity owing to the tractable exact log-likelihood estimation with efficient training and synthesis process. However, flow models suffer from the challenge of having high dimensional latent space, same in dimension as the input space. An effective solution to the above challenge as proposed by is a multi-scale architecture, which is based on iterative early factorization of a part of the total dimensions at regular intervals. Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on a static masking. We propose a novel multi-scale architecture that performs data dependent factorization to decide which dimensions should pass through more flow layers. To facilitate the same, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood which encodes the importance of the dimensions. Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood contribution based multi-scale architecture for generic flow models. We present such an implementation for the original flow introduced in , and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks. We also conduct ablation studies to compare proposed method with other options for dimension factorization. Deep Generative Modeling aims to learn the embedded distributions and representations in input (especially unlabelled) data, requiring no/minimal human labelling effort. Learning without knowledge of labels (unsupervised learning) is of increasing importance because of the abundance of unlabelled data and the rich inherent patterns they posses. The representations learnt can then be utilized in a number of downstream tasks such as semi-supervised learning ), synthetic data augmentation and adversarial training , text analysis and model based control etc. The repository of deep generative modeling majorly includes Likelihood based models such as autoregressive models (b;), latent variable models , flow based models (; 2016;) and implicit models such as generative adversarial networks (GANs) . Autoregressive models (; b; a; achieve exceptional log-likelihood score on many standard datasets, indicative of their power to model the inherent distribution. But, they suffer from slow sampling process, making them unacceptable to adopt in real world applications. Latent variable models such as variational autoencoders tend to better capture the global feature representation in data, but do not offer an exact density estimate. Implicit generative models such as GANs which optimize a generator and a discriminator in a min-max fashion have recently become popular for their ability to synthesize realistic data . But, GANs do not offer a latent space suitable for further downstream tasks, nor do they perform density estimation. Flow based generative models perform exact density estimation with fast inference and sampling, due to their parallelizability. They also provide an information rich latent space suitable for many applications. However, the dimension of latent space for flow based generative models is same as the high-dimensional input space, by virtue of bijectivity nature of flows. This poses a bottleneck for flow models to scale with increasing input dimensions due to computational complexity. 
An effective solution to the above challenge is a multi-scale architecture, introduced by , which performs iterative early gaussianization of a part of the total dimensions at regular intervals of flow layers. This not only makes the model computational and memory efficient but also aids in distributing the loss function throughout the network for better training. Many prior works including;;; implement multi-scale architecture in their flow models, but use static masking methods for factorization of dimensions. We propose a multi-scale architecture which performs data dependent factorization to decide which dimensions should pass through more flow layers. For the decision making, we introduce a heuristic based on the amount of total log-likelihood contributed by each dimension, which in turn signifies their individual importance. We lay the ground rules for quantitative estimation and qualitative sampling to be satisfied by an ideal factorization method for a multi-scale architecture. Since in the proposed architecture, the heuristic is obtained as part of the flow training process, it can be universally applied to generic flow models. We present such implementations for flow models based on affine/additive coupling and ordinary differential equation (ODE) and achieve quantitative and qualitative improvements. We also perform ablation studies to confirm the novelty of our method. Summing up, the contributions of our research are, 1. A log-determinant based heuristic which entails the contribution by each dimensions towards the total log-likelihood in a multi-scale architecture. 2. A multi-scale architecture based on the above heuristic performing data-dependent splitting of dimensions, implemented for several classes of flow models. 3. Quantitative and qualitative analysis of above implementations and an ablation study To the best of our knowledge, we are the first to propose a data-dependent splitting of dimensions in a multi-scale architecture. In this section, we illustrate the functioning of flow based generative models and the multiscale architecture as introduced by. Let x be a high-dimensional random vector with unknown true distribution p(x). The following formulation is directly applicable to continous data, and with some pre-processing steps such as dequantization (; ;) to discrete data. Let z be the latent variable with a known standard distribution p(z), such as a standard multivariate gaussian. Using an i.i.d. dataset D, the target is to model p θ (x) with parameters θ. A flow, f θ is defined to be an invertible transformation that maps observed data x to the latent variable z. A flow is invertible, so the inverse function T maps z to x, i.e. The log-likelihood can be expressed as, where ∂f θ (x) ∂x T is the Jacobian of f θ at x. The invertibile nature of flow allows it to be capable of being composed of other flows of compatible dimensions. In practice, flows are constructed by composing a series of component flows. Let the flow f θ be composed of K component flows, i.e. and the intermediate variables be denoted by y K, y K−1, · · ·, y 0 = x. Then the log-likelihood of the composed flow is, Log-latent density which follows from the fact that det(A · B) = det(A) · det(B). In our work, we refer the first term in Equation 4 as log-latent-density and the second term as log-determinant (log-det). 
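To make the decomposition in Equation 4 concrete, the following sketch evaluates the log-likelihood of a toy flow built from affine coupling layers under a standard-Gaussian prior. The scale and translation functions here are arbitrary stand-ins for the networks a real model would use, and real flows additionally permute or alternate the two halves between layers:

```python
import numpy as np

def affine_coupling_forward(x, scale_fn, shift_fn):
    """One affine coupling layer: the first half conditions the second half.
    Returns the transformed sample and the log-det of the Jacobian."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_fn(x1), shift_fn(x1)
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)              # Jacobian is triangular with diagonal exp(s)
    return np.concatenate([x1, y2], axis=-1), log_det

def flow_log_likelihood(x, layers):
    """log p(x) = log-latent-density + sum of per-layer log-dets (Equation 4)."""
    z, total_log_det = x, 0.0
    for scale_fn, shift_fn in layers:
        z, log_det = affine_coupling_forward(z, scale_fn, shift_fn)
        total_log_det = total_log_det + log_det
    log_latent = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)   # standard Gaussian prior
    return log_latent + total_log_det

# Toy example with two layers and fixed (random) scale/shift functions
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
layers = [(lambda h, W=W1: np.tanh(h @ W), lambda h, W=W2: h @ W)] * 2
x = rng.normal(size=(4, 4))               # batch of 4 samples, 4 dimensions
print(flow_log_likelihood(x, layers))
```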
The reverse path, from z to x can be written as a composition of inverse flows, Confirming with the properties of a flow as mentioned above, different types of flows can be constructed (; ;). Multi-scale architecture is a design choice for latent space dimensionality reduction of flow models, in which part of the dimensions are factored out/early gaussianized at regular intervals, and the other part is exposed to more flow layers. The process is called dimension factorization. In the problem setting as introduced in Section 2.1, the factoring operation can be mathematically expressed as, The factoring of dimensions at early layers has the benefit of distributing the loss function throughout the network and optimizing the amount of computation and memory used by the model. We consider the multi-scale architecture for flow based generative models as introduced by (and later used by state-of-the-art flow models such as Glow ) as the base of our research work. In a multi-scale architecture, it is apparent that the network will better learn the distribution of variables getting exposed to more layers of flow as compared to the ones which get factored at a finer scale (earlier layer). The method of dimension splitting proposed by prior works such as (; ;) are static in nature and do not distinguish between importance of different dimensions. In this section, we introduce a heuristic to estimate the contribution of each dimension towards the total log-likelihood, and introduce a method which can use the heuristic to decide the dimensions to be factored at an earlier layer, eventually achieving preferrential splitting in multiscale architecture. Our approach builds an efficient multiscale architecture which factors the dimensions at each flow layer in a way such that the local variance in the input space is well captured as the flow progresses and the log-likelihood is maximized. We also describe how our multi-scale architecture can be implemented over several standard flow models. Recall from Equation 4 that the log-likelihood is composed of two terms, the log-latent-density term and the log-det term. The log-latent-density term depends on the choice of latent distribution whereas the log-det term depends on the modeling of the flow layers. So, careful design of flow layers can lead to maximized log-determinant, eventually maximizing the likelihood. The total log-det term is nothing but the sum of log-det terms contributed by each dimension. Let the dimension of the input space x be s × s × c, where s is the image height/width and c is the number of channels for image inputs. For the following formulation, let us assume no dimensions were gaussianized early so that we have access to log-det term for all dimensions at each flow layer, and the dimension at all intermediate layer remains same (i.e. s × s × c). We apply a flow (f θ) with K component flows to The intermediate variables are denoted by y K, y K−1, · · ·, y 0 with y K = z (since no early gaussianization was done) and y 0 = x. The log-det term at layer l, L The log-det of the jacobian term encompasses contribution by all the s × s × c dimensions. We decompose it to obtain the individual contribution by variables (dimensions) towards the total log-det (∼ total log-likelihood). The log-det term can be viewed (with slight abuse of notations) as a s × s × c tensor corresponding to each of the dimensions, summed over the flow layers till l, ] s×s×c, where α, β ∈ {0, · · ·, s} and γ ∈ {0, · · ·, c} s.t. 
The entries in [L d] s×s×c having higher value correspond to the variables which contribute more towards the total log-likelihood, hence are more valuable for better flow formulation. So, we can use the likelihood contribution (in the form of log-det term) by each dimension as a heuristic for deciding which variables should be gaussianized early in a multi-scale architecture. Ideally, at each flow layer, the variables with more log-det term should be exposed to more layer of flow and the ones having less log-det term should be factored out. In this manner, selectively more power can be provided to variables which capture meaningful representation (and are more valuable from log-det perspective) to be expressive by being exposed to multiple flow layers. This formulation leads to enhanced density estimation performance. Additionally, for many datasets such as images, the spatial nature should be taken into account while deciding dimensions for early gaussianization. Summarily, at every flow layer, an ideal factorization method should, 1. (Quantitative) For efficient density estimation: Early gaussianize the variables having less log-det and expose the ones having more log-det to more flow layers 2. (Qualitative) For qualitative reconstruction: Capture the local variance over the flow layers, i.e. the dimensions being exposed to more flow layers should contain representative pixel variables from throughout the whole image. Keeping the above requirements in mind, variants of hybrid techniques for factorization can be implemented for different types of flow models which involve a multi-scale architecture, to improve their density estimation and qualitative performance. The key requirement is availability of log-det contributions per dimension, which can be fulfilled by decomposition of the log-det of the jacobian. We refer to the method as Likelihood Contribution based Multi-scale Architecture (LCMA). The steps of LCMA implementation for flow models is summarized in Algorithm 1. Note that in step 2 of dimension factorization phase in algorithm 1, we group the dimensions having more/less log-det locally and then perform splitting. This preserves the local spatial variation of the image in both parts of the factorization, leveraging both enhanced density estimation as well as qualitative reconstruction. Another important observation is since the factorization of dimensions does not occur during the training time, and before the actual training starts, the decision of dimensions which get factored at each flow layer is fixed, the change of variables formula can be applied. This allows the use of non-invertible operations (e.g. max and min pooling) for efficient factorization with log-det heuristic. Step 1 of dimension factorization phase requires computation of individual contribution of dimensions ([L (l) d ] s×s×c ) towards the total log-likelihood, which can vary depending on the original design of flow Algorithm 1: LCMA implementation for generative flow models Pre-Training Phase: Pre-train a network with no multiscale architecture (no dimensionality reduction) to obtain the log-det term at every flow layer. Dimension Factorization: In this phase, the dimensions to be factored at each flow layer is decided based on the log-det term at that layer is computed specifically for corresponding flow models (Refer Section 3.1 and Section 3.2). shaped tensor using local max and min-pooling (= −max-pooling(−input)) operations (Figure 1) at each flow layer. 3. 
Among the 4c channels, one half contains the dimensions having more log-det term compared with its neighbourhood pixel (Black marked in Fig. 1), while the other half contains the dimensions having less log-det (White marked in Fig. 1). 4. Split the tensor along the channel dimension to two parts. 5. Forward the corresponding dimensions contributing more towards likelihood into more flow layers and early gaussianize the ones contributing less. 6. Repeat steps 1-5 for all the layers with dimensions passed to that layer till the latent space. Training Phase: The decision of dimensions to be factored at each layer as performed in previous step remains fixed. Finally, the flow model with proposed LCMA is trained. models. Some flow models offer direct decomposition of jacobian into per-dimension components, whereas for others, an indirect estimation method has to be adopted. We now describe such methods to obtain such individual likelihood contribution of dimensions for flow models based on affine coupling (RealNVP and Glow ), and flow models involving ordinary differential equation (ODE) based density estimators (i-ResNet ), all of which involve a multiscale architecture. RealNVP : For RealNVP with afffine coupling layers, the logarithm of individual diagonal elements of jacobian, summed over layers till layer l provides the per-dimensional likelihood contribution components at layer l. Glow : Unlike RealNVP where the log-det terms for each dimension can be expressed as log of corresponding diagonal element of jacobian, Glow contains 1 × 1 convolution blocks having non-diagonal log-det term for channel dimensions, for a s × s × c tensor h given by, It remains to decompose the log | det(W)| to individual contribution by each channel. As a suitable candidate, singular values of W correspond to the contribution from each channel dimension, so their log value is the individual log-det contribution. So the individual log-det term for channels are obtained by, where σ i (W) are the singular values of the weight matrix W. For affine blocks in Glow, same method as RealNVP is adopted. Recent works on flow models such as; employ variants of ODE based density estimators. We introduce method to find perdimensional likelihood contribution for i-ResNet , which is a residual network with invertibility and efficient jacobian computation properties. i-ResNet is modelled as a flow F (x), such that z = F (x) = (I + g)(x), where g(x) is the forward propagation function. The log-likelihood expression is written with the log-det of the jacobian is expressed as a power series, where tr denotes the trace. Due to computational constraints, the power series is computed up to a finite number of iterations with the tr(J) is the vector-jacobian product which is multiplied again with v. The individual components which are summed when (v T J k g) is multiplied with v correspond to the diagonal terms in jacobian, over the expectation E p(v). So those terms are the contribution by the individual dimensions, to the log-likelihood and are expressed as [L Multi-scale architecture and variants have been successful in a number of prior works in deep generative modeling. For invertible neural networks, use a keepChannel for selective feed forward of channels analogous to multi-scaling. In the spectrum of generative flow models, multi-scale architecture has been utilized to achieve the dimensionality reduction and enhanced training because of the distribution of loss function in the network . 
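As an illustration of the splitting step described above, the sketch below routes, within every non-overlapping 2×2 spatial patch of each channel, the positions with higher accumulated log-det to the half that keeps flowing and the rest to the half that is factored out. This is our reading of the local max/min-pooling procedure; the reference implementation may arrange the resulting channels differently:

```python
import numpy as np

def lcma_split(x, logdet, block=2):
    """Split activations into (kept, factored) halves using per-dimension log-det.

    Within every non-overlapping block x block spatial patch (per channel), the
    positions with higher accumulated log-det keep flowing through the network,
    while the remaining positions are factored out (early gaussianized).
    """
    s, _, c = x.shape
    keep, drop = [], []
    for i in range(0, s, block):
        for j in range(0, s, block):
            for k in range(c):
                patch_x = x[i:i + block, j:j + block, k].ravel()
                patch_ld = logdet[i:i + block, j:j + block, k].ravel()
                order = np.argsort(-patch_ld)            # high log-det first
                half = patch_x.size // 2
                keep.append(patch_x[order[:half]])
                drop.append(patch_x[order[half:]])
    return np.concatenate(keep), np.concatenate(drop)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 3))
logdet = rng.normal(size=(4, 4, 3))                      # per-dimension log-det heuristic
kept, factored = lcma_split(x, logdet)
print(kept.shape, factored.shape)                        # each holds half of the dimensions
```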
A variant of multiscale architecture has been utilized to capture local variations in auto-regressive models . Among GAN models, use a multiscale variant to generate images in a coarse-to-fine manner. For multi-scale architectures in generative flow models, our proposed method performs factorization of dimensions based on their likelihood contribution, which in another sense translates to determining which dimensions are important from density estimation and qualitative reconstruction point of view. Keeping this in mind, we discuss prior works on generative flow models which involve multi-scaling and/or incorporate permutation among dimensions to capture their interactions. introduce an 1 × 1 convolution layer in between the actnorm and affine coupling layer in their flow architecture. The 1 × 1 convolution is a generalization of permutation operation which ensures that each dimension can affect every other dimension. This can be interpreted as redistributing the contribution of dimensions to total likelihood among the whole space of dimensions. treat the dimensions as equiprobable for factorization in their implementation of multi-scale architecture, and split the tensor at each flow layer evenly along the channel dimension. We, on the other hand, take the next step and focus on the individuality of dimensions and their importance from the amount they contribute towards the total log-likelihood. The log-det score is available via direct/indirect decomposition of the jacobian obtained as part of computations in a flow training, so we essentially have a heuristic for free. Since our method focuses individually on the dimensions using a heuristic which is always available, it can prove to be have more versatility in being compatible with generic multi-scale architectures. extend the concept of 1 × 1 convolutions to invertible d × d convolutions, but do not discuss about multi-scaling. also include a type of permutation which is equivalent to reversing the ordering of the channels, but is more restrictive and fixed. Flow models such as;; involve ODE based density estimators. They also implement a multi-scale architecture, but the splitting operation is a static channel wise splitting without considering importance of individual dimensions or any permutations.;;; use multi-scale architecture in their flow models, coherent with; , but perform the factorization of dimensions without any consideration of the individual contribution of the dimension towards the total log-likelihood. For qualitative sampling along with efficient density estimation, we also propose that factorization methods should preserve spatiality of the image in the two splits, motivated by the spatial nature of splitting methods in (channel-wise splitting) and (checkerboard and channel-wise splitting). In Section 3, we established that our proposed likelihood contribution based factorization of dimensions can be implemented for flow models involving a multi-scale architecture, in order to improve their density estimation and qualitative performance. In this section we present the detailed of proposed LCMA adopted for the flow model of RealNVP and quantitative comparisons with Glow and i-ResNet . For direct comparison, all the experimental settings such as data pre-processing, optimizer parameters as well as flow architectural details (coupling layers, residual blocks) are kept the same, except that the factorization of dimensions at each flow layer is performed according to the methods described in Section 3. 
For ease of access, we also have summarized the experimental details in Appendix A. For RealNVP, we perform experiments on four benchmarked image datasets: CIFAR-10 , Imagenet (downsampled to 32 × 32 and 64 × 64), and CelebFaces Attributes (CelebA) . The scaling in LCMA is performed once for CIFAR-10, thrice for Imagenet 32 × 32 and 4 times for Imagenet 64 × 64 and CelebA. We compare LCMA with conventional RealNVP and report the quantitative and qualitative . For Glow and i-ResNet with LCMA, we perform experiments on CIFAR-10 and present improvements over baseline bits/dim. We also perform an ablation studies for LCMA vs. other possible dimension splitting options. The bits/dim scores of RealNVP with conventional multi-scale architecture (as introduced in) and RealNVP with LCMA are given in Table 1. It can be observed that the density estimation using LCMA is in all cases better in comparison to the baseline. We observed that the improvement for CelebA is relatively high as compared to natural image datasets. This observation was expected as facial features often contain high redundancy and the flow model learns to put more importance (reflected in terms of high log-det) on selected dimensions that define the facial features. Our proposed LCMA exposes such dimensions to more flow layers, making them more expressive and hence the significant improvement in code length (bits/dim) is observed. The improvement in bits/dim is less for natural image datasets because of the high variance among features defining them, which has been the challenge with image compression algorithms. Note that the improvement in density estimation is always relative to the original flow architecture (RealNVP in our case) over which we use our proposed LCMA, as we do not alter any architecture other than the dimension factorization method. The quantitative of LCMA implementation for RealNVP, Glow and i-ResNet with CIFAR-10 dataset is summarized in Table 2. The density estimation scores for flows with LCMA outperform the same flow with conventional multi-scale architectures. ImageNet 64x64 RealNVP An ideal dimension factorization method should capture the local variance over series of flow layers, which helps in qualitative sampling. For LCMA implementation, we introduced local max and min pooling operations on log-det heuristic to decide which dimensions to be gaussianized early (Section 3). We performed ablation studies to compare LCMA with other methods for dimension factorization in a multi-scale architecture. We consider 4 variants for our study, namely fixed random permutation (Case 1), multiscale architecture with early gaussianization of high log-det dimensions (Case 2), factorization method with checker-board and channel splitting as introduced in RealNVP (Case 3) and multiscale architecture with early gaussianization of low log-det dimensions, which is our proposed LCMA (Case 4). In fixed random permutation, we randomly partition the tensor into two halves, with no regard to the spatiality or log-det score. In case 2, we do the reverse of LCMA, and gaussianize the high log-det variables early. The bits/dim score and generated samples for each of the method are given in Table 3. As expected from an information theoretic perspective, gaussianizing high log-det variables early provides the worst density estimation, as the model could not capture the high amount of important information. 
Comparing the same with fixed random permutation, the latter has better score as the probability of a high log-det variable being gaussianized early reduces to half, and it gets further reduced with RealNVP due to channel-wise and checkerboard splitting. LCMA has the best score among all methods, as the variables carrying more information are exposed to more flow layers. Fixed random permutation has the worst quality of sampled images, as the spatiality is lost during factorization. The sample quality improves for Case 2 and RealNVP. The sampled images are perceptually best for LCMA. Summarizing, LCMA outperforms multi-scale architectures based on other factorization methods, as it improves density estimation and generates qualitative samples. We proposed a novel multi-scale architecture for generative flows which employs a data-dependent splitting based the individual contribution of dimensions to the total log-likelihood. Implementations of the proposed method for several state-of-the-art flow models such as RealNVP , Glow and i-ResNet were presented. Empirical studies conducted on benchmark image datasets validate the strength of our proposed method, which improves log-likelihood scores and is able to generate qualitative samples. Ablation study confirm the power of LCMA over other options for dimension factorization. For direct comparison with , data pre-processing, optimizer parameters as well as flow architectural details (coupling layers, residual blocks) are kept the same, except that the factorization of dimensions at each flow layer is performed according to the method described in Section 3. In this section, for the ease of access, we summarize the experimental settings. We perform experiments on four benchmarked image datasets: CIFAR-10 , Imagenet (downsampled to 32 × 32 and 64 × 64), and CelebFaces Attributes (CelebA) . Pre-processing: For CelebA, we take a central crop of 148 × 148 then resize it to 64 × 64. Flow model architecture: We use affine coupling layers as introduced . A layer of flow is defined as 3 coupling layers with checkerboard splits at s × s resolution, 3 coupling layers with channel splits at s/2 × s/2 resolution, where s is the resolution at the input of that layer. For datasets having resolution 32, we use 3 such layers and for those having resolution 64, we use 4 layers. The cascade connection of the layers is followed by 4 coupling layers with checkerboard splits at the final resolution, marking the end of flow composition. For CIFAR-10, each coupling layer uses 8 residual blocks. Other datasets having images of size 32 × 32 use 4 residual blocks whereas 64 × 64 ones use 2 residual blocks. More details on architectures will be given in a source code release.
Data-dependent factorization of dimensions in a multi-scale architecture based on contribution to the total log-likelihood
1,719
scitldr
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures. While DNNs have had a large impact on a variety of different tasks BID10 BID8 BID12 BID21 BID28, explaining their predictions is still challenging. The lack of tools to inspect the behavior of these black-box models makes DNNs less trustable for those domains where interpretability and reliability are crucial, like autonomous driving, medical applications and finance. In this work, we study the problem of assigning an attribution value, sometimes also called "relevance" or "contribution", to each input feature of a network. More formally, consider a DNN that takes an input x = [x 1, ..., x N] ∈ R N and produces an output S(x) = [S 1 (x),..., S C (x)], where C is the total number of output neurons. Given a specific target neuron c, the goal of an attribution method is to determine the contribution R c = [R c 1, ..., R c N] ∈ R N of each input feature x i to the output S c. For a classification task, the target neuron of interest is usually the output neuron associated with the correct class for a given sample. When the attributions of all input features are arranged together to have the same shape of the input sample we talk about attribution maps FIG0, which are usually displayed as heatmaps where red color indicates features that contribute positively to the activation of the target output, and blue color indicates features that have a suppressing effect on it. The problem of finding attributions for deep networks has been tackled in several previous works BID22 BID30 BID24 BID2 BID20 BID25 BID31. Unfortunately, due to slightly different problem formulations, lack of compatibility with the variety of existing DNN architectures and no common benchmark, a comprehensive comparison is not available. Various new attribution methods have been published in the last few years but we believe a better theoretical understanding of their properties is fundamental. The contribution of this work is twofold:1. We prove that -LRP BID2 and DeepLIFT (Rescale) BID20 can be reformulated as computing backpropagation for a modified gradient function (Section 3). This allows the construction of a unified framework that comprises several gradient-based attribution methods, which reveals how these methods are strongly related, if not equivalent under certain conditions. We also show how this formulation enables a more convenient implementation with modern graph computational libraries.2. 
We introduce the definition of Sensitivity-n, which generalizes the properties of Completeness BID25 and Summation to Delta BID20 and we compare several methods against this metric on widely adopted datasets and architectures. We show how empirical support our theoretical findings and propose directions for the usage of the attribution methods analyzed (Section 4). Perturbation-based methods directly compute the attribution of an input feature (or set of features) by removing, masking or altering them, and running a forward pass on the new input, measuring the difference with the original output. This technique has been applied to Convolutional Neural Networks (CNNs) in the domain of image classification BID30, visualizing the probability of the correct class as a function of the position of a grey patch occluding part of the image. While perturbation-based methods allow a direct estimation of the marginal effect of a feature, they tend to be very slow as the number of features to test grows (ie. up to hours for a single image BID31). What is more, given the nonlinear nature of DNNs, the is strongly influenced by the number of features that are removed altogether at each iteration (Figure 1).In the remainder of the paper, we will consider the occluding method by BID30 as a comparison benchmark for perturbation-based methods. We will use this method, referred to as Occlusion-1, replacing one feature x i at the time with a zero baseline and measuring the effect of this perturbation on the target output, ie. DISPLAYFORM0 ) where we use x [xi=v] to indicate a sample x ∈ R N whose i-th component has been replaced with v. The choice of zero as a baseline is consistent with the related literature and further discussed in Appendix B.Original (label: "garter snake") Occlusion-1 Occlusion-5x5 Occlusion-10x10 Occlusion-15x15Figure 1: Attributions generated by occluding portions of the input image with squared grey patches of different sizes. Notice how the size of the patches influence the , with focus on the main subject only when using bigger patches. Backpropagation-based methods compute the attributions for all input features in a single forward and backward pass through the network 1. While these methods are generally faster then perturbationbased methods, their outcome can hardly be directly related to a variation of the output. Gradient * Input BID19 was at first proposed as a technique to improve the sharpness of the attribution maps. The attribution is computed taking the (signed) partial derivatives of the output with respect to the input and multiplying them with the input itself. Refer to TAB0 for the mathematical definition. Integrated Gradients BID25, similarly to Gradient * Input, computes the partial derivatives of the output with respect to each input feature. However, while Gradient * Input computes a single derivative, evaluated at the provided input x, Integrated Gradients computes the average gradient while the input varies along a linear path from a baselinex to x. The baseline is defined by the user and often chosen to be zero. We report the mathematical definition in TAB0.Integrated Gradients satisfies a notable property: the attributions sum up to the target output minus the target output evaluated at the baseline. Mathematically, DISPLAYFORM0 In related literature, this property has been variously called Completeness BID25, Summation to Delta BID20 or Efficiency in the context of cooperative game theory BID15, and often recognized as desirable for attribution methods. 
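A minimal sketch of Occlusion-1 and of a Riemann-sum approximation of Integrated Gradients is given below; the toy model and its analytic gradient stand in for a real network and its autodiff gradients, and a zero baseline is assumed:

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])
def model(x):                        # toy scalar model standing in for the target neuron S_c
    return np.tanh(w @ x)
def grad(x):                         # analytic gradient of the toy model
    return (1.0 - np.tanh(w @ x) ** 2) * w

def occlusion_1(f, x, baseline=0.0):
    """R_i = S_c(x) - S_c(x with feature i replaced by the baseline)."""
    out = f(x)
    attr = np.zeros_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] = baseline
        attr[i] = out - f(x_pert)
    return attr

def integrated_gradients(f_grad, x, baseline=None, steps=50):
    """Riemann-sum approximation of Integrated Gradients from a baseline to x.
    The attributions approximately sum to S_c(x) - S_c(baseline)."""
    baseline = np.zeros_like(x) if baseline is None else baseline
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean([f_grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

x = np.array([0.5, 0.3, -1.0])
print(occlusion_1(model, x))
print(integrated_gradients(grad, x))
```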
Layer-wise Relevance Propagation (LRP) BID2 is computed with a backward pass on the network. Let us consider a quantity r (l) i, called "relevance" of unit i of layer l. The algorithm starts at the output layer L and assigns the relevance of the target neuron c equal to the output of the neuron itself and the relevance of all other neurons to zero (Eq. 1).The algorithm proceeds layer by layer, redistributing the prediction score S i until the input layer is reached. One recursive rule for the redistribution of a layer's relevance to the following layer is the -rule described in Eq. 2, where we defined z ji = w DISPLAYFORM1 i to be the weighted activation of a neuron i onto neuron j in the next layer and b j the additive bias of unit j. A small quantity is added to the denominator of Equation 2 to avoid numerical instabilities. Once reached the input layer, the final attributions are defined as DISPLAYFORM2 LRP together with the propagation rule described in Eq. 2 is called -LRP, analyzed in the remainder of this paper. There exist alternative stabilizing methods described in BID2 and which we do not consider here. DeepLIFT BID20 proceeds in a backward fashion, similarly to LRP. Each unit i is assigned an attribution that represents the relative effect of the unit activated at the original network input x compared to the activation at some reference inputx (Eq. 3). Reference valuesz ji for all hidden units are determined running a forward pass through the network, using the baselinex as input, and recording the activation of each unit. As in LRP, the baseline is often chosen to be zero. The relevance propagation is described in Eq. 4. The attributions at the input layer are defined as DISPLAYFORM3 In Equation 4,z ji = w DISPLAYFORM4 i is the weighted activation of a neuron i onto neuron j when the baselinex is fed into the network. As for Integrated Gradients, DeepLIFT was designed to satisfy Completeness. The rule described in Eq. 4 ("Rescale rule") is used in the original formulation of the method and it is the one we will analyze in the remainder of the paper. The "Reveal-Cancel" rule BID20 is not considered here.: Attribution generated by applying several attribution methods to an Inception V3 network for natural image classification BID27. Notice how all gradient-based methods produce attributions affected by higher local variance compared to perturbation-based methods (Figure 1).Other back-propagation methods exist. Saliency maps BID22 constructs attributions by taking the absolute value of the partial derivative of the target output S c with respect to the input features x i. Intuitively, the absolute value of the gradient indicates those input features (pixels, for image classification) that can be perturbed the least in order for the target output to change the most. However, the absolute value prevents the detection of positive and negative evidence that might be present in the input, reason for which this method will not be used for comparison in the remainder of the paper. Similarly, Deep Taylor Decomposition, although showed to produce sparser explanations, assumes no negative evidence in the input and produces only positive attribution maps. We show in Section 4 that this assumption does not hold for our tasks. Other methods that are designed only for specific architectures (ie. 3 A UNIFIED FRAMEWORK Gradient * Input and Integrated Gradients are, by definition, computed as a function of the partial derivatives of the target output with respect to each input feature. 
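Before moving to the unified view, the ε-rule of Equation 2 can be sketched for a single fully-connected layer as follows; a complete implementation applies this step repeatedly from the output layer back to the input, and the variable names are ours:

```python
import numpy as np

def lrp_epsilon_dense(x, W, b, relevance_out, eps=1e-6):
    """Backward epsilon-LRP step through a dense layer z_j = sum_i w_ji x_i + b_j.

    relevance_out holds the relevance of the layer's output units; the function
    redistributes it to the inputs in proportion to the weighted activations z_ji.
    """
    z_ji = W * x[np.newaxis, :]                      # weighted activation of input i onto unit j
    z_j = z_ji.sum(axis=1) + b                       # pre-activation of unit j
    denom = z_j + eps * np.sign(z_j)                 # stabilized denominator
    return (z_ji / denom[:, np.newaxis] * relevance_out[:, np.newaxis]).sum(axis=0)

rng = np.random.default_rng(0)
x, W, b = rng.normal(size=4), rng.normal(size=(3, 4)), rng.normal(size=3)
r_out = np.maximum(W @ x + b, 0.0)                   # relevance initialized with the layer output
print(lrp_epsilon_dense(x, W, b, r_out))
```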
In this section, we will show that -LRP and DeepLIFT can also be computed by applying the chain rule for gradients, if the instant gradient at each nonlinearity is replaced with a function that depends on the method. In a DNN where each layer performs a linear transformation z j = i w ji x i + b j followed by a nonlinear mapping x j = f (z j), a path connecting any two units consists of a sequence of such operations. The chain rule along a single path is therefore the product of the partial derivatives of all linear and nonlinear transformations along the path. For two units i and j in subsequent layers we have ∂x j /∂x i = w ji · f (z j), whereas for any two generic units i and c connected by a set of paths P ic the partial derivative is sum of the product of all weights w p and all derivatives of the nonlinearities f (z) p along each path p ∈ P ic. We introduce a notation to indicate a modified chain-rule, where the derivative of the nonlinearities f is replaced by a generic function g: DISPLAYFORM5 When g = f this is the definition of partial derivative of the output of unit c with respect to unit i, computed as the sum of contributions over all paths connecting the two units. Given that a zero weight can be used for non-existing or blocked paths, this is valid for any architecture that involves fully-connected, convolutional or recurrent layers without multiplicative units, as well as for pooling operations. Proposition 1. -LRP is equivalent the feature-wise product of the input and the modified partial derivative DISPLAYFORM6 e. the ratio between the output and the input at each nonlinearity. Proposition 2. DeepLIFT (Rescale) is equivalent to the feature-wise product of the x −x and the modified partial derivative DISPLAYFORM7 the ratio between the difference in output and the difference in input at each nonlinearity, for a network provided with some input x and some baseline inputx defined by the user. The proof for Proposition 1 and 2 are provided in Appendix A.1 and Appendix A.2 respectively. Given these , we can write all methods with a consistent notation. TAB0 summaries the four methods considered and shows examples of attribution maps generated by these methods on MNIST. As pointed out by BID25 a desirable property for attribution methods is their immediate applicability to existing models. Our formulation makes this possible for -LRP and DeepLIFT. Since all modern frameworks for graph computation, like the popular TensorFlow BID0, implement backpropagation for efficient computation of the chain rule, it is possible to implement all methods above by the gradient of the graph nonlinearities, with no need to implement custom layers or operations. Listing 1 shows an example of how to achieve this on Tensorflow. Listing 1: Example of gradient override for a Tensorflow operation. After registering this function as the gradient for nonlinear activation functions, a call to tf.gradients and the multiplication with the input will produce the -LRP attributions. DISPLAYFORM8 The formulation of TAB0 facilitates the comparison between these methods. Motivated by the fact that attribution maps for different gradient-based methods look surprisingly similar on several tasks, we investigate some conditions of equivalence or approximation. Proposition 3. 
-LRP is equivalent to i) Gradient * Input if only Rectified Linear Units (ReLUs) are used as nonlinearities; ii) DeepLIFT (computed with a zero baseline) if applied to a network with no additive biases and with nonlinearities f such that f = 0 (eg. ReLU or Tanh).The first part of Proposition 3 comes directly as a corollary of Proposition 1 by noticing that for ReLUs the gradient at the nonlinearity f is equal to g LRP for all inputs. This relation has been previously proven by BID19 and BID5. Similarly, we notice that, in a network with no additive biases and nonlinearities that cross the origin, the propagation of the baseline produces a zero reference value for all hidden units (ie. ∀i : DISPLAYFORM0 DL, which proves the second part of the proposition. Notice that g LRP (z) = (f (z) − 0)/(z − 0) which, in the case of ReLU and Tanh, is the average gradient of the nonlinearity in [0, z]. It also easy to see that lim z→0 g LRP (z) = f, which explain why g can not assume arbitrarily large values as z → 0, even without stabilizers. On the contrary, if the discussed condition on the nonlinearity is not satisfied, for example with Sigmoid or Softplus, we found empirically that -LRP fails to produce meaningful attributions as shown in the empirical comparison of Section 4. We speculate this is due to the fact g LRP (z) can become extremely large for small values of z, being its upper-bound only limited by the stabilizer. This causes attribution values to concentrate on a few features as shown in TAB0. Notice also that the interpretation of g LRP as average gradient of the nonlinearity does not hold in this case, which explains why -LRP diverges from other methods 2.DeepLIFT and Integrated Gradients are related as well. While Integrated Gradients computes the average partial derivative of each feature as the input varies from a baseline to its final value, DeepLIFT approximates this quantity in a single step by replacing the gradient at each nonlinearity with its average gradient. Although the chain rule does not hold in general for average gradients, we show empirically in Section 4 that DeepLIFT is most often a good approximation of Integrated Gradients. This holds for various tasks, especially when employing simple models (see FIG5 . However, we found that DeepLIFT diverges from Integrated Gradients and fails to produce meaningful when applied to Recurrent Neural Networks (RNNs) with multiplicative interactions (eg. gates in LSTM units BID4). With multiplicative interactions, DeepLIFT does not satisfy Completeness, which can be illustrated with a simple example. Take two variables x 1 and x 2 and a the function h(x 1, x 2) = ReLU (x 1 − 1) · ReLU (x 2). It can be easily shown that, by applying the methods as described by TAB0, DeepLIFT does not satisfy Completeness, one of its fundamental design properties, while Integrated gradients does. The formulation in TAB0 highlights how all the gradient-based methods considered are computed from a quantity that depends on the weights and the architecture of the model, multiplied by the input itself. Similarly, Occlusion-1 can also be interpreted as the input multiplied by the average value of the partial derivatives, computed varying one feature at the time between zero and their final value: DISPLAYFORM0 The reason justifying the multiplication with the input has been only partially discussed in previous literature BID23 BID25 BID19. 
In many cases, it contributes to making attribution maps sharper, although it remains unclear how much of this can be attributed to the sharpness of the original image itself. We argue the multiplication with the input has a more fundamental justification, which allows us to distinguish attribution methods into two broad categories: global attribution methods, which describe the marginal effect of a feature on the output with respect to a baseline; and local attribution methods, which describe how the output of the network changes for infinitesimally small perturbations around the original input. For a concrete example, we will consider the linear case. Imagine a linear model to predict the total capital in ten years C, based on two investments x1 and x2: C = 1.05 · x1 + 10 · x2. Given this simple model, R1 = ∂C/∂x1 = 1.05, R2 = ∂C/∂x2 = 10 represents a possible local attribution. With no information about the actual value of x1 and x2 we can still answer the question "Where should one invest in order to generate more capital?". The local attributions reveal, in fact, that by investing in x2 we will get about ten times more return than investing in x1. Notice, however, that this does not tell anything about the contribution to the total capital for a specific scenario. Assume x1 = 100 000$ and x2 = 1 000$. In this scenario C = 115 000$. We might ask ourselves "How did the initial investments contribute to the final capital?". In this case, we are looking for a global attribution. The most natural solution would be R1 = 1.05 · x1 = 105 000$, R2 = 10 · x2 = 10 000$, assuming a zero baseline. In this case the attribution for x1 is larger than that for x2, an opposite ranking with respect to the results of the local model (the snippet below reproduces this example). Notice that we used nothing but Gradient * Input as a global attribution method which, in the linear case, is equivalent to all other methods analyzed above. The methods listed in TAB0 are examples of global attribution methods. Although local attribution methods are not further discussed here, we can mention Saliency maps BID22 as an example. In fact, prior work showed that Saliency maps can be seen as the first-order term of a Taylor decomposition of the function implemented by the network, computed at a point infinitesimally close to the actual input. Finally, we notice that global and local attributions accomplish two different tasks, which only converge when the model is linear. Local attributions aim to explain how the input should be changed in order to obtain a desired variation of the output. One practical application is the generation of adversarial perturbations, where genuine input samples are minimally perturbed to cause a disruptive change in the output BID26. On the contrary, global attributions should be used to identify the marginal effect that the presence of a feature has on the output, which is usually desirable from an explanation method. Attribution methods are hard to evaluate empirically because it is difficult to distinguish errors of the model from errors of the attribution method explaining the model BID25. For this reason the final evaluation is often qualitative, based on the inspection of the produced attribution maps. We argue, however, that this introduces a strong bias in the evaluation: as humans, one would judge more favorably methods that produce explanations closer to one's own expectations, at the cost of penalizing those methods that might more closely reflect the network behavior.
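Returning to the investment example, a couple of lines of plain Python make the two rankings explicit (values as in the text; a zero baseline is assumed):

```python
w = [1.05, 10.0]                       # C = 1.05 * x1 + 10 * x2
x = [100_000.0, 1_000.0]               # actual investments

local_attr = w                                        # gradient: where to invest next
global_attr = [wi * xi for wi, xi in zip(w, x)]       # Gradient * Input, zero baseline

print(local_attr)    # [1.05, 10.0]        -> x2 looks ten times more important locally
print(global_attr)   # [105000.0, 10000.0] -> x1 dominates the realized capital C = 115000
```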
In order to develop better quantitative tools for the evaluation of attribution methods, we first need to define the goal that an ideal attribution method should achieve, as different methods might be suitable for different tasks (Subsection 3.2).Consider the attribution maps on MNIST produced by a CNN that uses Sigmoid nonlinearities (FIG3 . Integrated Gradients assigns high attributions to the space in the middle of the image, while Occlusion-1 does not. One might be tempted to declare Integrated Gradients a better attribution method, given that the heatmap is less scattered and that the absence of strokes in the middle of the image might be considered a good clue in favor of a zero digit. In order to evaluate the hypothesis, we apply a variation of the region perturbation method BID17 removing pixels according to the ranking provided by the attribution maps (higher first (+) or lower first (-)). We perform this operation replacing one pixel at the time with a zero value and measuring the variation in the target activation. The in FIG3 show that pixels highlighted by Occlusion-1 initially have a higher impact on the target output, causing a faster variation from the initial value. After removing about 20 pixels or more, Integrated Gradients seems to detect more relevant features, given that the variation in the target output is stronger than for Occlusion-1.This is an example of attribution methods solving two different goals: we argue that while Occlusion-1 is better explaining the role of each feature considered in isolation, Integrated Gradients is better in capturing the effect of multiple features together. It is possible, in fact, that given the presence of several white pixels in the central area, the role of each one alone is not prominent, while the deletion of several of them together causes a drop in the output score. In order to test this assumption systematically, we propose a property called Sensitivity-n. Sensitivity-n. An attribution method satisfies Sensitivity-n when the sum of the attributions for any subset of features of cardinality n is equal to the variation of the output S c caused removing the features in the subset. Mathematically when, for all subsets of features DISPLAYFORM0 When n = N, with N being the total number of input features, we have DISPLAYFORM1, wherex is an input baseline representing an input from which all features have been removed. This is nothing but the definition of Completeness or Summation to Delta, for which Sensitivity-n is a generalization. Notice that Occlusion-1 satisfy Sensitivity-1 by construction, like Integrated Gradients and DeepLIFT satisfy Sensitivity-N (the latter only without multiplicative units for the reasons discussed in Section 3.1). -LRP satisfies Sensitivity-N if the conditions of Proposition 3-(ii) are met. However no methods in TAB0 can satisfy Sensitivity-n for all n:Proposition 4. All attribution methods defined in TAB0 satisfy Sensitivity-n for all values of n if and only if applied to a linear model or a model that behaves linearly for a selected task. In this case, all methods of TAB0 are equivalent. The proof of Proposition 4 is provided in Appendix A.3. Intuitively, if we can only assign a scalar attribution to each feature, there are not enough degrees of freedom to capture nonlinear interactions. 
Besides degenerate cases when DNNs behave as linear systems on a particular dataset, the attribution methods we consider can only provide a partial explanation, sometimes focusing on different aspects, as discussed above for Occlusion-1 and Integrated Gradients. Although no attribution method satisfies Sensitivity-n for all values of n, we can measure how well the sum of the attributions While it is intractable to test all possible subsets of features of cardinality n, we estimate the correlation by randomly sampling one hundred subsets of features from a given input x for different values of n. FIG5 reports the Pearson correlation coefficient (PCC) computed between the sum of the attributions and the variation in the target output varying n from one to about 80% of the total number of features. The PCC is averaged across a thousand of samples from each dataset. The sampling is performed using a uniform probability distribution over the features, given that we assume no prior knowledge on the correlation between them. This allows to apply this evaluation not only to images but to any kind of input. We test all methods in TAB0 on several tasks and different architectures. We use the well-known MNIST dataset BID9 to test how the methods behave with two different architectures (a Multilayer Perceptron (MLP) and a CNN) and four different activation functions. We also test a simple CNN for image classification on CIFAR10 BID7 ) and the more complex Inception V3 architecture BID27 on ImageNet BID16 samples. Finally, we test a model for sentiment classification from text data. For this we use the IMDB dataset BID11, applying both a MLP and an LSTM model. Details about the architectures can be found in Appendix C. Notice that it was not our goal, nor a requirement, to reach the state-of-the-art in these tasks since attribution methods should be applicable to any model. On the contrary, the simple model architecture used for sentiment analysis enables us to show a case where a DNN degenerates into a nearly-linear behavior, showing in practice the effects of Proposition 4. From these we can formulate some considerations:1. Input might contain negative evidence. Since all methods considered produce signed attributions and the correlation is close to one for at least some value of n, we conclude that the input samples can contain negative evidence and that it can be correctly reported. This is further supported by the in FIG3 where the occlusion of negative evidence produces an increase in the target output. On the other hand, on complex models like Inception V3, all gradient-based methods show low accuracy in predicting the attribution sign, leading to heatmaps affected by high-frequency noise FIG0 ).2. Occlusion-1 better identifies the few most important features. This is supported by the fact that Occlusion-1 satisfies Sensitivity-1, as expected, while the correlation decreases monotonically as n increases in all our experiments. For simple models, the correlation remains rather high even for medium-size sets of pixels but Integrated Gradients, DeepLIFT and LRP should be preferred when interested in capturing global nonlinear effects and cross-interactions between different features. Notice also that Occlusion-1 is much slower than gradient-based methods.3. In some cases, like in MNIST-MLP w/ Tanh, Gradient * Input approximates the behavior of Occlusion-1 better than other gradient-based methods. 
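Since exhaustively testing all subsets of cardinality n is intractable, the sampling protocol described above can be summarized by the following sketch: for a given n, one hundred subsets are drawn uniformly at random, and the Pearson correlation is computed between the summed attributions and the corresponding variation of the target output. The code is our reading of that protocol; the helper name and seed are illustrative.

```python
import numpy as np

def sensitivity_n_pcc(model, x, attributions, n, n_subsets=100, baseline=None, seed=0):
    """Estimate how well an attribution method satisfies Sensitivity-n:
    Pearson correlation between summed attributions of random feature subsets
    of size n and the corresponding variation of the target output."""
    if baseline is None:
        baseline = np.zeros_like(x)
    rng = np.random.default_rng(seed)
    attr_sums, output_deltas = [], []
    for _ in range(n_subsets):
        subset = rng.choice(x.size, size=n, replace=False)   # uniform sampling, no prior on feature correlation
        x_removed = x.copy()
        x_removed[subset] = baseline[subset]
        attr_sums.append(attributions[subset].sum())
        output_deltas.append(model(x) - model(x_removed))
    return np.corrcoef(attr_sums, output_deltas)[0, 1]
```

The per-sample correlations can then be averaged over a thousand inputs and plotted against n, as done in the experiments above.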
This suggests that the instant gradient computed by Gradient * Input is feature-wise very close to the average gradient for these models.4. Integrated Gradients and DeepLIFT have very high correlation, suggesting that the latter is a good (and faster) approximation of the former in practice. This does not hold in presence of multiplicative interactions between features (eg. IMDB-LSTM). In these cases the analyzed formulation of DeepLIFT should be avoided for the reasons discussed in Section 3.1.5. -LRP is equivalent to Gradient * Input when all nonlinearities are ReLUs, while it fails when these are Sigmoid or Softplus. When the nonlinearities are such that f = 0, -LRP diverges from other methods, cannot be seen as a discrete gradient approximator and may lead to numerical instabilities for small values of the stabilizer (Section 3.1). It has been shown, however, that adjusting the propagation rule for multiplicative interactions and avoiding critical nonlinearities, -LRP can be applied to LSTM networks, obtaining interesting BID1.Unfortunately, these changes obstacle the formulation as modified chain-rule and make ad-hoc implementation necessary.6. All methods are equivalent when the model behaves linearly. On IMDB (MLP), where we used a very shallow network, all methods are equivalent and the correlation is maximum for almost all values of n. From Proposition 4 we can say that the model approximates a linear behavior (each word contributes to the output independently from the context). In this work, we have analyzed Gradient * Input, -LRP, Integrated Gradients and DeepLIFT (Rescale) from theoretical and practical perspectives. We have shown that these four methods, despite their apparently different formulation, are strongly related, proving conditions of equivalence or approximation between them. Secondly, by reformulating -LRP and DeepLIFT (Rescale), we have shown how these can be implemented as easy as other gradient-based methods. Finally, we have proposed a metric called Sensitivity-n which helps to uncover properties of existing attribution methods but also traces research directions for more general ones. Nonlinear operations. For a nonlinear operation with a single input of the form x i = f (z i) (i.e. any nonlinear activation function), the DeepLIFT multiplier (Sec. 3.5.2 in Shrikumar et al. BID20) is: DISPLAYFORM0 Nonlinear operations with multiple inputs (eg. 2D pooling) are not addressed in BID20. For these, we keep the original operations' gradient unmodified as in the DeepLIFT public implementation. By linear model we refer to a model whose target output can be written as S c (x) = i h i (x i), where all h i are compositions of linear functions. As such, we can write DISPLAYFORM1 for some some a i and b i. If the model is linear only in the restricted domain of a task inputs, the following considerations hold in the domain. We start the proof by showing that, on a linear model, all methods of TAB0 are equivalent. Proof. In the case of Gradient * Input, on a linear model it holds DISPLAYFORM2, being all other derivatives in the summation zero. Since we are considering a linear model, all nonlinearities f are replaced with the identity function and therefore ∀z: g DL (z) = g LRP (z) = f (z) = 1 and the modified chain-rules for LRP and DeepLIFT reduce to the gradient chain-rule. This proves that -LRP and DeepLIFT with a zero baseline are equivalent to Gradient * Input in the linear case. 
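The single-input multiplier mentioned in Appendix A (the extracted equation is not legible) is, as we understand the Rescale rule, the average slope of the nonlinearity between the reference and the actual pre-activation. The small sketch below contrasts it with the instantaneous gradient on a Sigmoid; this is our own illustration under that assumption, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def deeplift_rescale_multiplier(f, z, z_bar):
    """DeepLIFT (Rescale) multiplier for a single-input nonlinearity x = f(z):
    the average slope of f between the reference z_bar and the actual input z."""
    return (f(z) - f(z_bar)) / (z - z_bar)

def instant_gradient(f, z, eps=1e-5):
    return (f(z + eps) - f(z - eps)) / (2 * eps)

z, z_bar = 3.0, 0.0   # actual and reference pre-activations (reference induced by a zero baseline)
print(deeplift_rescale_multiplier(sigmoid, z, z_bar))  # ~0.151: average slope over [0, 3]
print(instant_gradient(sigmoid, z))                    # ~0.045: local slope at z = 3
# The gap between the two values is what separates DeepLIFT from Gradient*Input on
# saturating nonlinearities; on a linear model the two multipliers coincide.
```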
For Integrated Gradients the gradient term is constant and can be taken out of the integral: DISPLAYFORM3, which completes the proof the proof of equivalence for the methods in TAB0 in the linear case. If we now consider any subset of n features x S ⊆ x, we have for Occlusion-1: DISPLAYFORM4 where the last equality holds because of the definition of linear model (Equation 9). This shows that Occlusion-1, and therefore all other equivalent methods, satisfy Sensitivity-n for all n if the model is linear. If, on the contrary, the model is not linear, there must exists two features x i and x j such that DISPLAYFORM5. In this case, either Sensitivity-1 or Sensitivity-2 must be violated since all methods assign a single attribution value to x i and x j. In general, a non-zero attribution for a feature implies the feature is expected to play a role in the output of the model. As pointed out by BID25, humans also assign blame to a cause by comparing the outcomes of a process including or not such cause. However, this requires the ability to test a process with and without a specific feature, which is problematic with current neural network architectures that do not allow to explicitly remove a feature without retraining. The usual approach to simulate the absence of a feature consists of defining a baseline x, for example the black image or the zero input, that will represent absence of information. Notice, however, that the baseline must necessarily be chosen in the domain of the input space and this creates inherently an ambiguity between a valid input that incidentally assumes the baseline value and the placeholder for a missing feature. On some domains, it is also possible to marginalize over the features to be removed in order to simulate their absence. BID31 showed how local coherence of images can be exploited to marginalize over image patches. Unfortunately, this approach is extremely slow and only provide marginal improvements over a pre-defined baseline. What is more, it can only be applied to images, where contiguous features have a strong correlation, hence our decision to use the method by BID30 as our benchmark instead. When a baseline value has to be defined, zero is the canonical choice BID25 BID30 BID20. Notice that Gradient * Input and LRP can also be interpreted as using a zero baseline implicitly. One possible justification relies on the observation that in network that implements a chain of operations of the form z j = f (i (w ji · z i) + b j ), the all-zero input is somehow neutral to the output (ie. ∀c ∈ C : S c ≈ 0). In fact, if all additive biases b j in the network are zero and we only allow nonlinearities that cross the origin, the output for a zero input is exactly zero for all classes. Empirically, the output is often near zero even when biases have different values, which makes the choice of zero for the baseline reasonable, although arbitrary. C EXPERIMENTS SETUP C.1 MNIST The MNIST dataset was pre-processed to normalize the input images between -1 () and 1 (digit stroke). We trained both a DNN and a CNN, using four activation functions in order to test how attribution methods generalize to different architectures. The lists of layers for the two architectures are listed below. The activations functions are defined as ReLU (x) = max(0, x), T anh(x) = sinh(x)/cosh(x), Sigmoid(x) = 1/(1 + e −x) and Sof tplus(x) = ln(1 + e x) and have been applied to the output of the layers marked with † in the tables below. 
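The claim that Integrated Gradients reduces to Gradient * Input on a linear model is easy to verify numerically with the usual Riemann-sum approximation of the path integral. The sketch below assumes `grad_fn` returns the gradient of the target score at a given point; it is an illustration of the standard approximation, not the exact implementation used in the experiments.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline=None, steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    (x - x_bar) * mean over alpha in (0, 1] of grad S_c(x_bar + alpha * (x - x_bar))."""
    if baseline is None:
        baseline = np.zeros_like(x)              # canonical zero baseline
    alphas = (np.arange(steps) + 1) / steps
    avg_grad = np.mean([grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

# Linear model: the gradient is constant along the path, so IG equals Gradient*Input.
w = np.array([1.05, 10.0])
grad_fn = lambda x: w
x = np.array([100_000.0, 1_000.0])
print(integrated_gradients(grad_fn, x))          # [105000. 10000.] == w * x
```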
The networks were trained using Adadelta BID29 and early stopping. We also report the final test accuracy. The CIFAR-10 dataset BID7 ) was pre-processed to normalized the input images in range [-1; 1]. As for MNIST, we trained a CNN architecture using Adadelta and early stopping. For this dataset we only used the ReLU nonlinearity, reaching a final test accuracy of 80.5%. For gradient-based methods, the attribution of each pixel was computed summing up the attribution of the 3 color channels. Similarly, Occlusion-1 was performed setting all color channels at zero at the same time for each pixel being tested. We used a pre-trained Inception V3 network. The details of this architecture can be found in BID27. We used a test dataset of 1000 ImageNet-compatible images, normalized in [-1; 1] that was classified with 95.9% accuracy. When computing attributions, the color channels were handled as for CIFAR-10. We trained both a shallow MLP and an LSTM network on the IMDB dataset for sentiment analysis. For both architectures, we trained a small embedding layer considering only the 5000 most frequent words in the dataset. We also limited the maximum length of each review to 500 words, padding shorter ones when necessary. We used ReLU nonlinearities for the hidden layers and trained using Adam BID6 and early stopping. The final test accuracy is 87.3% on both architectures. For gradient-based methods, the attribution of each word was computed summing up the attributions over the embedding vector components corresponding to the word. Similarly, Occlusion-1 was performed setting all components of the embedding vector at zero for each word to be tested. Dense FORMULA3 Dense IMDB LSTM Embedding (5000x32)LSTM FORMULA4 Dense
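The per-pixel and per-word occlusion protocol described above (zeroing all color channels of a pixel, or all embedding components of a word, one position at a time) can be written compactly as follows. This is our reading of the protocol; `model` is any callable returning class scores, and the image layout H x W x C is assumed.

```python
import numpy as np

def occlusion_1_image(model, image, target_class):
    """Occlusion-1 over pixels: zero out all color channels of one pixel at a time
    and record the drop in the target score. image: array of shape (H, W, C)."""
    base_score = model(image)[target_class]
    attributions = np.zeros(image.shape[:2])
    for h in range(image.shape[0]):
        for w in range(image.shape[1]):
            occluded = image.copy()
            occluded[h, w, :] = 0.0                      # all channels of this pixel at once
            attributions[h, w] = base_score - model(occluded)[target_class]
    return attributions

def pixel_attribution_from_gradients(channel_attributions):
    """For gradient-based methods, collapse channel-wise attributions (H, W, C)
    to a single value per pixel by summing over channels, as described above."""
    return channel_attributions.sum(axis=-1)
```

The same pattern applies to text, with the loop running over word positions and the zeroing applied to the embedding vector of each word.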
Four existing backpropagation-based attribution methods are fundamentally similar. How can we assess this?
1,720
scitldr
We study SGD and Adam for estimating a rank one signal planted in matrix or tensor noise. The extreme simplicity of the problem setup allows us to isolate the effects of various factors: signal to noise ratio, density of critical points, stochasticity and initialization. We observe a surprising phenomenon: Adam seems to get stuck in local minima as soon as polynomially many critical points appear (matrix case), while SGD escapes those. However, when the number of critical points degenerates to exponentials (tensor case), then both algorithms get trapped. Theory tells us that at fixed SNR the problem becomes intractable for large $d$ and in our experiments SGD does not escape this. We exhibit the benefits of warm starting in those situations. We conclude that in this class of problems, warm starting cannot be replaced by stochasticity in gradients to find the basin of attraction. Reductionism consists of breaking down the study of complex systems and phenomena into their atomic components. While the use of stochastic gradient based algorithms has shown tremendous success at minimizing complicated loss functions arising in deep learning, our understanding of why, when and how this happens is still limited. Statements such as stochastic gradients escape from isolated critical points along the road to the best basin of attraction, or SGD generalizes better because it does not get stuck in steep local minima still need to be better understood. Can we prove or replicate these phenomena in the simplest instances of the problem? We study the behavior of stochastic gradient descent (SGD) BID11 and an adaptive variant (Adam) BID8 under a class of well studied non-convex problems. The single spiked models were originally designed for studying principal component analysis on matrices BID12 BID3 BID5 and have also been extended to higher order tensors BID10. Adaptive stochastic optimization methods have been gaining popularity in the deep learning community thanks to fast training on some benchmarks. However, it has been observed that despite reaching a low value of the loss function, the solutions found by Adam do not generalize as well as SGD solutions do. An assumption, widely spread and adopted in the community, has been that SGD's randomness helps escaping local critical points [WRS + 17]. While the problem has been thoroughly studied theoretically [MR14, HSS15, HSSS16, BAGJ18], our contribution is to propose experimenting with this simple model to challenge claims such as those on randomized gradient algorithms in this very simple setup. It is noteworthy that the landscape of non-global critical points of these toy datasets are studied BID0 BID2 BID1 and formally linked to the neural nets empirical loss functions BID2 BID9. For this problem, the statistical properties of the optimizers are well understood, and in the more challenging tensor situation, also the impact of (spectral) warm start has been discussed BID10. We will examine the solutions found by SGD and Adam and compare them with spectral and power methods. 
This allows to empirically elucidate the existence of multiple regimes: the strong signal regime where all first order methods seem to find good solutions when polynomially many critical points appear, in the matrix case, SGD converges while Adam gets trapped, unless if initialized in the basin of attraction in the presence of exponentially many critical points (the tensor case), all algorithms fail, unless if d is moderately small and the SNR large enough to allow for proper initialization.2 Single spiked models, and stochastic gradients Even though proving strong about non-convex loss functions is in general challenging, a class of nonconvex statistical problems is very well studied and relatively well understood. Principal component analysis (PCA) or finding the leading eigenvector of a covariance matrix is a problem of interest in statistics and machine learning. The proof of convergence of power method to the leading principal component and the geometry of critical points of the maximum likelihood problem maximize u, Au s.t. u 2 = 1, Rayleigh quotient for matrix PCA are well established using eigenvalue decomposition. In addition, more recently, a class of extremely simplified models have shed light on the phase transitions of the problem difficulty as a function of the signal-to-noise ratio in the model. The so-called single spiked models consist of considering a symmetric normalized noise matrix to which a rank one signal is added. DISPLAYFORM0 It is known BID3 that the spectrum of the noise matrix asymptotically forms a semi-circle situated between −2 and 2. When the signal to noise ratio is weak λ ∈ then the signal dilutes in noise, while the leading principal component pops out of the semi-circle as soon as the signal to noise ratio λ is above the critical value λ > λ c = 1, in which case the solution of the problem forms asymptotically a cosine value of 1 − λ −2 1/2 with the signal and the optimal value of the Rayleigh quotient is λ + λ −1 BID12 BID3 BID5. It is proven that the power method allows to obtain the solution after logarithmically many steps, as a function of the problem dimension d. We will minimize the unconstrained objective function (DISPLAYFORM1 We set the value of γ to the theoretical asymptotic value of the leading eigenvalue, γ = 2 for λ < 1 or λ + λ −1 for larger λ, and will add random normal noise to the gradient for stochasticity: ∇ σ (u) = −Au + γu + σz where z i ∼ N (0, 1/d). This function has a constant Hessian H (u) = −λu 0 u 0 T − Z + γI d which is positive semi-definite as soon as γ is equal or larger than the value of the leading eigenvalue of A. The tensor version of the problem (see BID10 for notations and more discussion on problem setting) DISPLAYFORM0 u 2 = 1 Rayleigh quotient tensor PCA under the tensor single spiked model defined for a symmetric (π is a permutation of 3 elements) How does data abundance explain the success of first order methods? On the positive side, we discuss that in this model and considering a large dataset of i.id. samples, weak signals in individual observations accumulate and allow to solve the problem if n √ d. The counter part to the strong requirement λ d 1/4 (conjectured in BID10 and proven in BID6 BID7) is that accumulation of observations compensate low signal to noise ratio in each individual sample. Formally, Remark 2.1. Assume n sample of data according to model "Single spiked tensor", with the same signal u 0 and different i.i.d. 
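The matrix experiment described above can be reproduced in a few lines: draw a single spiked matrix, set γ to its theoretical asymptotic top eigenvalue, and run noisy gradient descent. The exact unconstrained objective is garbled in the extraction, so the sketch below simply follows the stated gradient ∇σ(u) = −Au + γu + σz and renormalizes the iterate to the unit sphere at every step for simplicity; the learning rate and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 200, 2.0                       # dimension and SNR (lambda > 1: easy regime)

u0 = rng.standard_normal(d); u0 /= np.linalg.norm(u0)
G = rng.standard_normal((d, d)) / np.sqrt(d)
Z = (G + G.T) / np.sqrt(2)              # Wigner noise, spectrum ~ semicircle on [-2, 2]
A = lam * np.outer(u0, u0) + Z          # single spiked matrix model

gamma = (lam + 1.0 / lam) if lam >= 1 else 2.0   # asymptotic value of the leading eigenvalue
sigma, lr = 0.1, 0.05                   # gradient stochasticity and learning rate (illustrative)

u = rng.standard_normal(d); u /= np.linalg.norm(u)
for t in range(2000):
    grad = -A @ u + gamma * u + sigma * rng.standard_normal(d) / np.sqrt(d)   # z_i ~ N(0, 1/d)
    u = u - lr * grad
    u /= np.linalg.norm(u)              # keep the iterate on the unit sphere

print("cosine with signal:", abs(u @ u0))   # ~ sqrt(1 - lam**-2) ~= 0.87 for lam = 2
print("Rayleigh quotient :", u @ A @ u)     # ~ lam + 1/lam = 2.5
```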
noises Z q are observed: DISPLAYFORM1 DISPLAYFORM2 There exists constants c 0, c 1 such that if n ≥ c 0 √ d, then, warm started power iteration produces a vector u, such that with high probability u 0, u > 1 − c 1 /λ. This is established using Theorems 5 in BID10 and 6.3 in BID7 and considering the average tensorĀ DISPLAYFORM3 SinceZ is symmetric andZ i,j,k ∼ N (0, 1/d), the tensorĀ is sampled from a similar distribution as Single spiked tensor with a SNR λ n = √ nλ. This means that the requirement λ d 1/4 BID7, in the average tensor case, relaxes to n √ d. In words, this means that if we are solving a problem with a tensor PCA complexity, and if the number of i.i.d. observations grows quadratically as a function of the problem dimension, we can compute the solution reliably using spectral warm start, even though the original problem looks intractable. Our numerical report performance of different algorithms at solving simulations of matrix and tensor PCA problems. Under various problem generation parameter choices, we report values of• cosine or u, u 0. This is measures the quality of planted (hidden) signal recovery from the noisy observation. Higher values are preferred, and it cannot exceed 1, since both u 0 and u are normalized. This quantity is to be qualitatively compared with the test error in standard learning problems where the true value of the parameter is not available.• Rayleigh is the value of the log-likelihood objective function that we are maximizing. Higher values are preferred. The theoretical maximum value of this objective is the operator norm of the observed tensor A. This is comparable with (minus) the training loss. The signal to noise ratio λ is to be compared with the number of observations in a supervised learning problem. The stochasticity of the gradients σ is to be compared with the number of sample points in each minibatch of data: large stochasticity mimics small minibatch situations. In Figure 1 we plot the values of the objective function or Rayleigh quotient and the cosine of the ground truth with the solution as a function of the iterates. We replicated these plots at values of the SNR parameter λ < λ c = 1, at the critical value λ = 1 and above it for λ = 2 where the problems is considered to be easy. The learning rate and stochasticity σ were set by generating instances of the problem with different noise matrices. These plots allow to compare the optimization power of different algorithms and also keep track of the quality of the solution found. We can see in these plots that Adam gets stuck around the wrong region very fast. Note that with the value (or larger) of γ = 2 for λ < 1 and γ = λ + λ −1 for λ ≥ 1 that we can set given the true value of λ, and knowing the concentration around the asymptotics BID12 BID3 BID5, the objective function is strongly convex so we expect first order methods to show the same convergence rates as the power method. One can also observe that gradient descent corresponds to performing power iteration on a shifted matrix A + αI d. Figure 2 shows the value of the objective and the correlation with the ground truth as a function of SNR λ. We can see that SGD is superior to Adam, uniformly along λ, while power method (also a first order method) rivals with SGD. These plots exhibit the instability around and below the critical value λ c = 1 while above λ c the behavior is more stable. In the tensor setting we experimented with spectral initialization. 
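Remark 2.1 rests on a simple observation: averaging n i.i.d. observations shrinks the noise standard deviation by 1/√n, so the averaged tensor behaves like a single spiked tensor with SNR λ_n = √n·λ. The sketch below checks this numerically; for brevity we skip the symmetrization of the noise tensor, and the dimensions are kept small.

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam, n = 20, 0.5, 64                 # weak per-sample SNR, many i.i.d. observations

u0 = rng.standard_normal(d); u0 /= np.linalg.norm(u0)
signal = lam * np.einsum('i,j,k->ijk', u0, u0, u0)

noises = rng.standard_normal((n, d, d, d)) / np.sqrt(d)   # entries ~ N(0, 1/d), one tensor per sample
A_bar = signal + noises.mean(axis=0)                      # averaged observation

# The averaged noise has std (1/sqrt(d)) / sqrt(n); after rescaling, A_bar looks like a
# single spiked tensor with effective SNR lambda_n = sqrt(n) * lambda.
print("per-sample noise std:", noises[0].std())             # ~ 1/sqrt(d)
print("averaged noise std  :", noises.mean(axis=0).std())   # ~ 1/sqrt(n*d)
print("effective SNR       :", np.sqrt(n) * lam)            # lambda_n
```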
Spectral initialization consists of flattening the tensor to a d×d 2 matrix and initializing tensor algorithms with the left singular vector of the flattened tensor. We observe benefits of spectral initialization at locating the initial point in a basin of attraction that leads to better solutions when λ is large enough. For λ = 1.0 spectral initialization does not in better estimates in Figure 3, while for larger values of λ we can see the benefit of warm start. In FIG2 we also plot values of the gradient and the number of positive eigenvalues of the Hessian along the optimization iterates for d = 100, λ = 2, γ = 2. We observe that spectral initialization located the initial point of the iterations in the basin of attraction where the problem is convex (all eigenvalues of the Hessian are positive). Adam, while starting in this region, fails at finding a solution as good as SGD's. We experimented with the amount of noise added to each gradient evaluation and mapped median values of the estimate and optimization problems for values of λ and σ over 100 instances of the problem generated with different noise matrices and stochastic gradients. Stochasticity does not seem to remedy to the problem difficulty. Numerical experiments suggest that irrespective of the magnitude of the stochastic component added to the gradient, the first order methods, initialized at random, fail at finding the best basin of attraction. In the same setup, spectral initialized first order methods successfully find the solutions. We propose to study algorithms used for minimizing deep learning loss functions, at optimizing a non-convex objective on simple synthetic datasets. Studying simplified problems has the advantage that the problem's properties, and the behavior of the optimizer and the solution, can be studied rigorously. The use of such datasets can help to perform sanity checks on improvement ideas to the algorithms, or to mathematically prove or disprove intuitions. The properties of the toy data sets align with some properties of deep learning loss functions. From the optimization standpoint, the ing tensor problems may appear to be even harder than deep learning problems. We observe that finding good solutions is hard unless if proper initialization is performed, while the value of stochasticity in gradient estimates seems too narrow and does not appear to compensate for poor initialization heuristics. Each column represents the values of those quantities along iterations of the algorithm. The prefix sp. refers to spectral initialization and l. refers to a decreasing learning weight scheduled in 1/ √ t. We observe the value of warm starting as soon as λ is large enough. Even at high SNR λ = 6, randomly initialized SGD fails while spectrally initialized SGD succeeds. Adam drifts to a non optimal critical point in that regime, even with spectral warm start.
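The spectral initialization and the subsequent refinement described above amount to: unfold the d×d×d tensor into a d×d² matrix, take its leading left singular vector, and use it to warm start tensor power iteration. The sketch below shows this pipeline; the values of d and λ are illustrative, and the power method here is the plain, unaccelerated variant.

```python
import numpy as np

def spectral_init(A):
    """Spectral initialization for tensor PCA: unfold the d x d x d tensor into a
    d x d^2 matrix and return its leading left singular vector."""
    d = A.shape[0]
    U, _, _ = np.linalg.svd(A.reshape(d, d * d), full_matrices=False)
    return U[:, 0]

def tensor_power_iteration(A, u, iters=50):
    """Plain tensor power method: u <- A(., u, u) / ||A(., u, u)||."""
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', A, u, u)
        u = v / np.linalg.norm(v)
    return u

# Spiked tensor with a large enough SNR: the warm start lands in the right basin and
# power iteration refines the estimate.
rng = np.random.default_rng(2)
d, lam = 30, 5.0
u0 = rng.standard_normal(d); u0 /= np.linalg.norm(u0)
A = lam * np.einsum('i,j,k->ijk', u0, u0, u0) + rng.standard_normal((d, d, d)) / np.sqrt(d)
u = tensor_power_iteration(A, spectral_init(A), iters=50)
print("cosine with signal:", abs(u @ u0))
```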
SGD and Adam under the single spiked model for tensor PCA
1,721
scitldr
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically. How to analyze the specific rationale of each prediction made by the CNN presents one of key issues of understanding neural networks, but it is also of significant practical values in certain applications. In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction. We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model. Experimental have demonstrated the effectiveness of our method. Convolutional neural networks (CNNs) BID17 BID15 BID10 have achieved superior performance in various tasks, such as object classification and detection. Besides the discrimination power of neural networks, the interpretability of neural networks has received an increasing attention in recent years. In this paper, we focus on a new problem, i.e. explaining the specific rationale of each network prediction semantically and quantitatively. " Semantic explanations" and "quantitative explanations" are two core issues of understanding neural networks. We hope to explain the logic of each network prediction using clear visual concepts, instead of using middle-layer features without clear meanings or simply extracting pixel-level correlations between network inputs and outputs. We believe that semantic explanations may satisfy specific demands in real applications. In contrast to traditional qualitative explanations for neural networks, quantitative explanations enable people to diagnose feature representations inside neural networks and help neural networks earn trust from people. We expect the neural network to provide the quantitative rationale of the prediction, i.e. clarifying which visual concepts activate the neural network and how much they contribute to the prediction score. Above two requirements present significant challenges to state-of-the-art algorithms. To the best of our knowledge, no previous studies simultaneously explained network predictions using clear visual concepts and quantitatively decomposed the prediction score into value components of these visual concepts. Task: Therefore, in this study, we propose to learn another neural network, namely an explainer network, to explain CNN predictions. Accordingly, we can call the target CNN a performer network. Besides the performer, we also require a set of models that are pre-trained to detect different visual concepts. These visual concepts will be used to explain the logic of the performer's prediction. We are also given input images of the performer, but we do not need any additional annotations on the images. Then, the explainer is learned to mimic the logic inside the performer, i.e. the explainer receives the same features as the performer and is expected to generate similar prediction scores. As shown in Fig. 1, the explainer uses pre-trained visual concepts to explain each prediction. The explainer is designed as an additive model, which decomposes the prediction score into the sum of Figure 1: Explainer. We distill knowledge of a performer into an explainer as a paraphrase of the performer's representations. 
The explainer decomposes the prediction score into value components of semantic concepts, thereby obtaining quantitative semantic explanations for the performer.multiple value components. Each value component is computed based on a specific visual concept. In this way, we can roughly consider these value components as quantitative contributions of the visual concepts to the final prediction score. More specifically, we learn the explainer via knowledge distillation. Note that we do not use any ground-truth annotations on input images to supervise the explainer. It is because the task of the explainer is not to achieve a high prediction accuracy, but to mimic the performer's logic in prediction, no matter whether the performer's prediction is correct or not. Thus, the explainer can be regarded as a semantic paraphrase of feature representations inside the performer, and we can use the explainer to understand the logic of the performer's prediction. Theoretically, the explainer usually cannot recover the exact prediction score of the performer, owing to the limit of the representation capacity of visual concepts. The difference of the prediction score between the performer and the explainer corresponds to the information that cannot be explained by the visual concepts. Challenges: Distilling knowledge from a pre-trained neural network into an additive model usually suffers from the problem of bias-interpreting. When we use a large number of visual concepts to explain the logic inside the performer, the explainer may biasedly select very few visual concepts, instead of all visual concepts, as the rationale of the prediction (Fig. 4 in the appendix visualizes the bias-interpreting problem). Just like the typical over-fitting problem, theoretically, the bias interpreting is an ill-defined problem. To overcome this problem, we propose two types of losses for prior weights of visual concepts to guide the learning process. The prior weights push the explainer to compute a similar Jacobian of the prediction score w.r.t. visual concepts as the performer in early epochs, in order to avoid bias-interpreting. Originality: Our "semantic-level" explanation for CNN predictions has essential differences from traditional studies of "pixel-level" interpreting neural networks, such as the visualization of features in neural networks BID36 BID21 BID27 BID6 BID7 BID24, the extraction of pixellevel correlations between network inputs and outputs BID14 BID22 BID20, and the learning of neural networks with interpretable middle-layer features BID38 BID23.In particular, the explainer explains the performer without affecting the original discrimination power of the performer. As discussed in BID1, the interpretability of features is not equivalent to, and usually even conflicts with the discrimination power of features. Compared to forcing the performer to learn interpretable features, our strategy of explaining the performer solves the dilemma between the interpretability and the discriminability. In addition, our quantitative explanation has special values beyond the qualitative analysis of CNN predictions BID39.Potential values of the explainer: Quantitatively and semantically explaining a performer is of considerable practical values when the performer needs to earn trust from people in critical applications. As mentioned in BID37, owing to the potential bias in datasets and feature representations, a high testing accuracy still cannot fully ensure correct feature representations in neural networks. 
Thus, semantically and quantitatively clarifying the logic of each network prediction is a direct way to diagnose feature representations of neural networks. Fig. 3 shows example explanations for the performer's predictions. Predictions whose explanations conflict people's common sense may reflect problematic feature representations inside the performer. Contributions of this study are summarized as follows. (i) In this study, we focus on a new task, i.e. semantically and quantitatively explaining CNN predictions. (ii) We propose a new method to explain neural networks, i.e. distilling knowledge from a pre-trained performer into an interpretable additive explainer. Our strategy of using the explainer to explain the performer avoids hurting the discrimination power of the performer. (iii) We develop novel losses to overcome the typical biasinterpreting problem. Preliminary experimental have demonstrated the effectiveness of the proposed method. (iv) Theoretically, the proposed method is a generic solution to the problem of interpreting neural networks. We have applied our method to different benchmark CNNs for different applications, which has proved the broad applicability of our method. In this paper, we limit our discussion within the scope of understanding feature representations of neural networks. Network visualization: The visualization of feature representations inside a neural network is the most direct way of opening the black-box of the neural network. Related techniques include gradient-based visualization BID36 BID21 BID27 BID35 and up-convolutional nets BID6 BID20 BID13 BID7 BID24 extracted rough pixel-level correlations between network inputs and outputs, i.e. estimating image regions that directly contribute the network output. Network-attack methods BID14 BID29 computed adversarial samples to diagnose a CNN. BID16 ) discovered knowledge blind spots of a CNN in a weakly-supervised manner. BID37 ) examined representations of conv-layers and automatically discover biased representations of a CNN due to the dataset bias. However, above methods usually analyzed a neural network at the pixel level and did not summarize the network knowledge into clear visual concepts. BID1 defined six types of semantics for CNN filters, i.e. objects, parts, scenes, textures, materials, and colors. Then, BID41 proposed a method to compute the image-resolution receptive field of neural activations in a feature map. Other studies retrieved middle-layer features from CNNs representing clear concepts. BID25 retrieved features to describe objects from feature maps, respectively. BID41 selected neural units to describe scenes. Note that strictly speaking, each CNN filter usually represents a mixture of multiple semantic concepts. Unlike previous studies, we are more interested in analyzing the quantitative contribution of each semantic concept to each prediction, which was not discussed in previous studies. A new trend in the scope of network interpretability is to learn interpretable feature representations in neural networks BID12 BID28 BID18 in an un-/weakly-supervised manner. Capsule nets BID23 and interpretable RCNN BID33 learned interpretable features in intermediate layers. InfoGAN BID4 and β-VAE BID11 ) learned well-disentangled codes for generative networks. Interpretable CNNs BID38 learned filters in intermediate layers to represent object parts without given part annotations. However, as mentioned in BID1 BID39, interpretable features usually do not have a high discrimination power. 
Therefore, we use the explainer to interpret the pre-trained performer without hurting the discriminability of the performer. Explaining neural networks via knowledge distillation: Distilling knowledge from a black-box model into an explainable model is an emerging direction in recent years. BID40 used a tree structure to summarize the inaccurate 1 rationale of each CNN prediction into generic decision-making models for a number of samples. In contrast, we pursue the explicitly quantitative explanation for each CNN prediction. BID5 learned an explainable additive model, and BID31 ) distilled knowledge of a network into an additive model. BID30 BID2 BID32 ) distilled representations of neural networks into tree structures. These methods did not explain the network knowledge using humaninterpretable semantic concepts. More crucially, compared to previous additive models BID31, our research successfully overcomes the bias-interpreting problem, which is the core challenge when there are lots of visual concepts for explanation. In this section, we distill knowledge from a pre-trained performer f to an explainable additive model. We are given a performer f and n neural networks {f i |i = 1, 2, . . ., n} that are pre-trained to detect n different visual concepts. We learn the n neural networks along with the performer, and the n neural networks are expected to share low-layer features with the performer. Our method also requires a set of training samples for the performer f. The goal of the explainer is to use inference values of the n visual concepts to explain prediction scores of the performer. Note that we do not need any annotations on training samples w.r.t. the task, because additional supervision will push the explainer towards a good performance of the task, instead of objectively reflecting the knowledge in the performer. Given an input image I, letŷ = f (I) denote the output of the performer. Without loss of generality, we assume thatŷ is a scalar. If the performer has multiple outputs (e.g. a neural network for multicategory classification), we can learn an explainer to interpret each scalar output of the performer. In particular, when the performer takes a softmax layer as the last layer, we use the feature score before the softmax layer asŷ, so thatŷ's neighboring scores will not affect the value ofŷ. We design the following additive explainer model, which uses a mixture of visual concepts to approximate the function of the performer. The explainer decomposes the prediction scoreŷ into value components of pre-defined visual concepts. DISPLAYFORM0 Quantitative contribution from the first visual concept DISPLAYFORM1 where y i and α i (I) denote the scalar value and the weight for the i-th visual concept, respectively. b is a bias term. y i is given as the strength or confidence of the detection of the i-th visual concept. We can regard the value of α i (I) · y i as the quantitative contribution of the i-th visual concept to the final prediction. In most cases, the explainer cannot recover all information of the performer. The prediction difference between the explainer and the performer reflects the limit of the representation capacity of visual concepts. According to the above equation, the core task of the explainer is to estimate a set of weights α = [α 1, α 2, . . ., α n], which minimizes the difference of the prediction score between the performer and the explainer. 
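A minimal sketch of the additive explainer and its distillation loss is given below, written in PyTorch with class and variable names of our own choosing (`weight_net` stands for the network g that predicts the weights from the image). It only illustrates Equations 1-2; architectural details of g are discussed in the experiments.

```python
import torch
import torch.nn as nn

class AdditiveExplainer(nn.Module):
    """Explainer g: predicts per-image concept weights alpha(I) and approximates the
    performer's score as y_hat ~= sum_i alpha_i(I) * y_i + b."""
    def __init__(self, weight_net):
        super().__init__()
        self.weight_net = weight_net               # e.g. a ResNet trunk with n outputs, one per concept
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, image, concept_scores):      # concept_scores y: (batch, n_concepts)
        alpha = self.weight_net(image)              # (batch, n_concepts)
        score = (alpha * concept_scores).sum(dim=1) + self.bias
        return alpha, score

def distillation_loss(performer_score, explainer_score):
    # No task labels are used: the explainer only mimics the performer's output.
    return ((performer_score - explainer_score) ** 2).mean()
```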
Different input images may obtain different weights α, which correspond to different decision-making modes of the performer. For example, a performer may mainly use head patterns to classify a standing bird, while it may increase the weight for the wing concept to classify a flying bird. Therefore, we design another neural network g with parameters θ g (i.e. the explainer), which uses the input image I to estimate the n weights. We learn the explainer with the following knowledge-distillation loss. DISPLAYFORM2 However, without any prior knowledge about the distribution of the weight α i, the learning of g usually suffers from the problem of bias-interpreting. The neural network g may biasedly select very few visual concepts to approximate the performer as a shortcut solution, instead of sophisticatedly learning relationships between the performer output and all visual concepts. Thus, to overcome the bias-interpreting problem, we use a loss L for priors of α to guide the learning process in early epochs. DISPLAYFORM3 Loss, where w denotes prior weights, which represent a rough relationship between the performer's prediction value and n visual concepts. Just like α, different input images also have different prior weights w. The loss L(α, w) penalizes the dissimilarity between α and w. DISPLAYFORM4 Note that the prior weights w are approximated with strong assumptions (we will introduce two different ways of computing w later). We use inaccurate w to avoid significant bias-interpreting, rather than pursue a high accuracy. Thus, we set a decreasing weight for L, i.e. λ(t) = DISPLAYFORM5, where β is a scalar constant, and t denotes the epoch number. In this way, we mainly apply the prior loss L in early epochs. Then, in late epochs, the influence of L gradually decreases, and our method gradually shifts its attention to the distillation loss for a high distillation accuracy. We design two types of losses for prior weights, as follows. DISPLAYFORM6 Some applications require a positive relationship between the prediction of the performer and each visual concept, i.e. each weight α i must be a positive scalar. In this case, we use the cross-entropy between α and w as the prior loss. In other cases, the MSE loss between α and w is used as the loss. · 1 and · 2 denote the L-1 norm and L-2 norm, respectively. In particular, in order to ensure α i ≥ 0 in certain applications, we add a non-linear activation layer as the last layer of g, i.e. α = log[1 + exp(x)], where x is the output of the last conv-layer. In this subsection, we will introduce two techniques to efficiently compute rough prior weights w, which are oriented to the following two cases in application. Case 1, filters in intermediate conv-layers of the performer are interpretable: As shown in FIG0, learning a neural network with interpretable filters is an emerging research direction in recent years. For example, BID38 proposed a method to learn CNNs for object classification, where each filter in a high conv-layer is exclusively triggered by the appearance of a specific object part (see FIG4 in the appendix for the visualization of filters). Thus, we can interpret the classification score of an object as a linear combination of elementary scores for the detection of object parts. Because such interpretable filters are automatically learned without part annotations, the quantitative explanation for the CNN (i.e. 
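The exact forms of λ(t) and of the cross-entropy prior loss are not fully legible in the extracted equations, so the sketch below is only one plausible reading: a decaying weight λ(t) = β/t, a cross-entropy between normalized weight vectors when positivity is required, and an MSE otherwise; the softplus activation α = log(1 + exp(x)) enforces non-negative weights as described above. All of these specific choices should be treated as assumptions.

```python
import torch
import torch.nn.functional as F

def prior_loss(alpha, w, positive=True):
    """Penalize dissimilarity between predicted weights alpha and rough prior weights w.
    With a positivity constraint: cross-entropy between the two normalized weight vectors;
    otherwise: plain MSE. Both are assumed forms of the loss in Equation 4."""
    if positive:
        w_pos = w.clamp(min=0)
        p = w_pos / (w_pos.sum(dim=1, keepdim=True) + 1e-8)
        q = alpha / (alpha.sum(dim=1, keepdim=True) + 1e-8)
        return -(p * torch.log(q + 1e-8)).sum(dim=1).mean()
    return F.mse_loss(alpha, w)

def total_loss(performer_score, explainer_score, alpha, w, epoch, beta=10.0, positive=True):
    lam_t = beta / epoch            # decreasing weight (epoch counted from 1): the prior only guides early epochs
    distill = ((performer_score - explainer_score) ** 2).mean()
    return distill + lam_t * prior_loss(alpha, w, positive)

# When alpha_i must stay positive, the last layer of g applies a softplus,
# alpha = log(1 + exp(x)), to the output x of the last conv-layer.
```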
the performer) can be divided into the following two tasks: (i) annotating the name of the object part that is represented by each filter, and (ii) learning an explainer to disentangle the exact additive contribution of each filter (or each object part) to the performer output. In this way, each f i, i = 1, 2,..., n, is given as an interpretable filter of the performer. According to BID37, we can roughly represent the network prediction aŝ DISPLAYFORM0 where x ∈ R H×W ×n denotes a feature map of the interpretable conv-layer, and x hwi is referred to as the activation unit in the location (h, w) of the i-th channel. y i measures the confidence of detecting the object part corresponding to the i-th filter. Here, we can roughly use the Jacobian of the network output w.r.t. the filter to approximate the weight w i of the filter. Z is for normalization. Considering that the normalization operation in Equation eliminates Z, we can directly use h,w ∂ŷ ∂x hwi as prior weights w in Equation without a need to compute the exact value of Z.Case 2, neural networks for visual concepts share features in intermediate layers with the performer: As shown in FIG0, given a neural network for the detection of multiple visual concepts, using certain visual concepts to explain a new visual concept is a generic way to interpret network predictions with broad applicability. Let us take the detection of a certain visual concept as the targetŷ and use other visual concepts as {y i} to explainŷ. All visual concepts share features in intermediate layers. Then, we estimate a rough numerical relationship betweenŷ and the score of each visual concept y i. Let x be a middle-layer feature shared by both the target and the i-th visual concept. When we modify the feature x, we can represent the value change of y i using a Taylor series, ∆y i = We designed two experiments to use our explainers to interpret different benchmark CNNs oriented to two different applications, in order to demonstrate the broad applicability of our method. In the first experiment, we used the detection of object parts to explain the detection of the entire object. In the second experiment, we used various face attributes to explain the prediction of another face attribute. We evaluated explanations obtained by our method qualitatively and quantitatively. In this experiment, we used the method proposed in BID38 to learn a CNN, where each filter in the top conv-layer represents a specific object part. We followed exact experimental settings in BID38, which used the Pascal-Part dataset BID3 to learn six CNNs for the six animal 2 categories in the dataset. Each CNN was learned to classify the target animal from random images. We considered each CNN as a performer and regarded its interpretable filters in the top conv-layer as visual concepts to interpret the classification score. Following experimental settings in BID38, we applied our method to four types of CNNs, including the AlexNet BID15, the VGG-M, VGG-S, and VGG-16 networks BID26, i.e. we learned CNNs for six categories based on each network structure. Note that as discussed in BID38, skip connections in residual networks BID10 increased the difficulty of learning part features, so they did not learn interpretable filters in residual networks. The AlexNet/VGG-M/VGG-S/VGG-16 performer had 256/512/512/512 filters in its top conv-layer, so we set n = 256, 512, 512, 512 for these networks. We used the masked output of the top conv-layer as x and plugged x to Equation to compute {y i} 1. 
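For Case 1, the rough prior weight of each interpretable filter is the Jacobian of the performer output with respect to that filter's feature map, summed over spatial locations (the normalization constant Z cancels and can be dropped). A sketch of this computation with autograd is shown below; the (batch, channels, H, W) layout and the helper name are ours, and `performer_head` denotes the layers of the performer above the interpretable conv-layer.

```python
import torch

def case1_prior_weights(performer_head, feature_map):
    """Case 1 prior weights: w_i proportional to sum over (h, w) of d y_hat / d x_hwi,
    where x is the masked output of the interpretable conv-layer."""
    x = feature_map.clone().requires_grad_(True)      # (batch, n_filters, H, W)
    y_hat = performer_head(x).sum()                   # sum over the batch; gradients stay per-sample
    grads, = torch.autograd.grad(y_hat, x)
    return grads.sum(dim=(2, 3))                      # (batch, n_filters), unnormalized
```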
We used the 152-layer ResNet BID10 3 as g to estimate weights of visual concepts 4. We set β = 10 for the learning of all explainers. Note that all interpretable filters in the performer represented object parts of the target category on positive images, instead of describing random (negative) images. Table 2: Classification accuracy and relative deviations of the explainer and the performer. We used relative deviations and the decrease of the classification accuracy to measure the information that could not be explained by pre-defined visual concepts. Please see the appendix for more . Intuitively, we needed to ensure a positive relationship betweenŷ and y i. Thus, we filtered out negative prior weights w i ← max{w i, 0} and applied the cross-entropy loss in Equation FORMULA6 to learn the explainer. Evaluation metric: The evaluation has two aspects. Firstly, we evaluated the correctness of the estimated explanation for the performer prediction. In fact, there is no ground truth about exact reasons for each prediction. We showed example explanations of for a qualitative evaluation of explanations. We also used grad-CAM visualization BID24 of feature maps to prove the correctness of our explanations (see the appendix). In addition, we normalized the absolute contribution from each visual concept as a distribution of contributions c i = |α i y i |/ j |α j y j |. We used the entropy of contribution distribution H(c) as an indirect evaluation metric for biasinterpreting. A biased explainer usually used very few visual concepts, instead of using most visual concepts, to approximate the performer, which led to a low entropy H(c).Secondly, we also measured the performer information that could not be represented by the visual concepts, which was unavoidable. We proposed two metrics for evaluation. The first metric is the prediction accuracy. We compared the prediction accuracy of the performer with the prediction accuracy of using the explainer's output i α i y i + b. Another metric is the relative deviation, which measures a normalized output difference between the performer and the explainer. The relative deviation of the image I is normalized as |ŷ I − i α I,i y I,i − b|/(max I ∈IŷI − min I ∈IŷI), whereŷ I denotes the performer's output for the image I.Considering the limited representation power of visual concepts, the relative deviation on an image reflected inference patterns, which were not modeled by the explainer. The average relative deviation over all images was reported to evaluate the overall representation power of visual concepts. Note that our objective was not to pursue an extremely low relative deviation, because the limit of the representation power is an objective existence. In this experiment, we learned a CNN based on the VGG-16 structure to estimate face attributes. We used the Large-scale CelebFaces Attributes (CelebA) dataset BID19 to train a CNN to estimate 40 face attributes. We selected a certain attribute as the target and used its prediction score asŷ. Other 39 attributes were taken as visual concepts to explain the score ofŷ (n = 39). The target attribute was selected from those representing global features of the face, i.e. attractive, heavy makeup, male, and young. It is because global features can usually be described by local visual concepts, but the inverse is not. We learned an explainer for each target attribute. We used the same 152-layer ResNet structure as in Experiment 1 (expect for n = 39) as g to estimate weights. 
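The two quantities used for evaluation, the entropy of the contribution distribution (our indirect measure of bias-interpreting) and the relative deviation between performer and explainer outputs, follow directly from the formulas above. The sketch below implements them in numpy; the small epsilon added inside the logarithm is ours, for numerical safety.

```python
import numpy as np

def contribution_entropy(alpha, y):
    """Entropy of the normalized contribution distribution c_i = |alpha_i y_i| / sum_j |alpha_j y_j|.
    A low entropy means very few concepts dominate, i.e. likely bias-interpreting."""
    c = np.abs(alpha * y)
    c = c / c.sum()
    return -(c * np.log(c + 1e-12)).sum()

def mean_relative_deviation(performer_scores, explainer_scores):
    """Per-image |y_hat_I - (sum_i alpha_i y_i + b)| normalized by the range of performer
    scores over all images, averaged over the dataset."""
    span = performer_scores.max() - performer_scores.min()
    return np.mean(np.abs(performer_scores - explainer_scores)) / span
```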
We followed the Case-2 implementation in Section 3.1 to compute prior weights w, in which we used the 4096-dimensional output of the first fully-connected layer as the shared feature x. We set β = 0.2 and used the L-2 norm loss in Equation FORMULA6 to learn all explainers. We used the same evaluation metric as in Experiment 1. The quantitative explanation for the prediction of the attractive attribute. Figure 3: Quantitative explanations for the object classification (top) and the face-attribution prediction (bottom) made by performers. For performers oriented to object classification, we annotated the part that was represented by each interpretable filter in the performer, and we assigned contributions of filters α i y i to object parts (see the appendix). Thus, this figure illustrates contributions of different object parts. All object parts made positive contributions to the classification score. Note that in the bottom, bars indicate elementary contributions α i y i from features of different face attributes, rather than prediction values y i of these attributes. For example, the network predicts a negative goatee attribute y goatee < 0, and this information makes a positive contribution to the target attractive attribute, α i y i > 0. Please see the appendix for more . We compared our method with the traditional baseline of only using the distillation loss to learn the explainer. TAB2 evaluates bias-interpreting of explainers that were learned using our method and the baseline. In addition, Table 2 uses the classification accuracy and relative deviations of the explainer to measure the representation capacity of visual concepts. Our method suffered much less from the bias-interpreting problem than the baseline. Fig. 3 shows examples of quantitative explanations for the prediction made by the performer. We also used the grad-CAM visualization BID24 of feature maps of the performer to demonstrate the correctness of our explanations in Fig. 9 in the appendix. In particular, Fig. 4 in the appendix illustrates the distribution of contributions of visual concepts {c i} when we learned the explainer using different methods. Compared to our method, the distillation baseline usually used very few visual concepts for explanation and ignored most strongly activated interpretable filters, which could be considered as bias-interpreting. In this paper, we focus on a new task, i.e. explaining the logic of each CNN prediction semantically and quantitatively, which presents considerable challenges in the scope of understanding neural networks. We propose to distill knowledge from a pre-trained performer into an interpretable additive explainer. We can consider that the performer and the explainer encode similar knowledge. The additive explainer decomposes the prediction score of the performer into value components from semantic visual concepts, in order to compute quantitative contributions of different concepts. The strategy of using an explainer for explanation avoids decreasing the discrimination power of the performer. In preliminary experiments, we have applied our method to different benchmark CNN performers to prove the broad applicability. Note that our objective is not to use pre-trained visual concepts to achieve super accuracy in classification/prediction. Instead, the explainer uses these visual concepts to mimic the logic of the performer and produces similar prediction scores as the performer. 
In particular, over-interpreting is the biggest challenge of using an additive explainer to interpret another neural network. In this study, we design two losses to overcome the bias-interpreting problems. Besides, in experiments, we also measure the amount of the performer knowledge that could not be represented by visual concepts in the explainer. Table 4: Classification accuracy of the explainer and the performer. We use the the classification accuracy to measure the information loss when using an explainer to interpret the performer. Note that the additional loss for bias-interpreting successfully overcame the bias-interpreting problem, but did not decrease the classification accuracy of the explainer. Another interesting finding of this research is that sometimes, the explainer even outperformed the performer in classification. A similar phenomenon has been reported in BID9. A possible explanation for this phenomenon is given as follows. When the student network in knowledge distillation had sufficient representation power, the student network might learn better representations than the teacher network, because the distillation process removed abnormal middle-layer features corresponding to irregular samples and maintained common features, so as to boost the robustness of the student network. Table 5: Relative deviations of the explainer. The additional loss for bias-interpreting successfully overcame the bias-interpreting problem and just increased a bit (ignorable) relative deviation of the explainer. BID40 ) used a tree structure to summarize the inaccurate rationale of each CNN prediction into generic decision-making models for a number of samples. This method assumed the significance of a feature to be proportional to the Jacobian w.r.t. the feature, which is quite problematic. This assumption is acceptable for BID40, because the objective of BID40 ) is to learn a generic explanation for a group of samples, and the inaccuracy in the explanation for each specific sample does not significantly affect the accuracy of the generic explanation. In comparisons, our method focuses on the quantitative explanation for each specific sample, so we design an additive model to obtain more convincing explanations. Baseline Our method Figure 4: We compared the contribution distribution of different visual concepts (filters) that was estimated by our method and the distribution that was estimated by the baseline. The baseline usually used very few visual concepts to make predictions, which was a typical case of bias-interpreting. In comparisons, our method provided a much more reasonable contribution distribution of visual concepts. Legs & feet Tail Figure 9: Quantitative explanations for object classification. We assigned contributions of filters to their corresponding object parts, so that we obtained contributions of different object parts. According to top figures, we found that different images had similar explanations, i.e. the CNN used similar object parts to classify objects. Therefore, we showed the grad-CAM visualization of feature maps BID24 on the bottom, which proved this finding. We visualized interpretable filters in the top conv-layer of a CNN, which were learned based on BID38. We projected activation regions on the feature map of the filter onto the image plane for visualization. Each filter represented a specific object part through different images. BID38 ) learned a CNN, where each filter in the top conv-layer represented a specific object part. 
Thus, we annotated the name of the object part that corresponded to each filter based on visualization (see FIG4 for examples). We simply annotate each filter of the top conv-layer in a performer once, so the total annotation cost was O(N), where N is the filter number. Then, we assigned the contribution of a filter to its corresponding part, i.e. Contri part = i:i-th filter represents the part α i y i. We changed the order of the ReLU layer and the mask layer after the top conv-layer, i.e. placing the mask layer between the ReLU layer and the top conv-layer. According to BID38, this operation did not affect the performance of the pre-trained performer. We used the output of the mask layer as x and plugged x to Equation to compute {y i}.Because the distillation process did not use any ground-truth class labels, the explainer's output i α i y i + b was not sophisticatedly learned for classification. Thus, we used a threshold i α i y i + b > τ (τ ≈ 0), instead of 0, as the decision boundary for classification. τ was selected as the one that maximized the accuracy. Such experimental settings made a fairer comparison between the performer and the explainer.
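The aggregation of filter contributions into part contributions, and the thresholded decision rule used for the explainer, can be written as the short sketch below; `filter_to_part` is an assumed mapping produced by the one-off annotation of filters described above.

```python
import numpy as np

def part_contributions(alpha, y, filter_to_part):
    """Contri_part = sum over {i : filter i represents the part} of alpha_i * y_i."""
    contrib = {}
    for i, part in enumerate(filter_to_part):
        contrib[part] = contrib.get(part, 0.0) + alpha[i] * y[i]
    return contrib

def classify(alpha, y, b, tau):
    """Decision rule for the explainer: predict the target category when
    sum_i alpha_i y_i + b > tau, with tau selected to maximize accuracy."""
    return (alpha * y).sum() + b > tau
```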
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.
1,722
scitldr
We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively. Sequence generation and prediction tasks span many modes of data, ranging from audio and language modelling, to more general timeseries prediction tasks. Applications of such models include speech recognition, machine translation, dialogue generation, speech synthesis, forecasting, and music generation, among others. Neural networks can be applied to these tasks by predicting sequence elements one-by-one, conditioning on the history of sequence elements, forming an autoregressive model. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs), including long-short term memory (LSTM) networks BID7 in particular, have achieved many successes at these tasks. However, in their basic form, these models have a limited ability to adapt to recently observed parts of a sequence. Many sequences contain repetition; a pattern that occurs once is more likely to occur again. For instance, a word that occurs once in a document is much more likely to occur again. A sequence of handwriting will generally stay in the same handwriting style. A sequence of speech will generally stay in the same voice. Although RNNs have a hidden state that can summarize the recent past, they are often unable to exploit new patterns that occur repeatedly in a test sequence. This paper concerns dynamic evaluation, which we investigate as a candidate solution to this problem. Our approach adapts models to recent sequences using gradient descent based mechanisms. We show several ways to improve on past dynamic evaluation approaches in Section 5, and use our improved methodology to achieve state-of-the-art in Section 7. In Section 6 we design a method to dramatically to reduce the number of adaptation parameters in dynamic evaluation, making it practical in a wider range of situations. In Section 7.4 we analyse dynamic evaluation's performance over varying time-scales and distribution shifts, and demonstrate that dynamically evaluated models can generate conditional samples that repeat many patterns from the conditioning data. Generative models can assign probabilities to sequences by modelling each term in the factorization given by the product rule. The probability of a sequence x 1:T = {x 1, . . ., x T} factorizes as P (x 1:T) = P (x 1)P (x 2 |x 1)P (x 3 |x 2, x 1) · · · P (x T |x 1 . . . x T −1).Methods that apply this factorization either use a fixed context when predicting P (x t |x 1:t−1), for instance as in N-grams or CNNs, or use a recurrent hidden state to summarize the context, as in an RNN. However, for longer sequences, the history x 1:t−1 often contains re-occurring patterns that are difficult to capture using models with fixed parameters (static models).In many domains, in a dataset of sequences {x contains useful information about the generating distribution for that specific sequence P (x i 1:T). Therefore adapting the model parameters learned during training θ g is justified. 
We aim to infer a set of model parameters θ l from x i 1:t−1 that will better approximate P (x i t |x i 1:t−1) within sequence i. Many sequence modelling tasks are characterised by sequences generated from slightly different distributions as in the scenario described above. The generating distribution may also change continuously across a single sequence; for instance, a text excerpt may change topic. Furthermore, many machine learning benchmarks do not distinguish between sequence boundaries, and concatenate all sequences into one continuous sequence. Thus, many sequence modelling tasks could be seen as having a local distribution P l (x) as well as a global distribution P g (x):= P (l)P l (x) dl. During training time, the goal is to find the best fixed model possible for P g (x). However, during evaluation time, a model that can infer the current P l (x) from the recent history has an advantage. Dynamic evaluation methods continuously adapt the model parameters θ g, learned at training time, to parts of a sequence during evaluation. The goal is to learn adapted parameters θ l that provide a better model of the local sequence distribution, P l (x). When dynamic evaluation is applied in the present work, a long test sequence x 1:T is divided up into shorter sequences of length n. We define s 1:M to be a sequence of shorter sequence segments s i DISPLAYFORM0 The initial adapted parameters θ 0 l are set to θ g, and used to compute the probability of the first segment, P (s 1 |θ 0 l). This probability gives a cross entropy loss L(s 1), with gradient ∇L(s 1), which is computed using truncated back-propagation through time BID31. The gradient ∇L(s 1) is used to update the model, ing in adapted parameters θ 1 l, before evaluating P (s 2 |θ 1 l). The same procedure is then repeated for s 2, and for each s i in the sequence as shown in FIG1. Gradients for each loss L(s i) are only backpropagated to the beginning of s i, so computation is linear in the sequence length. Each update applies one maximum likelihood training step to approximate the current local distribution P l (x). The computational cost of dynamic evaluation is one forward pass and one gradient computation through the data, with some slight overhead to apply the update rule for every sequence segment. As in all autoregressive models, dynamic evaluation only conditions on sequence elements that it has already predicted, and so evaluates a valid log-probability for each sequence. Dynamic evaluation can also be used while generating sequences. In this case, the model generates each sequence segment s i using fixed weights, and performs a gradient descent based update step on L(s i). Applying dynamic evaluation for sequence generation could in generated sequences with more consistent regularities, meaning that patterns that occur in the generated sequence are more likely to occur again. Adaptive language modelling was first considered for n-grams, adapting to recent history via caching BID9 BID15, and other methods BID2. More recently, the neural cache approach and the closely related pointer sentinel-LSTM (b) have been used to for adaptive neural language modelling. Neural caching has recently been used to improve the state-of-the-art at word-level language modelling BID18. The neural cache model learns a type of non-parametric output layer on the fly at test time, which allows the network to adapt to recent observations. Each past hidden state h i is paired with the next input x i+1, and is stored as a tuple (h i, x i+1). 
When a new hidden state h t is observed, the output probabilities are adjusted to give a higher weight to output words that coincided with past hidden states with a large inner product (h where e (xi+1) is a one hot encoding of x i+1, and ω is a scaling parameter. The cache probabilities are interpolated with the base network probabilities to adapt the base network at test time. The neural cache closely relates to dynamic evaluation, as both methods can be added on top of a base model for adaptation at test time. The main difference is the mechanism used to fit to recent history: the neural cache approach uses a non-parametric, nearest neighbours-like method, whereas dynamic evaluation uses a gradient descent based method to change model parameters dynamically. Both methods rely on an autoregressive factorisation, as they depend on observing sequence elements after they are predicted in order to perform adaptation. Dynamic evaluation and neural caching methods are therefore both applicable to sequence prediction and generation tasks, but not directly to more general supervised learning tasks. One drawback of the neural cache method is that it cannot adjust the recurrent hidden state dynamics. As a , the neural cache's ability to capture information that occurs jointly between successive sequence elements is limited. This capability is critical for adapting to sequences where each element has very little independent meaning, e.g. character level language modelling. Another related approach is fast weights, BID1 BID27. Fast weights feature recurrent architectures with dynamically changing weight matrices as a function of recent sequence history. Thus, dynamic evaluation as applied at test time, could be considered a form of fast-weights. In traditional fast weights, the network learns to control changes to the weights during training time, allowing it to be applied to more general sequence problems including sequence labeling. In dynamic evaluation, the procedure to change the weights is automated at test time via gradient descent optimization, making it only directly applicable to autoregressive sequence modelling. As dynamic evaluation leverages gradient descent, it has the potential to generalize better to previously unseen pattern repetitions at test time. Dynamic evaluation of neural language models was proposed by BID20. Their approach simply used stochastic gradient descent (SGD) updates at every time step, computing the gradient with fully truncated backpropagation through time, which is equivalent to setting n = 1 in equation FORMULA1. Dynamic evaluation has since been applied to character and word-level language models (; ; BID24). Previous work using dynamic evaluation considered it as an aside, and did not explore it in depth. We propose several changes to BID20's dynamic evaluation method with SGD and fully truncated backpropagation, which we refer to as traditional dynamic evaluation. The first modification reduces the update frequency, so that gradients are backpropagated over more timesteps. This change provides more accurate gradient information, and also improves the computational efficiency of dynamic evaluation, since the update rule is applied much less often. We use sequence segments of length 5 for word-level tasks and 20 for character-level tasks. Next, we add a global decay prior to bias the model towards the parameters θ g learned during training. 
Our motivation for dynamic evaluation assumes that the local generating distribution P l (x) is constantly changing, so it is potentially desirable to weight recent sequence history higher in adaptation. Adding a global decay prior accomplishes this by causing previous adaptation updates to decay exponentially over time. The use of a decay prior for dynamic evaluation relates to the update rule used for fast weights in BID1, which decayed fast weights towards zero exponentially over time. For SGD with a global prior, learning rate η and decay rate λ; we form the update rule DISPLAYFORM0 We then consider using an RMSprop BID28 derived update rule for the learning rule in place of SGD. RMSprop uses a moving average of recent squared gradients to scale learning rates for each weight. In dynamic evaluation, near the start of a test sequence, RMSprop has had very few gradients to average, and therefore may not be able to leverage its updates as effectively. For this reason, we collect mean squared gradients, MS g, on the training data rather than on recent test data (which is what RMSprop would do). MS g is given by DISPLAYFORM1 where N b is the number of training batches and ∇L k is the gradient on the kth training batch. The mini-batch size for this computation becomes a hyper-parameter, as larger mini-batches will in smaller mean squared gradients. The update rule, which we call RMS with a global prior in our experiments, is then DISPLAYFORM2 where is a stabilization parameter. For the decay step of our update rule, we also consider scaling the decay rate for each parameter proportionally to MS g. Parameters with a high RMS gradient affect the dynamics of the network more, so it makes sense to decay them faster. RMS norm is MS g divided by its mean, ing in a normalized version of MS g with a mean of 1: DISPLAYFORM3 We clip the values of RMS norm to be no greater than 1 /λ to be sure that the decay rate does not exceed 1 for any parameter. Combining the learning component and the regularization component in the final update equation, which we refer to as RMS with an RMS global prior DISPLAYFORM4 6 SPARSE DYNAMIC EVALUATION Mini-batching over sequences is desirable for some test-time sequence modelling applications because it allows faster processing of multiple sequences in parallel. Dynamic evaluation has a high memory cost for mini-batching because it is necessary to store a different set of parameters for each sequence in the mini-batch. Therefore, we consider a sparse dynamic evaluation variant that updates a smaller number of parameters. We introduce a new adaptation matrix M which is initialized to zeros. M multiplies hidden state vector h t of an RNN at every time-step to get a new hidden state h t, via h t = h t + Mh t. h t then replaces h t and is propagated throughout the network via both recurrent and feed-forward connections. In a stacked RNN, this formulation could be applied to every layer or just one layer. Applying dynamic evaluation to M avoids the need to apply dynamic evaluation to the original parameters of the network, reduces the number of adaptation parameters, and makes mini-batching less memory intensive. We reduce the number of adaptation parameters further by only using M to transform an arbitrary subset of H hidden units. This in M being an H ×H matrix with d = H 2 adaptation parameters. If H is chosen to be much less than the number of hidden units, this reduces the number of adaptation parameters dramatically. 
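A rough sketch of the sparse adaptation just described is given below (a PyTorch illustration under our own naming; at test time only the H×H matrix M would be handed to the dynamic-evaluation optimizer, while the base RNN weights stay at θ_g):

```python
import torch
import torch.nn as nn

class SparseAdapter(nn.Module):
    """Sparse dynamic evaluation: adapt only an H x H matrix M, initialised to zeros."""
    def __init__(self, adapt_size):
        super().__init__()
        self.adapt_size = adapt_size                          # H, much smaller than the hidden size
        self.M = nn.Parameter(torch.zeros(adapt_size, adapt_size))

    def forward(self, h):
        # h: (batch, hidden_size); transform an arbitrary subset of H hidden units
        h_sub = h[:, :self.adapt_size]
        adapted = h_sub + h_sub @ self.M.t()                  # h' = h + M h on the chosen subset
        return torch.cat([adapted, h[:, self.adapt_size:]], dim=1)
```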
In Section 7.3 we experiment with sparse dynamic evaluation for character-level language models. We applied dynamic evaluation to word-level and character-level language modelling. In all tasks, we evaluate dynamic evaluation on top of a base model. After training the base model, we tune hyper-parameters for dynamic evaluation on the validation set, and evaluate both the static and dynamic versions of the model on the test set. We also consider follow up experiments that analyse the sequence lengths for which dynamic evaluation is useful. We train base models on the Penn Treebank (PTB, BID16, WikiText-2 (b) datasets, and compare the performance of static and dynamic evaluation. These experiments compare dynamic evaluation against past approaches such as the neural cache and measure dynamic evaluation's general performance across different models and datasets. PTB is derived from articles of the Wall Street Journal. It contains 929k training tokens and a vocab size limited to 10k words. It is one of the most commonly used benchmarks in language modelling. We consider two baseline models on PTB, a standard LSTM implementation with recurrent dropout BID33, and the recent state-of-the-art averaged SGD (ASGD) weight-dropped LSTM (AWD-LSTM, BID18 .Our standard LSTM was taken from the Chainer BID29 tutorial on language modelling 1, and used two LSTM layers with 650 units each, trained with SGD and regularized with recurrent dropout. On our standard LSTM, we experiment with traditional dynamic evaluation as applied by BID20, as well as each modification we make building up to our final update rule as described in Section 5. As our final update rule (RMS + RMS global prior) worked best, we use this for all other experiments and use "dynamic eval" by default to refer to this update rule in tables. We applied dynamic evaluation on an AWD-LSTM BID18. The AWD-LSTM is a vanilla LSTM that combines the use of drop-connect BID30 on recurrent weights for regularization, and a variant of ASGD BID25 for optimisation. Our model, which used 3 layers and tied input and output embeddings BID26 BID8, was intended to be a direct replication of AWD-LSTM, using code from their implementation 2. Results are given in Table 1.Dynamic evaluation gives significant overall improvements to both models on this dataset. Dynamic evaluation also achieves better final than the neural cache on both a standard LSTM and the AWD-LSTM reimplementation, and improves the state-of-the-art on PTB. RNN+LDA+kN-5+cache 92.0 CharCNN BID11 19M 78.9 LSTM BID33 66M 82.2 78.4 Variational LSTM BID4 66M 73.4 Pointer sentinel-LSTM (b) 21M 72.4 70.9 Variational LSTM + augmented loss BID8 51M 71.1 68.5 Variational RHN BID34 23M 67.9 65.4 NAS cell 54M 62.4 Variational LSTM + gradual learning 105M 61.7 LSTM + BB tuning BID17 24M 60.9 58.3 LSTM 86.9 82.3 LSTM + neural cache 74. BID8 28M 91.5 87.0 Pointer sentinel-LSTM (b) 84.8 80.8 LSTM + BB tuning BID17 24M 69.1 65.9LSTM 104.2 99.3 LSTM + neural cache 72 WikiText-2 is roughly twice the size of PTB, with 2 million training tokens and a vocab size of 33k. It features articles in a non-shuffled order, with dependencies across articles that adaptive methods should be able to exploit. For this dataset, we use the same baseline LSTM implementation and AWD-LSTM re-implementation as on PTB. Results are given in TAB3.Dynamic evaluation improves the state-of-the-art perplexity on WikiText-2, and provides a significantly greater improvement than neural caching to both base models. 
This suggests that dynamic evaluation is effective at exploiting regularities that co-occur across non-shuffled documents. model valid test LSTM 121.8 LSTM + neural cache 99.9AWD-LSTM 80.0 87.5 AWD-LSTM + neural cache 67.5 75.1 AWD-LSTM + dynamic eval 63.3 70.3 We benchmark the performance of dynamic evaluation against static evaluation and the neural cache on the larger text8 dataset. Like WikiText-2, text8 is derived from Wikipedia text. Text8 was introduced for word level language modelling by BID22, which preprocessed the data by mapping rare words to an'<unk>' token, ing in a vocab of 44k and 17M training tokens. We use the same test set as in BID22, but also hold out the final 100k training tokens as a validation set to allow for fair hyper-parameter tuning (the original task did not have a validation set). We trained an AWD-LSTM with 52M parameters using the implementation from BID18. We then compare the performance of static evaluation, dynamic evaluation, and neural caching at test time. To ensure a fair comparison between dynamic evaluation and the neural cache, we used robust hyper-parameter tuning on the validation set for both methods. For dynamic evaluation, we used the hyper-parameter settings found on PTB, and only tuned the learning rate (to 2 significant figures). The neural cache uses 3 hyper-parameters: the cache length, a mixing parameter and a flatness parameter. Starting from a cache size of 3000, we used a series of grid searches to find optimal values for the mixing parameter and flatness parameter (to 2 significant figures). We then varied the cache size in the range of 2000-4000, and found that the affect of this was negligible, so we kept the cache size at 3000. Results are given in table 3, with the from that used the same test set given for context. Dynamic evaluation soundly outperforms static evaluation and the neural cache method, demonstrating that the benefits of dynamic evaluation do not wash away when using a stronger model with more training data. We consider dynamic evaluation on the character-level text8, and Hutter Prize datasets. The Hutter Prize dataset is comprised of Wikipedia text, and includes XML and characters from non-Latin languages. It is 100 million UTF-8 bytes long and contains 205 unique bytes. Similarly to other reported , we use a 90-5-5 split for training, validation, and testing. The text8 dataset is derived the Hutter Prize dataset, but has all XML removed, and is lower cased to only have 26 characters of English text plus spaces. The character-level text8 task corresponds to the unprocessed version of the text8 data used for our medium-scale word level language modelling experiments. As with Hutter Prize, we use the standard 90-5-5 split for training, validation, and testing for text8. We used a multiplicative LSTM (mLSTM) BID13 3 as our base model for both datasets. The mLSTMs for both tasks used 2800 hidden units, an embedding layer of 400 units, weight normalization , variational dropout BID4, and ADAM BID12 for training. We also consider sparse dynamic evaluation, as described in Section 6, on the Hutter Prize dataset. For sparse dynamic evaluation, we adapted a subset of 500 hidden units, ing in a 500×500 adaptation matrix and 250k adaptation parameters. Our mLSTM only contained one recurrent layer, so only one adaptation matrix was used for sparse dynamic evaluation. All of our dynamic evaluation in this section use the final update rule given in Section 5. 
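The exact update equations are not reproduced above, so purely as a reference point, here is a sketch of what one adaptation step of the "RMS with an RMS global prior" rule could look like, written from the verbal description in Section 5 (the stabilization constant and the clipping are our reading of that description, not a verbatim reproduction):

```python
import numpy as np

def rms_global_prior_step(theta_l, theta_g, grad, ms_g, eta, lam, eps=1e-8):
    """One dynamic-evaluation step: RMS-scaled gradient descent on the segment loss
    plus a per-parameter decay of theta_l back towards the trained parameters theta_g."""
    rms_norm = np.clip(ms_g / ms_g.mean(), None, 1.0 / lam)   # decay faster where gradients are large
    theta_l = theta_l - eta * grad / (np.sqrt(ms_g) + eps) + lam * rms_norm * (theta_g - theta_l)
    return theta_l
```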
Results for Hutter Prize are given in TAB7, and for text8 are given in BID32 17M 1.44 HyperLSTM BID6 27M 1.34 Hierarchical multiscale LSTM BID3 1.32 Bytenet decoder BID10 1.31 LSTM + BB tuning BID17 46M 1.30 Recurrent highway networks BID34 46M 1.27 Fast-slow LSTM BID23 BID32 4M 1.44 LSTM BID3 1.43 Batch normalised LSTM BID3 1.36 Hierarchical multiscale LSTM BID3 1.29 Recurrent highway networks BID34 Dynamic evaluation achieves large improvements to our base models and state-of-the-art on both datasets. Sparse dynamic evaluation also achieves significant improvements on Hutter Prize using only 0.5% of the adaptation parameters of regular dynamic evaluation. We measure time-scales at which dynamic evaluation gains an advantage over static evaluation. Starting from the model trained on Hutter Prize, we plot the performance of static and dynamic evaluation against the number of characters processed on sequences from the Hutter Prize test set, and sequences in Spanish from the European Parliament dataset BID12.The Hutter Prize data experiments show the timescales at which dynamic evaluation gained the advantage observed in TAB7. We divided the Hutter Prize test set into 500 sequences of length 10000, and applied static and dynamic evaluation to these sequences using the same model and methodology used to obtain in TAB7. Losses were averaged across these 500 sequences to obtain average losses at each time step. Plots of the average cross-entropy errors against the number of Hutter characters sequenced are given in FIG2.The Spanish experiments measure how dynamic evaluation handles large distribution shifts between training and test time, as Hutter Prize contains very little Spanish. We used the first 5 million characters of the Spanish European Parliament data in place of the Hutter Prize test set. The Spanish experiments used the same base model and dynamic evaluation settings as Hutter Prize. Plots of the average cross-entropy errors against the number of Spanish characters sequenced are given in FIG2.On both datasets, dynamic evaluation gave a very noticeable advantage after a few hundred characters. For Spanish this advantage continued to grow as more of the sequence was processed, whereas for Hutter, this advantage was maximized after viewing around 2-3k characters. The advantage of dynamic evaluation was also much greater on Spanish sequences than Hutter sequences. We also drew 300 character conditional samples from the static and dynamic versions of our model after viewing 10k characters of Spanish. For the dynamic model, we continued to apply dynamic evaluation during sampling as well, by the process described in Section 3. The conditional samples are given in the appendix. The static samples quickly switched to English that resembled Hutter Prize data. The dynamic model generated data with some Spanish words and a number of made up words with characteristics of Spanish words for the entirety of the sample. This is an example of the kinds of features that dynamic evaluation was able to learn to model on the fly. This work explores and develops methodology for applying dynamic evaluation to sequence modelling tasks. Experiments show that the proposed dynamic evaluation methodology gives large test time improvements across character and word level language modelling. Our improvements to language modelling have applications to speech recognition and machine translation over longer contexts, including broadcast speech recognition and paragraph level machine translation. 
Overall, dynamic evaluation is shown to be an effective method for exploiting pattern re-occurrence in sequences. A.1 DYNAMIC SAMPLES CONDITIONED ON SPANISH 300 character samples generated from the dynamic version of the model trained on Hutter Prize, conditioned on 10k of Spanish characters. The final sentence fragment of the 10k conditioning characters is given to the reader, with the generated text given in bold:Tiene importancia este compromiso en la medida en que la Comisión es un organismo que tiene el montembre tas procedíns la conscriptione se ha Tesalo del Pómienda que et hanemos que Pe la Siemina. De la Pedrera Orden es Señora Presidente civil, Orden de siemin presente relevante frónmida que esculdad pludiore e formidad President de la Presidenta Antidorne Adamirmidad i ciemano de el 200'. Fo 300 character samples generated from the static version of the model trained on Hutter Prize, conditioned on 10k of Spanish characters. The final sentence fragment of the 10k conditioning characters is given to the reader, with the generated text given in bold:Tiene importancia este compromiso en la medida en que la Comisión es un organismo que tiene el monde, &lt;br&gt;There is a secret act in the world except Cape Town, seen in now flat comalo and ball market and has seen the closure of the eagle as imprints in a dallas within the country.&quot; Is a topic for an increasingly small contract saying Allan Roth acquired the government in [ ].===
Paper presents dynamic evaluation methodology for adaptive sequence modelling
1,723
scitldr
We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) dataset from the current state-of-the-art , and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. Generative Adversarial Networks (GANs) BID5 are a framework to construct a generative model that can mimic the target distribution, and in recent years it has given birth to arrays of state-of-the-art algorithms of generative models on image domain BID23; BID30 BID20. The most distinctive feature of GANs is the discriminator D(x) that evaluates the divergence between the current generative distribution p G (x) and the target distribution q(x) BID5 BID16. The algorithm of GANs trains the generator model by iteratively training the discriminator and generator in turn, with the discriminator acting as an increasingly meticulous critic of the current generator. Conditional GANs (cGANs) are a type of GANs that use conditional information BID13 for the discriminator and generator, and they have been drawing attention as a promising tool for class conditional image generation BID17, the generation of the images from text BID20 BID30, and image to image translation BID10 BID31. Unlike in standard GANs, the discriminator of cGANs discriminates between the generator distribution and the target distribution on the set of the pairs of generated samples x and its intended conditional variable y. To the authors' knowledge, most frameworks of discriminators in cGANs at the time of writing feeds the pair the conditional information y into the discriminator by naively concatenating (embedded) y to the input or to the feature vector at some middle layer BID13 BID2 BID20 BID30 BID18 BID22 BID3 BID24. We would like to however, take into account the structure of the assumed conditional probabilistic models underlined by the structure of the discriminator, which is a function that measures the information theoretic distance between the generative distribution and the target distribution. By construction, any assumption about the form of the distribution would act as a regularization on the choice of the discriminator. In this paper, we propose a specific form of the discriminator, a form motivated by a probabilistic model in which the distribution of the conditional variable y given x is discrete or uni-modal continuous distributions. This model assumption is in fact common in many real world applications, including class-conditional image generation and super-resolution. As we will explain in the next section, adhering to this assumption will give rise to a structure of the discriminator that requires us to take an inner product between the embedded condition vector y and the feature vector (Figure 1d). 
With this modification, we were able to significantly improve the quality of the class conditional image generation on the 1000-class ILSVRC2012 dataset BID21 with a single pair of a discriminator and generator (see the generated examples in Figure 2). Also, when we applied our model of cGANs to a super-resolution task, we were able to produce high quality super-resolution images that are more discriminative in terms of the accuracy of the label classifier than the cGANs based on concatenation, as well as the bilinear and the bicubic method. Let us denote the input vector by x and the conditional information by y. We also denote the cGAN discriminator by D(x, y; θ) := A(f(x, y; θ)), where f is a function of x and y, θ is the parameters of f, and A is an activation function of the user's choice. Using q and p to designate the true distribution and the generator model respectively, the standard adversarial loss for the discriminator is given by: DISPLAYFORM0 with A in D representing the sigmoid function. By construction, the nature of the 'critic' D significantly affects the performance of G. A conventional way of feeding y to D until now has been to concatenate the vector y to the feature vector x, either at the input layer BID13 BID2 BID22, or at some hidden layer BID20 BID30 BID18 BID3 BID24 (see Figure 1a and Figure 1b). We would like to propose an alternative to this approach by observing that the form of the optimal solution BID5 for the loss function, Eq., can be decomposed into the sum of two log likelihood ratios: DISPLAYFORM1 Now, we can model the log likelihood ratios r(y|x) and r(x) by some parametric functions f_1 and f_2 respectively. If we make a standing assumption that p(y|x) and q(y|x) are simple distributions, like those that are Gaussian or discrete log linear on the feature space, then, as we will show, the parametrization of the following form becomes natural: DISPLAYFORM2 where V is the embedding matrix of y, φ(·; θ_Φ) is a vector-valued function of x, and ψ(·; θ_Ψ) is a scalar function of the same φ(x; θ_Φ) that appears in f_1 (see Figure 1d). The learned parameters θ = {V, θ_Φ, θ_Ψ} are trained to optimize the adversarial loss. From this point on, we will refer to this model of the discriminator as projection for short. In the next section, we elaborate on how we arrive at this form. In this section, we begin from specific, often recurring models and show that, under certain regularity assumptions, we can write the optimal solution of the discriminator objective function in the form of. Let us first consider the case of a categorical variable. Assume that y is a categorical variable taking a value in {1, . . ., C}, which is common for a class conditional image generation task. The most popular model for p(y|x) is the following log linear model: DISPLAYFORM0 where DISPLAYFORM1 is the partition function, and φ: DISPLAYFORM2 is the input to the final layer of the network model. Now, we assume that the target distribution q can also be parametrized in this form, with the same choice of φ. This way, the log likelihood ratio takes the following form; DISPLAYFORM3 If we treat the difference between the two coefficient vectors for each class c as a single trainable vector v_c, and put together the normalization constant −(log Z_q(φ(x)) − log Z_p(φ(x))) and r(x) into one expression ψ(φ(x)), we can rewrite the equation above as DISPLAYFORM4 by using y to denote a one-hot vector of the label y and using V to denote the matrix consisting of the row vectors v_c.
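For concreteness, the projection form f(x, y) = y^T V φ(x) + ψ(φ(x)) can be sketched as a small PyTorch module (an illustration on our part; the feature extractor φ and details such as spectral normalization are left out, and the names are ours):

```python
import torch.nn as nn

class ProjectionDiscriminator(nn.Module):
    """f(x, y) = y^T V phi(x) + psi(phi(x)), with a class embedding playing the role of V."""
    def __init__(self, feature_extractor, feature_dim, num_classes):
        super().__init__()
        self.phi = feature_extractor                           # phi(x; theta_Phi), outputs (batch, feature_dim)
        self.embed = nn.Embedding(num_classes, feature_dim)    # rows of the embedding matrix V
        self.psi = nn.Linear(feature_dim, 1)                   # scalar term psi(phi(x))

    def forward(self, x, y):
        feat = self.phi(x)
        out = self.psi(feat).squeeze(1)                        # unconditional part
        out = out + (self.embed(y) * feat).sum(dim=1)          # projection: inner product with embedded y
        return out                                             # no final activation; A is chosen separately
```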
Most notably, this formulation introduces the label information via an inner product, as opposed to concatenation. The form is indeed the form we proposed in.We can also arrive at the form for unimodal continuous distributions p(y|x) as well. Let y ∈ R d be a d-dimensional continuous variable, and let us assume that conditional q(y|x) and p(y|x) are both given by Gaussian distributions, so that q(y|x) = N (y|µ q (x), Λ −1 q ) and DISPLAYFORM5 Then the log density ratio r(y|x) = log(q(y|x)/p(y|x)) is given by: DISPLAYFORM6 where ψ(φ(x)) represents the terms independent of y. Now, if we assume that Λ q = Λ p:= Λ, we can ignore the quadratic term. If we further express Λ q W q − Λ p W p in the form V, we can arrive at the form again. Indeed, however, the way that this regularization affects the training of the generator G is a little unclear in its formulation. As we have repeatedly explained, our discriminator measures the divergence between the generator distribution p and the target distribution q on the assumption that p(y|x) and q(y|x) are relatively simple, and it is highly possible that we are gaining stability in the training process by imposing a regularity condition on the divergence measure. Meanwhile, however, the actual p(y|x) can only be implicitly derived from p(x, y) in computation, and can possibly take numerous forms other than the ones we have considered here. We must admit that there is a room here for an important theoretical work to be done in order to assess the relationship between the choice of the function space for the discriminator and training process of the generator. As described above, is a form that is true for frequently occurring situations. In contrast, incorporation of the conditional information by concatenation is rather arbitrary and can possibly include into the pool of candidate functions some sets of functions for which it is difficult to find a logical basis. Indeed, if the situation calls for multimodal p(y|x), it might be smart not to use the model that we suggest here. Otherwise, however, we expect our model to perform better; in general, it is preferable to use a discriminator that respects the presumed form of the probabilistic model. Still another way to incorporate the conditional information into the training procedure is to directly manipulate the loss function. The algorithm of AC-GANs BID17 use a discriminator (D 1) that shares a part of its structure with the classifier(D 2), and incorporates the label information into the objective function by augmenting the original discriminator objective with the likelihood score of the classifier on both the generated and training dataset (see Figure 1c). Plug and Play Generative models (PPGNs) BID15 is another approach for the generative model that uses an auxiliary classifier function. It is a method that endeavors to make samples from p(x|y) using an MCMC sampler based on the Langevin equation with drift terms consisting of the gradient of an autoencoder prior p(x) and a pretrained auxiliary classifier p(y|x). With these method, one can generate a high quality image. However, these ways of using auxiliary classifier may unwittingly encourage the generator to produce images that are particularly easy for the auxiliary classifier to classify, and deviate the final p(x|y) from the true q(x|y). In fact, BID17 reports that this problem has a tendency to exacerbate with increasing number of labels. 
We were able to reproduce this phenomena in our experiments; when we implemented their algorithm on a dataset with 1000 class categories, the final trained model was able to generate only one image for most classes. Nguyen et al.'s PPGNs is also likely to suffer from the same problem because they are using an order of magnitude greater coefficient for the term corresponding to p(y|x) than for the other terms in the Langevin equation. In order to evaluate the effectiveness of our newly proposed architecture for the discriminator, we conducted two sets of experiments: class conditional image generation and super-resolution on ILSVRC2012 (ImageNet) dataset BID21. For both tasks, we used the ResNet BID8 based discriminator and the generator used in BID6, and applied spectral normalization BID14 to the all of the weights of the discriminator to regularize the Lipschitz constant. For the objective function, we used the following hinge version of the standard adversarial loss BID12 BID28 DISPLAYFORM0 where the last activation function A of D is identity function. p(z) is standard Gaussian distribution and G(z, y) is the generator network. For all experiments, we used Adam optimizer with hyper-parameters set to α = 0.0002, β 1 = 0, β 2 = 0.9. We updated the discriminator five times per each update of the generator. We will use concat to designate the models (Figure 1b) 2, and use projection to designate the proposed model (Figure 1d). The ImageNet dataset used in the experiment of class conditional image generation consisted of 1,000 image classes of approximately 1,300 pictures each. We compressed each images to 128×128 pixels. Unlike for AC-GANs 3 we used a single pair of a ResNet-based generator and a discriminator. Also, we used conditional batch normalization BID4 BID1 for the generator. As for the architecture of the generator network used in the experiment, please see FIG2 for more detail. Our proposed projection model discriminator is equipped with a'projection layer' that takes inner product between the embedded one-hot vector y and the intermediate output (FIG2). As for the structure of the the concat model discriminator to be compared against, we used the identical bulk architecture as the projection model discriminator, except that we removed the projection layer from the structure and concatenated the spatially replicated embedded conditional vector y to the output of third ResBlock. We also experimented with AC-GANs as the current state of the art model. For AC-GANs, we placed the softmax layer classifier to the same structure shared by concat and projection. For each method, we updated the generator 450K times, and applied linear decay for the learning rate after 400K iterations so that the rate would be 0 at the end. For the comparative experiments, we trained the model for 450K iterations, which was ample for the training of concat to stabilize. AC-GANs collapsed prematurely before the completion of 450K iterations, so we reported the from the peak of its performance (80K iterations). For all experiments throughout, we used the training over 450K iterations for comparing the performances. On a separate note, our method continued to improve even after 450K. We therefore also reported the inception score and FID of the extended training (850K iterations) for our method exclusively. See the table 1 for the exact figures. We used inception score BID23 for the evaluation of the visual appearance of the generated images. 
It is in general difficult to evaluate how'good' the generative model is. Indeed, however, either subjective or objective, some definite measures of'goodness' exists, and essential two of them are'diversity' and the sheer visual quality of the images. One possible candidate for quantitative measure of diversity and visual appearance is FID BID9. We computed FID between the generated images and dataset images within each class, and designated the values as intra FIDs. More precisely, FID BID9 measures the 2-Wasserstein distance between the two distributions q y and p y, and is given by F (q y, p y) = µ qy − µ py 2 2 + trace C qy + C py − 2(C qy C py) 1/2, where {µ qy, C qy}, {µ py, C py} are respectively the mean and the covariance of the final feature vectors produced by the inception model BID25 from the true samples and generated samples of class y. When the set of generated examples have collapsed modes, the trace of C py becomes small and the trace term itself becomes large. In order to compute C qy we used all samples in the training data belonging to the class of concern, and used 5000 generated samples for the computation of C py. We empirically observed in our experiments that intra FID is, to a certain extent, serving its purpose well in measuring the diversity and the visual quality. To highlight the effectiveness of our inner-product based approach (projection) of introducing the conditional information into the model, we compared our method against the state of the art ACGANs as well as the conventional incorporation of the conditional information via concatenation 2 in the preliminary experiments of the image geneation task on CIFAR-10 BID27 and CIFAR-100 BID27, we confirmed that hidden concatenation is better than input concatenation in terms of the inception scores. For more details, please see TAB1 in the appendix section.3 For AC-GANs, the authors prepared a pair of discriminator and generator for each set classes of size 10. at hidden layer (concat). As we can see in the training curves Figure 3, projection outperforms inception score than concat throughout the training. TAB0 compares the intra class FIDs and the inception Score of the images generated by each method. The shown here for the AC-GANs is that of the model at its prime in terms of the inception score, because the training collapsed at the end. We see that the images generated by projection have lower intra FID scores than both adversaries, indicating that the Wasserstein distance between the generative distribution by projection to the target distribution is smaller. For the record, our model performed better than other models on the CIFAR10 and CIFAR 100 as well (See Appendix A). FIG8 and 10b shows the set of classes for which (a) projection yielded with better intra FIDs than the concat and (b) the reverse. From the top, the figures are listed in descending order of the ratio between the intra FID score between the two methods. Note that when the concat outperforms projection it only wins by a slight margin, whereas the projection outperforms concat by large margin in the opposite case. A quick glance on the cases in which the concat outperforms the projection suggests that the FID is in fact measuring the visual quality, because both sets looks similar to the human eyes in terms of appearance. FIG3 shows an arbitrarily selected set of yielded by AC-GANs from variety of zs. We can clearly observe the mode-collapse on this batch. This is indeed a tendency reported by the inventors themselves BID17. 
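As a reference for the intra FID comparisons discussed here, the per-class Fréchet distance defined above can be computed along the following lines (a sketch assuming SciPy for the matrix square root; feature extraction with the inception model is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu_q, cov_q, mu_p, cov_p):
    """2-Wasserstein distance between Gaussians fitted to inception features of
    the real (q) and generated (p) samples of a single class."""
    diff = mu_q - mu_p
    covmean = linalg.sqrtm(cov_q @ cov_p)
    if np.iscomplexobj(covmean):            # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_q + cov_p - 2.0 * covmean))
```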
ACGANs can generate easily recognizable (i.e classifiable) images, but at the cost of losing diversity and hence at the cost of constructing a generative distribution that is significantly different from the target distribution as a whole. We can also assess the low FID score of projection from different perspective. By construction, the trace term of intra FID measures the degree of diversity within the class. Thus, our on the intra FID scores also indicates that that our projection is doing better in reproducing the diversity of the original. The GANs with the concat discriminator also suffered from mode-collapse for some classes (see FIG4). For the set of images generated by projection, we were not able to detect any notable mode-collapse. FIG5 shows the samples generated with the projection model for the classes on which the cGAN achieved lowest intra FID scores (that is the classes on which the generative distribution were particularly close to target conditional distribution), and FIG5 the reverse. While most of the images listed in FIG5 are of relatively high quality, we still observe some degree of mode-collapse. Note that the images in the classes with high FID are featuring complex objects like human; that is, one can expect the diversity within the class to be wide. However, we note that we did not use the most complicated neural network available for the experiments presented on this paper, because we prioritized the completion of the training within a reasonable time frame. It is very possible that, by increasing the complexity of the model, we will be able to further improve the visual quality of the images and the diversity of the distribution. In Appendix D, we list images of numerous classes generated by cGANs trained with our projection model. With our new architecture, we were also able to successfully perform category morphism. When there are classes y 1 and y 2, we can create an interpolated generator by simply mixing the parameters of conditional batch normalization layers of the conditional generator corresponding to these two classes. FIG6 shows the output of the interpolated generator with the same z. Interestingly, the combination is also yielding meaningful images when y 1 and y 2 are significantly different. Fine-tuning with the pretrained model on the ILSVRC2012 classification task. As we mentioned in Section 4, the authors of Plug and Play Generative model (PPGNs) BID15 were able to improve the visual appearance of the model by augmenting the cost function with that of the label classifier. We also followed their footstep and augmented the original generator loss with an additional auxiliary classifier loss. As warned earlier regarding this type of approach, however, this type of modification tends to only improve the visual performance of the images that are easy for the pretrained model to classify. In fact, as we can see in Appendix B, we were able to improve the visual appearance the images with the augmentation, but at the cost of diversity. We also evaluated the effectiveness of in its application to the super-resolution task. Put formally, the super-resolution task is to infer the high resolution RGB image of dimension DISPLAYFORM0 This task is very much the case that we presumed in our model construction, because p(y|x) is most likely unimodal even if p(x|y) is multimodal. 
For the super-resolution task, we used the following formulation for discriminator function: DISPLAYFORM1 where F (φ(x; θ Φ)) = V * φ(x; θ Φ) where V is a convolutional kernel and * stands for convolution operator. Please see FIG3 in the appendix section for the actual network architectures we used (a) Generated images on the class with'low' FID scores.(b) generated images on the class with'high' FID scores. for this task. For this set of experiments, we constructed the concat model by removing the module in the projection model containing the the inner product layer and the accompanying convolution layer altogether, and simply concatenated y to the output of the ResBlock preceding the inner product module in the original. As for the resolutions of the image datasets, we chose R H = 128 and R L = 32, and created the low resolution images by applying bilinear downsampling on high resolution images. We updated the generators 150K times for all methods, and applied linear decay for the learning rate after 100K iterations so that the final learning rate was 0 at 150K-th iteration. FIG7 shows the of our super-resolution. The bicubic super-resolution is very blurry, and concat is suffering from excessively sharp and rough edges. On the other hand, the edges of the images generated by our projection method are much clearer and smoother, and the image itself is much more faithful to the original high resolution images. In order to qualitatively compare the performances of the models, we checked MS-SSIM BID29 and the classification accuracy of the inception model on the generated images using the validation set of the ILSVRC2012 dataset. As we can see in Table 2, our projection model was able to achieve high inception accuracy and high MS-SSIM when compared to bicubic and concat. Note that the performance of superresolution with concat model even falls behind those of the bilinear and bicubic super-resolutions 878 -in terms of the inception accuracy. Also, we used projection model to generate multiple batches of images with different random values of z to be fed to the generator and computed the average of the logits of the inception model on these batches (MC samples). We then used the so-computed average logits to make prediction of the labels. With an ensemble over 10 seeds (10 MC in Table 2), we were able to improve the inception accuracy even further. This indicates that our GANs are learning the super-resolution as an distribution, as opposed to deterministic function. Also, the success with the ensemble also suggests a room for a new way to improve the accuracy of classification task on low resolution images. Any specification on the form of the discriminator imposes a regularity condition for the choice for the generator distribution and the target distribution. In this research, we proposed a model for the discriminator of cGANs that is motivated by a commonly occurring family of probabilistic models. This simple modification was able to significantly improve the performance of the trained generator on conditional image generation task and super-resolution task. The presented in this paper is strongly suggestive of the importance of the choice of the form of the discriminator and the design A OF CLASS CONDITIONAL IMAGE GENERATION ON CIFAR-10 AND CIFAR-100As a preliminary experiment, we compared the performance of conditional image generation on CIFAR-10 and CIFAR-100 3. For the discriminator and the generator, we reused the same architecture used in BID14 for the task on CIFAR-10. 
For the adversarial objective functions, we used, and trained both machine learners with the same optimizer with same hyper parameters we used in Section 5. For our projection model, we added the projection layer to the discriminator in the same way we did in the ImageNet experiment (before the last linear layer). Our projection model achieved better performance than other methods on both CIFAR-10 and CIFAR-100. Concatenation at hidden layer (hidden concat) was performed on the output of second ResBlock of the discriminator. We tested hidden concat as a comparative method in our main experiments on ImageNet, because the concatenation at hidden layer performed better than the concatenation at the input layer (input concat) when the number of classes was large (CIFAR-100).To explore how the hyper-parameters affect the performance of our proposed architecture, we conducted hyper-parameter search on CIFAR-100 about the Adam hyper-parameters (learning rate α and 1st order momentum β 1) for both our proposed architecture and the baselines. Namely, we varied each one of these parameters while keeping the other constant, and reported the inception scores for all methods including several versions of concat architectures to compare. We tested with concat module introduced at (a) input layer, (b) hidden layer, and at (c) output layer. As we can see in Figure 11, our projection architecture excelled over all other architectures for all choice of the parameters, and achieved the inception score of 9.53. Meanwhile, concat architectures were able to achieve all 8.82 at most. The best concat model in term of the inception score on CIFAR-100 was the hidden concat with α = 0.0002 and β 1 = 0, which turns out to be the very choice of the parameters we picked for our ImageNet experiment. In this experiment, we followed the footsteps of Plug and Play Generative model (PPGNs) BID15 and augmented the original generator loss with an additional auxiliary classifier loss. In particular, we used the losses given by: DISPLAYFORM0 wherep pre (y|x) is the fixed model pretrained for ILSVRC2012 classification task. For the actual experiment, we trained the generator with the original adversarial loss for the first 400K updates, and used the augmented loss for the last 50K updates. For the learning rate hyper parameter, we adopted the same values as other experiments we described above. For the pretrained classifier, we used ResNet50 model used in BID7. FIG0 compares the generated by vanilla objective function and the generated by the augmented objective function. As we can see in TAB2, we were able to significantly outperform PPGNs in terms of inception score. However, note that the images generated here are images that are easy to classify. The method with auxiliary classifier loss seems effective in improving the visual appearance, but not in training faithful generative model. ReLU Conv 3x3ReLU (a) ResBlock architecture for the discriminator. Spectral normalization BID14 was applied to each conv layer. BID4 BID1 was used in place of the standard batch normalization layer. For the ResBlock in the generator for the super resolution tasks that implements the upsampling, the random vector z was fed to the model by concatenating the vector to the embedded low resolution image vector y prior to the first convolution layer within the block. For the procedure of downsampling and upsampling, we followed the implementation by BID6. 
For the discriminator, we performed downsampling (average pooling) after the second conv of the ResBlock. For the generator, we performed upsampling before the first conv of the ResBlock. For the ResBlock that performs the downsampling, we replaced the identity mapping with a 1x1 conv layer followed by downsampling to match the dimensions. We did essentially the same for the ResBlock that performs the upsampling, except that we applied the upsampling before the 1x1 conv.
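A minimal sketch of the discriminator ResBlock described here (our own PyTorch rendering; spectral normalization and the slightly different first block are omitted):

```python
import torch.nn as nn
import torch.nn.functional as F

class DownResBlock(nn.Module):
    """Pre-activation ResBlock that downsamples after the second conv,
    with a 1x1 conv followed by downsampling on the shortcut path."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.conv_sc = nn.Conv2d(in_ch, out_ch, 1)             # shortcut conv to match dimensions

    def forward(self, x):
        h = self.conv1(F.relu(x))
        h = F.avg_pool2d(self.conv2(F.relu(h)), 2)             # downsample after the second conv
        sc = F.avg_pool2d(self.conv_sc(x), 2)                  # 1x1 conv, then downsample
        return h + sc
```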
We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.
1,724
scitldr
Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution. We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is. Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set. Complementary to the popular belief that more data is always better, our highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes. Imagine there are two students who are studying for an exam. Student A studies by diligently learning the class material by heart. Student B studies by learning the underlying reasons for why things are the way they are. Come test day, student A is only able to answer test questions that are very similar to the class material while student B has no trouble answering different looking questions that follow the same reasoning. Distilled from this example, we note there is a trade-off between how "well" a student studied, i.e., how indifferent the student is to receiving exercise or test questions, and how close the test questions are to the exercise questions. While most machine learning work studies the generalization error, i.e., the error when testing on different samples from the same distribution, we do not take the match of train and test distribution as given. In fact, it appears that the distance between train and test distribution may be critical for successful "generalization". Following a similar line of thought, devised a distribution measurement to select only features that do not vary from one domain to another. In contrast, we are interested in linking performance directly to the distance between train and test distribution. Invariance to distribution shifts: We say that a function is invariant to a given input perturbation when the corresponding output does not change with the perturbation. This is desirable when trying to achieve robustness to irrelevant data variations which are called nuisances . As outlined by; , the "optimal" learned function from input to output is maximally invariant to all data variations that do not contain information about the output. To the extent to which a learner reacts to such nuisance variations, which carry no information about the output, it will incur a performance change in expectation. The difficulty lies in knowing what can be ignored and what cannot. Similarity between training and test distribution: Another strategy would be to ensure that the training and test distribution match which has been investigated in a number of diverse settings ). Variations of this theme were encountered by , where they show that networks are able to fit random labels perfectly, yet understandably fail to generalize to the test set of the correct label distribution. 
We frame the learning problem of training on a training set then shifting to test on a test set as a transfer learning problem that goes beyond the usual generalization which assumes the same distribution for training and testing. Based on our analysis, we aim to explicitly express the trade-off of the distance between both data set distributions given by the Wasserstein distance, which measures how close training and testing distribution are, and how invariant the learned function is to the training to test distribution shift. Joined together this is expressed via the Wasserstein distance between training and test samples embedded in the feature space of the learned function. Our experiments show a strong negative linear correlation between the distribution distances and network performance. This corroborates the notion that as long as the true function class is not found, it is best to ensure that training and test distribution are close. While this seems intuitive, it may often be overlooked in the hunt for more data. The trade-off and effect on performance between invariance to distribution shifts and the magnitude of such distribution shifts has been an active field of research, albeit mostly implicitly. Given the correct invariance, changes in distribution do not have a strong effect and learning "generalizes". What has not been investigated so far, to the best of our knowledge, is a more systematic treatment of measuring distribution distances and invariances of learned functions. These assumptions currently remain mainly implicit in the literature. Invariance: Invariance is a key component of Deep Learning with many works focusing on increasing invariance by incorporating structure into their models (; ; ; ; van der). Other works apply GANs , e.g., to domain adaptation problems, to map source and target distributions to the same space while being invariant to occurring distribution shifts. As stand in for many similar works, employ invariance to improve visual adaptation to domain changes. In this work, rather than trying to beat a benchmark, we aim to establish the quantitative and qualitative trade-offs between invariance and distribution shift. In contrast to adapting the structure of a model, many works achieve invariance through data augmentation (b; ; Hernández-García et al., 2019). While most augmentations are handpicked based on domain knowledge, automated methods based on GANs have been successfully applied to data augmentation . Any of these transformations can be considered a canonization , i.e., a generalization of data normalization to group actions, by removing unwanted nuisances to increase robustness. Many other techniques such as regularization , and dimension reduction can also be considered part of the umbrella of invariance learning (; ;). In this work, we simply take the inherent invariance of learned embeddings to approximate the invariance of our function approximator and focus on the distributional shift from train to test distribution. Distribution shifts: Shifts in data set distribution can take a number of forms where either the input distribution p(x), the target distribution p(y) or a combination thereof changes. Likewise imbalanced datasets , methods of over and undersampling , and domain shifts can be described in this framework. By knowing how the distribution changes from one setting to another, were able to define an operating domain for different vision sensors and employ this for a semi-supervised learning setup. 
Part of evaluating distribution shifts is measuring these probability distribution distances. A popular use for this is the Maximum Mean Discrepancy (MMD) distance that has found application in anomaly detection and two-sample testing . Other distances such as the Wasserstein distance have made an impact in computational graphics modeling and in generating realistic-looking synthetic images using GANs . In this work, we apply the Wasserstein distance as a powerful tool to investigate the performance drop when moving from train to test samples. Before jumping ahead, we gain an intuitive understanding of the behavior of a neural network in the case of mismatched train and test distributions. In particular we are interested in the effect of distribution shift from training to test distribution and the effect of invariance of the learned function to such changes. Looking at this through a different lens, we also investigate what impact the type of function we are approximating, and the type of function we are approximating with can have. First, we run experiments detailing changes in the input distribution as illustrated in the top row of Fig. 1. From left to right, we increase input values x by adding a scalar α. As can be readily observed, the quality of the function approximation deteriorates under such a distribution shift. If on the other hand, we normalize the input data by subtracting the mean of the distribution, the function approximation remains unchanged as seen in the second row of Fig. 1. This added invariance through normalization made the function approximation robust to the given distribution shift. Secondly, we run function approximation experiments in which we have a training set in one interval and a test set in another. We focus on what happens when the true function class is the same as the one used for approximation and what happens when this is not the case. We observe in the third row of Fig. 1 that when the correct function is found, as in the second and third image, the approximated function works even in a domain in which it was not trained. On the other hand, the first plot of the third row and the plots in the fourth row show that if the true function is not found exactly, then outside of the training domain errors accumulate. Further, we conclude from the bottom row in Fig. 1 that a oversized training domain (bottom right plot) can hurt performance on the test domain, when compared to the middle plot with overlapping train and test domains. Our takeaway is that invariance indeed influences how distribution shifts affect changes in training and testing performance. We however also note that this topic is more nuanced. The difficulty lies in the fact that finding the "correct" invariances also means finding the true function as visualized in the third row of Fig. 1. These smaller experiments make us expect that it may be unlikely to find the true invariances of an underlying functional relationship. In such a case, ensuring that train and test distribution match thus becomes a viable option to enforce a small change from train to test performance. Noted at the bottom of the spaces are the assumed underlying metrics of the space. We begin our investigation by stating that learning is a process that maps a training dataset D train with distribution p train (x, y) to a learned function approximation Fig. 2. In deep learning this is usually achieved by minimizing a loss function on the training set using stochastic gradient descent . 
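As a concrete numerical reference for the 2-Wasserstein distance W_2(p_1, p_2) that the following analysis builds on, the sketch below estimates it between two one-dimensional samples via the quantile (inverse-CDF) formulation, which is exact in one dimension. This is an illustrative snippet of ours, not code from the paper; the function name and the toy Gaussian samples are assumptions for demonstration only.

```python
import numpy as np

def wasserstein2_1d(a, b, n_quantiles=1000):
    """Empirical 2-Wasserstein distance between two 1-D samples.

    Uses the quantile (inverse-CDF) formulation, which is exact for
    one-dimensional distributions.
    """
    qs = np.linspace(0.0, 1.0, n_quantiles)
    qa = np.quantile(a, qs)
    qb = np.quantile(b, qs)
    return np.sqrt(np.mean((qa - qb) ** 2))

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)       # "training" samples
test_near = rng.normal(loc=0.2, scale=1.0, size=5000)   # small distribution shift
test_far = rng.normal(loc=2.0, scale=1.0, size=5000)    # large distribution shift

print(wasserstein2_1d(train, test_near))  # small distance
print(wasserstein2_1d(train, test_far))   # large distance
```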
Of interest however is the performance of the learned function on a testing dataset D test with possibly different distribution p test (x, y). By modeling the learned relationship f (p train, x) ∈ Y as a function on the data distribution p train (x, y), we are then able to obtain a relationship between changes in the data distribution and changes in the learned function and its performance. Theorem 1. We are given two dataset distributions p 1 (x, y) and p 2 (x, y) of input data x ∈ X and output y ∈ Y situated in the probability space with Wasserstein metric W 2 (·, ·) and assume that f (p, x): P × X → Y is the function of a function family F continuous in the probability space P that minimizes the norm loss L(f (p 1, x), y) = ||f (p 1, x) − y|| 2 when the data is distributed according to p 1 (x, y). Then the expected absolute difference in loss for distribution p 2 has the following lower bound. where denotes the average value of along the integral as part of the intermediate value theorem. Proof. First we note that we can write the learned function for distribution p 2 as the function learned for distribution p 1 plus the line integral of the change of the function along the Wasserstein geodesic between p 1 and p 2. Then the change in loss is lower bounded as follows: From the above lower bound in Ineq. 1, we can deduce two ways of achieving a minimal change in loss when moving from training distribution p 1 to testing distribution p 2. Either the difference in distribution from training set to test set is small W 2 (p 1, p 2) ≈ 0. In this case, by assuming the learned function fits the training distribution perfectly and is not ill-conditioned, we can deduce that also the test distribution is fit perfectly, or The function f (p) does not change when the dataset distribution changes from p 1 to p 2 by being invariant to such changes s.t. E p2 Clearly, if the learned function cannot distinguish between, i.e. is invariant to, changes in training and test set, then even a shift of distribution to the test set cannot alter the quality of the performance. If on the other hand, as may be more common, the function class is not invariant, then the difference in distribution from training to test set becomes decisive. Note that since we regard changes in performance, it is vital to already have a good performance on the training distribution. From the above analysis, we believe that it may be essential to employ training sets that are as close as possible in distribution to the test set. If this is the case one does not need to worry about finding the right function class with the right invariances to approximate the true function. In the case where the learned function has "too much invariance", the function is not noticing some of the relevant variations in the data, e.g. in the extreme a constant would not change from training to testing distribution. While its performance does not change from training to test distribution, its training performance already suffers from unnecessary invariance. If on the other hand, the function is not invariant enough it might fit the training set well yet lose performance when switching to the test distribution. Instead of either calculating the distribution distance on the raw data (W 2 (p 1, p 2)), we compute the distance on an embedding of the raw data that aims to mirror the distances the function f (·) would perceive. 
This has the advantage of avoiding the computation of demanding high-dimensional distribution distances from raw data on the one hand, and having to train networks for both training and test distribution on the other hand. Additionally, it may be hard to properly compare the distribution of a function trained on the training distribution vs. one trained on the test distribution since training and test set oftentimes are of very different size. To measure distribution distances, we employ the Fréchet distance (FD) (Fréchet, 1957), also called the 2-Wasserstein distance, by assuming all probabilities to be normally distributed. The FD is a distance measure between two normal distributions, also adopted in the Fréchet Inception Distance (FID) to evaluate the quality difference of real and synthetic images. Instead of comparing raw data, we use the metric by exploiting domain-relevant features using an embedding network such as the inception network. The FD of two normal distributions N(µ_1, Σ_1) and N(µ_2, Σ_2) is given by: d²(N(µ_1, Σ_1), N(µ_2, Σ_2)) = ||µ_1 − µ_2||²_2 + Tr(Σ_1 + Σ_2 − 2(Σ_1 Σ_2)^{1/2}). While prior work shows that the distance correlates well with human judgement in the vision domain, it can also be applied to other domains if a domain-relevant model replaces the inception network. We apply the FD to measure the mismatch of two sets in the following sections. Next, we aim to show that the distribution distance between training and test distributions matters in many practical problems. We underline this perspective by running experiments in text classification, vision data augmentation, and speech separation. Crucial for these experiments is that we ensure that the output distribution stays the same while only the input distribution changes. In the following we offer the general experimental setup for each dataset. For more comprehensive implementation details please refer to appendices B and C. We begin our investigation in text classification. To this end we employ different subsets of a large text classification dataset which classifies product reviews into binary good or bad product ratings. To measure the effect of distribution shift, we train our rating classifier on texts of one product category and test it on another. This yields many pairwise train/test distances and performances. Specifically, we use the Amazon Review dataset which contains text reviews and ratings divided into various product categories. In our experiments, we select 9 categories of the 5-core (at least 5 reviews per item) subsets. As input we use the review texts and as output we create binary rating categories of good or bad by splitting the rating categories into negative and positive labels. Reviews with a rating of 3 are removed. To infer the FD score between dataset categories, we use pretrained GloVe embeddings of size 100 and 300. The word embeddings of each sentence are averaged to obtain a single embedding for each review. From Fig. 3a, we can see that the classification accuracy of the binarized ratings decreases the further the training set is away from the test set as measured by the FD score. The relationship correlates strongly (Pearson correlation -0.78), and notably there is no high accuracy for large FD scores. Next we examine the effect of distribution shift in vision classification tasks. Special about this problem is that the task is rarely the same across many datasets. Instead we study distribution shifts that are created when applying data augmentation techniques. We study the problem on the popular CIFAR10, Street View House Numbers (SVHN), and Fashion MNIST datasets.
As a classification network, we apply a simple convolutional neural network (see appendix B for more details) and use the Inception-v3 network to compute the FD score. In order to create distribution shifts, we apply data augmentation techniques such as flipping, changing contrast, random cropping, and adding salt and pepper noise. We report the resulting classification accuracies and FD scores in Tab. 1. Intriguing about the results displayed in Tab. 1 is that yet again we see that larger FD scores lead to worse performance. The numbers however are more nuanced than in the previous text experiments. The example of salt and pepper noise shows how nonlinear the relationship between distribution distance and performance can be. While the performance does decrease with increasing distance, it does so at a different scale than observed for the other data augmentation techniques. Nevertheless, these results suggest a direct application of our insights: selecting the augmentation methods applied based on the FD score between the augmented training set and a validation set. We leave an empirical validation of this use case to future work. As a final experiment, we venture into speech separation, which deals with separating the speech of 2 or more speakers from audio data. This task is unusual because, through the combinatorial way the dataset is generated, it allows for very large datasets. This lets us gauge the influence of distribution shift especially when more than sufficient data is available. Supervised speech separation requires mixed speaker samples of at least two speakers and the corresponding single speaker streams as labels. In the monaural (single-channel) case, one can simply add two single speaker streams to create overlap. The resulting combinatorial possibilities create large datasets. We collect data from the Libri Speech Corpus, the Voxceleb2 data set, the Common Voice data set and the TED-LIUM speech corpus. We balance the number of speakers and time per speaker for the first four data sets, keeping 707 speakers for each data set and 7.5 minutes per speaker, leading to roughly 88 hours in the training set. For the test set, we sample 10 hours from each data set, from a held-back set of data with new speakers. We create mixes from equal datasets since they have more consistent recording settings, which resemble real-world situations more closely. We adopt the strongest performing configuration of the Conv-TasNet model with an SDRi score of 15.6 on the 2-mix WSJ dataset. Identically to the original Conv-TasNet setup, we use 4-second-long samples with a sampling rate of 8 kHz. We report scores as an SDR improvement score (SDRi), obtained through the difference between the SDR of the processed and unprocessed sample. In place of the inception network, we leverage the encoded features of a speaker embedding network to compute FD scores. Detailed hyperparameters and layer configurations are reported in the appendix in Tab. 2. Again, as depicted in Fig. 4, we find a strong correlation (-0.61) between loss in performance and distribution distance between different training and testing distribution pairs. It appears that larger data sets resulting in better performance is not the whole story. Fig. 4a shows that a large combined dataset performs worse on the individual test sets than a model trained only on the corresponding training set. The presented experiments substantiate Theorem 1 by showing a strong correlation between network performance and data distribution distances, especially visible in Fig. 3.
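The FD score used throughout the text, vision and speech experiments above reduces to fitting a Gaussian to the embedded samples of each set and evaluating the closed-form expression given earlier. Below is a minimal sketch of that computation; the embedding step (e.g. Inception-v3 pool-3 features for vision or the speaker-embedding network for audio) is assumed to be available, and the helper name frechet_distance is ours, not the authors'.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b, eps=1e-6):
    """Fréchet distance between Gaussians fitted to two sets of embeddings.

    feats_a, feats_b: arrays of shape (n_samples, embedding_dim), e.g.
    Inception pool-3 features (vision) or speaker-embedding features (speech).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    diff = mu_a - mu_b
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if not np.isfinite(covmean).all():
        # Numerical stabilization for nearly singular covariances.
        offset = np.eye(cov_a.shape[0]) * eps
        covmean, _ = linalg.sqrtm((cov_a + offset) @ (cov_b + offset), disp=False)
    covmean = covmean.real

    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Under the use case suggested above, candidate augmentations could then be ranked by the FD between the augmented training set and a validation set, keeping those with the smallest distance.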
Following popular wisdom, one would be led to believe that more data is all you need. The presented theory and experiments however clearly detail that, while the amount of data is important, ensuring that train and test distribution are close may be similarly significant to perform well on the test set. From the small-scale and real-world experiments we are left with the startling observation that, frequently, neural networks do not find the "true" functional relationship between input and output. If this were the case, distribution shifts between training and testing should have a smaller impact. Whether this problem can be remedied by finding richer function classes or whether it may be inherently unsolvable will have to be investigated. An important aspect of this work is how we measure distribution distances. By using a representation network, we obtain low dimensional embeddings and reduce the effect of noisy data. This embedding is however in itself limited by its features, training data, and objective. To apply the insights of this work, it will therefore be paramount to carefully choose an embedding for each dataset and task, whose features are able to meaningfully model the various data shifts a desired learning algorithm would react to. As an example, a word embedding trained only on English texts will not provide meaningful on other languages and hence is useless for modeling a distribution shift. Through this work, we emphasize the consequences of using models for predictions, which do not share the invariances of the true functional relationship. In this case, data distribution shifts lead to a deterioration of performance. As a remedy to this issue, we propose applying the Fréchet distance to measure the distance of the dataset distributions to infer the degree of mismatch. With this measure, we can deduce important criteria to choose training sets, select data augmentation techniques, and help optimize networks and their invariances. We believe that making the problem explicit and having a way to measure progress through the FD score may allow for a new wave of innovative ideas on how to address generalization under data shifts. The appendix comprises extended details of the work that may add to the understanding without being critical to follow the main text. The following specifies the experimental details of the small-scale insight experiments from section 3. Throughout these experiments, a model needs to approximate a function f (x) given a point x ∈ R, which is sampled from the uniform distribution. We limit sampling of x during training to a specific interval [a, b] and add gaussian noise from N (0, 0.4) to the training labels. We further elaborate the experiments based on the rows of Fig. 1. In rows one and two of Fig. 1, a model is trained to approximate the function f (x) = e −x cos(2πx) on the training interval [−2, 2]. During testing, we shift the test interval by the scalar α = [0, −0.5, 0.5] (images from left to right respectively). Additionally, we apply a zero mean normalization to the input data in the experiments of the second row. Note that the ground truth is invariant to a change in the input distribution and thus stays consistent. In the third row of Fig. 1, x is drawn from Uniform[−6, 6] during training, while the test set samples lie outside of the training domain in [−10, −6] and. The functions from left to right are given as follows: For all task we use the same neural network as a model. 
We train a two-layer feedforward neural network with 8 hidden units in its first layer followed by a ReLU activation function and a linear layer with one output unit in its second layer. In total the network consists of 25 parameters including biases, which we optimize using the mean squared error and Adam optimizer . To complete the description of the experiment in section 5.1, we offer a more detailed description of the preprocessing and network details. Before a model can use the text data from the amazon review dataset, several processing steps are necessary to convert text to numeric representations. Each review contains several information fields. However, we only utilize the overall score and the review text, from which we remove capitalization, punctuation and multiple digit numbers. The overall score is split into positive and negative reviews, while ratings of 3 are removed. The vocabulary size is limited to 10,000 words ing in less frequent words being replaced by the < UNK > symbol. To achieve consistent sentence lengths, we cut or pad reviews to a fixed length of 256 with padded words set to the symbol < PAD >. As a last processing step, we tokenize the training data and apply the mapping to the paired test set. We report the complete classification accuracies and FD scores in Tab. 3 between each category used as a training and test set. The classification network consists of an embedding, a global average pooling, and three dense layers of size 32, 16 and 1, respectively. ReLUs are applied after the first two dense layers and a Sigmoid activation function is applied to the last layer. We use Adam and the binary cross-entropy loss for optimization with a learning rate of 0.01. Again we offer complementary details to the vision classification experiments with data augmentation. Especially, we provide more details on the examined datasets, augmentations, and classification network. The CIFAR10 dataset consists of 32 × 32 colored natural images divided into 10 classes. The training and test set consists of 50,000 images and 10,000 images, respectively. The Street View House Numbers (SVHN) dataset, also contains 10 categories, one for each digit between 1 and 10. For SVHN, training set and test set consist of 73,257 images and 26,032 images, respectively. The Fashion MNIST dataset consists of 60,000 train and 10,000 test images. The dataset contains grayscale images of clothes again classified into 10 classes. For data augmentation we offer complementary details. Random cropping reduces the image size to 24 × 24 and is only applied to the middle of images in the test set. We differentiate between only cropping and cropping and resize, in which the images are rescaled back to 32 × 32. However, the test set remains unchanged when rescaling. For salt and pepper noise, we set distorted pixels equiprobably to black or white with a probability of 33%. For random contrast adjustment, we set the upper and lower limit to 1.8 and 0.2, respectively. The classification network is based on a CNN structure. Convolutional layers of size 64 use a kernel size of 5×5 followed by a ReLu activation function, a max pooling layer with a kernel size of 3×3 and stride 2, and a local response normalization (a) with a depth radius of 4, a bias of 1, α equal to 0.001 9, and β set to 0.75. After two blocks three fully-connected layers of size 384, 192 and 10 are used, with ReLu activations after the first two and a softmax layer following the last layer. 
For optimization, we use the cross-entropy loss and gradient descent with a initial learning rate of 0.1 and an exponential decay rate of 0.1. Additionally, we use exponential moving average on the weights with a decay factor of 0.9999. In order to compute FD scores, we use the Inception-v3 network, which is available pretrained in the Tensorflow library . Images are resized to 299 × 299 to fit the input dimension of the network, which was pretrained on Imagenet . The ing feature representations are retrieved from the pool-3 layer with 2048 output dimensions. Making the description of the speech separation experiments more comprehensive, we offer the following additional implementation details for the speech embedding network for which we adopt the generalized end-to-end loss (GE2E) proposed by. We train a network build on convolutional and LSTM layers. The inputs are based on 3 second long samples transformed into the Short-Time Fourier Transform (STFT) spectrogram and the Mel-frequency cepstral coefficients (MFCC). For the spectrogram creation, we use a hop size of 6 ms and a window length of 23 ms. We apply a simple 1-dimensional convolutional layer with a kernel size of 1 to the MFCC to match the number of frequency bins of the STFT, such that we can use the concatenation of both as an input to our network. Tab. 2 summarizes the network following the input, predicting a 100 dimensional speaker embedding. Table 2: Detailed specifications of the speaker embedding network trained with the GE2E Loss applied to calculate FD scores in section 5.3. The network is based on the densely connected network , fully-connected layers (FC), LSTMs , and Statistical Pooling . As an optimizer, we use Adam with a learning rate of 0.0001. Filter Sizes Filters
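To tie the small-scale setup of this appendix together, the following is a minimal reconstruction of the interval-shift experiment: a two-layer network with 8 hidden units fits f(x) = e^(-x) cos(2πx) with Gaussian label noise on the training interval [-2, 2] and is then evaluated on intervals shifted by α. This is our own sketch under stated assumptions (iteration counts, sample sizes, and whether 0.4 is a standard deviation or a variance are not specified in the text), not the authors' code.

```python
import numpy as np
import torch
import torch.nn as nn

def target(x):
    return np.exp(-x) * np.cos(2 * np.pi * x)

def make_set(low, high, n, noise=0.4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(low, high, size=(n, 1))
    y = target(x) + rng.normal(0.0, noise, size=(n, 1))  # N(0, 0.4) label noise
    return torch.tensor(x, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

# Two-layer network with 8 hidden units, as described in appendix A.
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x_train, y_train = make_set(-2.0, 2.0, 2000)
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

# Evaluate on test intervals shifted by alpha (cf. the top row of Fig. 1).
for alpha in [0.0, -0.5, 0.5]:
    x_test, y_test = make_set(-2.0 + alpha, 2.0 + alpha, 2000, seed=1)
    with torch.no_grad():
        mse = loss_fn(model(x_test), y_test).item()
    print(f"shift alpha={alpha:+.1f}  test MSE={mse:.3f}")
```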
The Fréchet distance between train and test distribution correlates with the change in performance for functions that are not invariant to the shift.
1,725
scitldr
We introduce a new procedural dynamic system that can generate a variety of shapes that often appear as curves, but technically, the figures are plots of many points. We name them spiroplots and show how this new system relates to other procedures or processes that generate figures. Spiroplots are an extremely simple process but with a surprising visual variety. We prove some fundamental properties and analyze some instances to see how the geometry or topology of the input determines the generated figures. We show that some spiroplots have a finite cycle and return to the initial situation, whereas others will produce new points infinitely often. This paper is accompanied by a JavaScript app that allows anyone to generate spiroplots.
A new, very simple dynamic system is introduced that generates pretty patterns; properties are proved and possibilities are explored
1,726
scitldr
Unsupervised image-to-image translation aims to learn a mapping between several visual domains by using unpaired training pairs. Recent studies have shown remarkable success in image-to-image translation for multiple domains but they suffer from two main limitations: they are either built from several two-domain mappings that are required to be learned independently and/or they generate low-diversity , a phenomenon known as model collapse. To overcome these limitations, we propose a method named GMM-UNIT based on a content-attribute disentangled representation, where the attribute space is fitted with a GMM. Each GMM component represents a domain, and this simple assumption has two prominent advantages. First, the dimension of the attribute space does not grow linearly with the number of domains, as it is the case in the literature. Second, the continuous domain encoding allows for interpolation between domains and for extrapolation to unseen domains. Additionally, we show how GMM-UNIT can be constrained down to different methods in the literature, meaning that GMM-UNIT is a unifying framework for unsupervised image-to-image translation. Translating images from one domain into another is a challenging task that has significant influence on many real-world applications where data are expensive, or impossible to obtain and to annotate. Image-to-Image translation models have indeed been used to increase the resolution of images , fill missing parts , transfer styles , synthesize new images from labels , and help domain adaptation . In many of these scenarios, it is desirable to have a model mapping one image to multiple domains, while providing visual diversity (i.e. a day scene ↔ night scene in different seasons). However, the existing models can either map an image to multiple stochastic in a single domain, or map in the same model multiple domains in a deterministic fashion. In other words, most of the methods in the literature are either multi-domain or multi-modal. Several reasons have hampered a stochastic translation of images to multiple domains. On the one hand, most of the Generative Adversarial Network (GAN) models assume a deterministic mapping (; ; a), thus failing at modelling the correct distribution of the data. On the other hand, approaches based on Variational Auto-Encoders (VAEs) usually assume a shared and common zero-mean unit-variance normally distributed space b), limiting to two-domain translations. In this paper, we propose a novel image-to-image translation model that disentangles the visual content from the domain attributes. The attribute latent space is assumed to follow a Gaussian mixture model (GMM), thus naming the method: GMM-UNIT (see Figure 1). This simple assumption allows four key properties: mode-diversity thanks to the stochastic nature of the probabilistic latent model, multi-domain translation since the domains are represented as clusters in the same attribute spaces, scalability because the domain-attribute duality allows modeling a very large number of domains without increasing the dimensionality of the attribute space, and few/zero-shot generation since the continuity of the attribute representation allows interpolating between domains and extrapolating to unseen domains with very few or almost no observed data from these domains. The code and models will be made publicly available.: GMM-UNIT working principle. 
The content is extracted from the input image (left, purple box), while the attribute (turquoise box) can be either sampled (top images) or extracted from a reference image (bottom images). Either way, the generator (blue box) is trained to output realistic images belonging to the domain encoded in the attribute vector. This is possible thanks to the disentangled attribute-content latent representation of GMM-UNIT and the generalisation properties associated to Gaussian mixture modeling. Our work is best placed in the literature of image-to-image translation, where the challenge is to translate one image from a visual domain (e.g. summer) to another one (e.g. winter). This problem is inherently ill-posed, as there could be many mappings between two images. Thus, researchers have tried to tackle the problem from many different perspectives. The most impressive on this task are undoubtedly related to GANs, which aim to synthesize new images as similar as possible to the real data through an adversarial approach between a Discriminator and a Generator. The former continuously learns to recognize real and fake images, while the latter tries to generate new images that are indistinguishable from the real data, and thus to fool the Discriminator. These networks can be effectively conditioned and thus generate new samples from a specific class ) and a latent vector extracted from the images. For example, and trained a conditional GAN to encode the latent features that are shared between images of the same domain and thus decode the features to images of the target domain in a one-toone mapping. However, this approach is limited to supervised settings, where pairs of corresponding images in different domains are available (e.g. a photos-sketch image pair). In many cases, it is too expensive and unrealistic to collect a large amount of paired data. Unsupervised Domain Translation. Translating images from one domain to another without a paired supervision is particularly difficult, as the model has to learn how to represent both the content and the domain. Thus, constraints are needed to narrow down the space of feasible mappings between images. proposed to minimize the feature-level distance between the generated and input images. created a shared latent space between the domains, which encourages different images to be mapped in the same latent space. Zhu et al. (2017a) proposed CycleGAN, which uses a cycle consistency loss that requires a generated image to be translated back to the original domain. used a reconstruction loss applying the same approach to both the target and input domains. later expanded the previous approach to the problem of translating multiple instances of objects in the same image. All these methods, however, are limited to a one-to-one domain mapping, thus requiring training multiple models for cross-domain translation. proposed StarGAN, a unified framework to translate images in a multi-domain setting through a single GAN model. To do so, they used a conditional label and a domain classifier ensuring network consistency when translating between domains. However, StarGAN is limited to a deterministic mapping between domains. Style transfer. A related problem is style transfer, which aims to transform the style of an image but not its content (e.g. from a photo to a Monet painting) to another image (; ; ;). Differently from domain translation, usually the style is extracted from a single reference image. We willshow that our model could be applied to style transfer as well. 
Multi-modal Domain Translation. Most existing image-to-image translation methods are deterministic, thus limiting the diversity of the translated outputs. However, even in a one-to-one domain translation such as when we want to translate people's hair from blonde to black, there could be multiple hair styles that are not modeled in a deterministic mapping. The straightforward solution would be to inject noise in the model, but it turned out to be worthless as GANs tend to ignore this injected noise. To address this problem, Zhu et al. (2017b) proposed BicycleGAN, which encourages multi-modality in a paired setting through GANs and Variational Auto-Encoders (VAEs). Other work has instead augmented CycleGAN with two latent variables for the input and target domains and showed that it is possible to increase diversity by marginalizing over these latent spaces. MUNIT was later proposed, which assumes that domains share a common content space but different style spaces; by sampling from the style space and using Adaptive Instance Normalization (AdaIN), it is possible to have diverse and multimodal outputs. In a similar vein, other approaches focused on semantic consistency during the translation, and applied AdaIN to the feature-level space. Recently, a mode-seeking loss was proposed to encourage GANs to better explore the modes and help the network avoid mode collapse. Altogether, the models in the literature are either multi-modal or multi-domain. Thus, one has to choose between generating diverse results and training one single model for multiple domains. Here, we propose a unified model to overcome this limitation. Concurrent to our work, DRIT++ also proposed a multi-modal and multi-domain model using a discrete domain encoding and assuming, however, a zero-mean unit-variance Gaussian shared space for multiple modes. We instead propose a content-attribute disentangled representation, where the attribute space fits a GMM distribution. A variational loss forces the latent representation to follow this GMM, where each component is associated with a domain. This is the key to provide for both multi-modal and multi-domain translation. In addition, GMM-UNIT is the first method proposing a continuous encoding of the domains, as opposed to the discrete encoding used in the literature. This is important because it allows for domain interpolation and extrapolation with very few or no data (few/zero-shot generation). The main properties of GMM-UNIT compared to the literature are shown in Table 1 (domain encoding: BicycleGAN: none; MUNIT: none; StarGAN: discrete; DRIT++: discrete; GMM-UNIT (proposed): continuous). 3 GMM-UNIT GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion. Following recent seminal works, our model assumes that each image can be decomposed into a domain-invariant content space and a domain-specific attribute space. In this paper, we model the attribute latent space through Gaussian Mixture Models (GMMs), formally with a K-component Z-dimensional GMM: p(z) = Σ_{k=1}^{K} φ_k N(z; µ_k, Σ_k), where z ∈ R^Z denotes a random attribute vector sample, and φ_k, µ_k and Σ_k denote respectively the weight, mean vector and covariance matrix of the k-th GMM component (φ_k ≥ 0, and Σ_k ∈ R^{Z×Z} is symmetric and positive definite). p(z) denotes the probability density of this GMM at z. In the proposed representation, the domains are Gaussian components in a mixture. This simple yet effective model has two prominent advantages.
Differently from previous works where each domain is a category and the one-hot vector representation grows linearly with the number of domains, we can encode many more domains than the dimension of the latent attribute space Z. Moreover, the continuous encoding of the domains we are introducing in this paper allows us to navigate in the attribute latent space, thus generating images corresponding to domains that have never (or very little) been observed and allowing interpolation between two domains. We note that the state-of-the-art models can be traced back to particular cases of GMMs. Existing multi-domain models can be modelled with K equal to the number of domains in the training data and Σ_k = 0 for all k, thus only allowing the generation of a single result per domain translation. Then, when K = 1, µ = 0, and Σ = I, it is possible to model the state-of-the-art approaches in multi-modal translation, which share a unique latent space where every domain is overlapped, and it is thus necessary to train N(N − 1) models to achieve the multi-domain translation. Finally, we can obtain the approach of DRIT++ by separating the latent space from the domain code. The former is a GMM with K = 1, µ = 0, and Σ = I, while the latter is another GMM with K equal to the number of domains in the training data and Σ_k = 0 for all k. Thus, our GMM-UNIT is a generalization of the existing state of the art. In the next sections, we formalize our model and show that the use of GMMs for the latent space allows learning multi-modal and multi-domain mappings, and also few/zero-shot image generation. GMM-UNIT follows the generative-discriminative philosophy. The generator inputs a content latent code c ∈ C = R^C and an attribute latent code z ∈ Z = R^Z, and outputs a generated image G(c, z). This image is then fed to a discriminator that must discern between "real" or "fake" images (D_r/f), and must also recognize the domain of the generated image (D_dom). For an image x_n from domain X_n (i.e. x_n ∼ p_{X_n}), its latent attribute z_n is assumed to follow the n-th Gaussian component, i.e. z_n ∼ N(µ_n, Σ_n). The attribute and content latent representations need to be learned, and they will be modeled by two architectures, namely a content extractor E_c and an attribute extractor E_z. See Figure 2 for a graphical representation of GMM-UNIT. In addition to tackling the problem of multi-domain and multi-modal translation, we would like these two extractors, content and attribute, to be disentangled. This would constrain the learning and hopefully yield better domain translation, since the content would be as independent as possible from the attributes. Formally, two such properties must hold: one for translation with attributes sampled from the GMM, and one for translation with attributes extracted from a reference image (extracted attribute translation). The encoders E_c and E_z, and the generator G need to be learned to satisfy three main properties. Consistency: when traveling through the network, the generated/extracted codes and images must be consistent with the original samples. Fit: the distribution of the attribute latent space must follow a GMM. Realism: the generated images must be indistinguishable from real images. In the following we discuss different losses used to force the overall pipeline to satisfy these properties. In the consistency term, we include image, attribute and content reconstruction, as well as cycle consistency.
More formally, the consistency losses are: self-reconstruction of any input image from its extracted content and attribute vectors; content reconstruction from an image translated into any domain; attribute reconstruction from an image translated with any content (in practice, this loss needs to be complemented with an isometry loss); and cycle consistency when translating an image back to the original domain. In the fit term we encourage both the attribute latent variable to follow the Gaussian mixture distribution and the generated images to follow the domain's distribution. We set two loss functions. Kullback-Leibler divergence between the extracted latent code and the model. Since the KL divergence between two GMMs is not analytically tractable, we resort to the fact that we know from which domain we are sampling and compute the divergence with respect to the corresponding Gaussian component, where D_KL(p‖q) = ∫ p(t) log (p(t)/q(t)) dt is the Kullback-Leibler divergence. Domain classification of generated and original images. For any given input image x, we would like the method to classify it as its original domain, and to be able to generate from its content an image in any domain. Therefore, we need two different losses, one directly applied to the original images, and a second one applied to the generated images, where d_{X_n} is the label of domain n. Importantly, while the generator is trained using the second loss only, the discriminator D_dom is trained using both. The realism term tries to make the generated images indistinguishable from real images; we adopt the adversarial loss to optimize both the real/fake discriminator D_r/f and the generator G. The full objective function of our network is the weighted sum of the above terms, where {λ_GAN, λ_s/rec, λ_c/rec, λ_a/rec, λ_cyc, λ_KL, λ_iso, λ_dom} are hyper-parameters weighting the corresponding loss terms. The value of most of these parameters comes from the literature. We refer to Appendix A for the details.
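To make the fit term concrete, the sketch below computes the KL component for a batch of images, assuming (our assumption, since the exact parameterization is not restated here) that the attribute encoder E_z outputs the mean and log-variance of a diagonal Gaussian and that the target is the n-th spherical GMM component N(µ_n, σ_n² I). The closed-form diagonal-Gaussian KL used is standard, and all variable names are ours.

```python
import math
import torch

def kl_to_component(mu_q, logvar_q, mu_n, sigma_n):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_n, sigma_n^2 I) ).

    mu_q, logvar_q: (batch, Z) outputs of the attribute encoder E_z.
    mu_n: (Z,) mean of the n-th GMM component; sigma_n: scalar std.
    The KL is summed over the Z dimensions and averaged over the batch.
    """
    var_q = logvar_q.exp()
    var_n = sigma_n ** 2
    kl = 0.5 * (math.log(var_n) - logvar_q
                + (var_q + (mu_q - mu_n) ** 2) / var_n
                - 1.0)
    return kl.sum(dim=1).mean()
```

The remaining consistency, adversarial and domain-classification terms would then be combined with this term into the weighted sum described above.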
We quantitatively evaluate the performance of our method through image quality and diversity of generated images. The former is evaluated through the Fréchet Inception Distance (FID) and the Inception Score . We evaluate the latter through the LPIPS Distance , NDB and JSD metrics. In addition, we also show the overall number of parameters used for all domains (Params). FID We use FID to measure the distance between the generated and real distributions. Lower FID values indicate better quality of the generated images. We estimate the FID using 100 input images and 100 samples per input v.s. randomly selected 10000 images from the target domain. IS To estimate the IS, we use Inception-v3 fine-tuned on our specific datasets as classifier for 100 input images and 100 samples per input image. Higher IS means higher generated image quality. LPIPS The LPIPS distance is defined as the L 2 distance between the features extracted by a deep learning model of two images. This distance has been demonstrated to match well the human perceptual similarity . Thus, following Zhu et al. (2017b);; , we randomly select 100 input images and translate them to different domains. For each domain translation, we generate 10 images for each input image and evaluate the average LPIPS distance between the 10 generated images. Finally, we get the average of all distances. Higher LPIPS distance indicates better diversity among the generated images. NDB and JSD These are measuring the similarity between the distributions of real and generated images. We use the same testing data as for FID. Lower NDB and JSD mean the generated data distribution approaches better the real data distribution. We first evaluate our model on a simpler task than multi-domain translation: two-domain translation (e.g. edges to shoes). We use the dataset provided by; Zhu et al. (2017a) containing images of shoes and their edge maps generated by HED . We train a single model for edges ↔ shoes without using paired information. Figure 3 displays examples of shoes generated from the same sketch by GMM-UNIT. Table 2 shows the quantitative evaluation and comparison with the state-of-the-art. Our model generates images with high diversity and quality using half the parameters of the state of the art. We refer to Appendix B.1 for additional on this task. Figure 3: Examples of edges → shoes translation with the proposed GMM-UNIT. We then evaluate our model in a multi-domain translation problem where each domain is composed by digits collected in different scenes. We use the Digits-Five dataset introduced in , from which we select three different domains, namely MNIST , MNIST-M , a colorized version of MNIST for domain adaptation, and Street View House Numbers (SVHN) . We compare our model with the state-of-theart on multi-domain translation, and we show in Figure 4 and Table 3 the qualitative and quantitative respectively. From these we conclude that StarGAN* fails at generating diversity, thus confirming the findings of previous studies that adding noise does not increase diversity (; ; b). GMM-UNIT instead generates images with higher quality and diversity than all the state-of-the-art models. We note, however, that StarGAN* achieves a higher IS, probably due to the fact that it solves a simpler task. Additional experiments carried out implementing a StarGAN*-like GMM-UNIT (i.e. setting σ σ σ k = 0, ∀k) indeed produced similar . 
Specifically, the StarGAN*-like GMM-UNIT tends to generate for each input image one single (deterministic) output and thus the corresponding LPIPS scores are around zero. We refer to Appendix B.2 for additional on this task. We also evaluate GMM-UNIT in the complex setting of multi-domain translation in a dataset of facial attributes. We use the CelebFaces Attributes (CelebA) dataset , which contains 202,599 face images of celebrities where each face is annotated with 40 binary attributes. We resize the initial 178×218 size images to 128×128. We randomly select 2,000 images for testing and use all remaining images for training. This dataset is composed of some attributes that are mutually exclusive (e.g. either male or female) and those that are mutually inclusive (e.g. people could have both blonde and black hair). Thus, we model each attribute as a different GMM component. For this reason, we can generate new images for all the combinations of attributes by sampling from the GMM. As aforementioned, this is not possible for state-of-the-art models such as StarGAN and DRIT++, as they use one-hot domain codes to represent the domains. For the purpose of this experiment we show five binary attributes: hair color (black, blond, brown), gender (male/female), and age (young/old). These five attributes allow GMM-UNIT to generate 32 domains. Figure 5 shows some generated of our model. We can see that GMM-UNIT learns to translate images to simple attributes such as blonde hair, but also to translate images with combinations of them (e.g. blonde hair and male). Moreover, we can see that the rows show different realizations of the model thus demonstrating the stochastic approach of GMM-UNIT. These are corroborated by Table 4 that shows that our model is superior to StarGAN* in both quality and diversity of generated images. We also note in this experiment that the IS is higher in StarGAN*. Additional are on Appendix B.3. We evaluate our model on style transfer, which is a specific task where the style is usually extracted from a single reference image. Thus, we randomly select two input images and synthesize new images where, instead of sampling from the GMM distribution, we extract the style (through E z) from some reference images. Figure 6 shows that the generated images are sharp and realistic, showing that our method can also be effectively applied to Style transfer. Figure 6: Examples of GMM-UNIT applied on the Style transfer task. The style is here extracted from a single reference images provided by the user. In addition, we evaluate the ability of GMM-UNIT to synthesize new images with attributes that are extremely scarce or non present in the training dataset. To do so, we select three combinations of attributes consisting of less than two images in the CelebA dataset: Black hair+Blonde hair+Male+Young, Black hair+Blonde hair+Female+Young and Black hair+Blonde hair+Brown+Young. Figure 7: Generated images in previously unseen combinations of attributes. Figure 7 shows that learning the continuous and multi-modal latent distribution of attributes allow to effectively generate images as zero-or few-shot generation. At the best of our knowledge, we are the first ones being able to translate images in previously unseen domains. This can be extremely important in tasks that are extremely imbalanced. Finally, we show that by learning the full latent distribution of the attributes we can do attribute interpolation both intra-and inter-domains. 
In contrast, state-of-the-art methods can only do intra-domain interpolations due to their discrete domain encoding. Figure 8 shows some generated images through a linear interpolation between two given attributes, while in Appendix B.3 we show that we can also do intra-domain interpolations. We compare GMM-UNIT with three variants of the model that ablate L_cyc, L_d/rec and L_iso on the Digits dataset. Table 5 shows the results of the ablation. As expected, L_cyc is needed to have higher image quality. When L_d/rec is removed, image quality decreases, but L_iso still helps to learn the attribute space. Finally, without L_iso we observe that both diversity and quality decrease, thus confirming the need for all these losses. We refer to Appendix B.4 for the additional ablation broken down by domain. In this paper, we present a novel image-to-image translation model that maps images to multiple domains and provides a stochastic translation. GMM-UNIT disentangles the content of an image from its attributes and represents the attribute space with a GMM, which allows us to have a continuous encoding of domains. This has two main advantages: first, it avoids the linear growth of the dimension of the attribute space with the number of domains. Second, GMM-UNIT allows for interpolation across domains and the translation of images into previously unseen domains. We conduct extensive experiments in three different tasks, namely two-domain translation, multi-domain translation and multi-attribute multi-domain translation. We show that GMM-UNIT achieves quality and diversity superior to the state of the art, most of the time with fewer parameters. Future work includes the possibility of thoroughly learning the mean vectors of the GMM from the data and extending the experiments to a higher number of GMM components per domain. Our deep neural models are built upon the state-of-the-art methods MUNIT, BicycleGAN and StarGAN, as shown in Table 6 with details. We apply Instance Normalization (IN) to the content encoder E_c and Adaptive Instance Normalization (AdaIN) and Layer Normalization (LN) for the decoder G. For the discriminator network, we use Leaky ReLU with a negative slope of 0.2. We use the following notations: D: the number of domains, N: the number of output channels, K: kernel size, S: stride size, P: padding size, CONV: a convolutional layer, GAP: a global average pooling layer, UPCONV: a 2× bilinear upsampling layer followed by a convolutional layer. Note that we reduce the number of layers of the discriminator on the Digits dataset. We use the Adam optimizer with β_1 = 0.5, β_2 = 0.999, and an initial learning rate of 0.0001. The learning rate is decreased by half every 100,000 iterations. In all experiments, we use a batch size of 1 for Edges2shoes and Faces and a batch size of 32 for Digits. We set the loss weights to λ_GAN = 1, λ_s/rec = 10, λ_c/rec = 1, λ_a/rec = 1, λ_cyc = 10, λ_KL = 0.1, λ_iso = 0.1 and λ_dom = 1. We use the domain-invariant perceptual loss with weight 0.1 in all experiments. Random mirroring is applied during training. A.1 GMM In our experiments we use a simplified version of the GMM, which satisfies the following properties: • The mean vectors are placed on the vertices of a (N − 1)-dimensional regular simplex, so that the mean vectors are equidistant. • The covariance matrices are diagonal, with the same value on all the components. In other words, each Gaussian component is spherical, formally: Σ_k = σ_k I, where I is the identity matrix.
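A small sketch of the simplified attribute GMM of A.1: equidistant component means and spherical covariances Σ_k = σ_k I. Placing the means on scaled one-hot vectors is one standard way to obtain the vertices of a regular simplex and is our choice for illustration, not necessarily the authors' exact construction; sampling a domain-specific attribute and interpolating between two domains then reduce to the few lines below.

```python
import numpy as np

def simplex_means(K, scale=1.0):
    """K equidistant mean vectors: scaled one-hot vectors in R^K are the
    vertices of a regular simplex (pairwise distance scale * sqrt(2))."""
    return scale * np.eye(K)

def sample_attribute(means, sigma, domain, rng):
    """Draw z ~ N(mu_domain, sigma^2 I) for the given domain index."""
    mu = means[domain]
    return mu + sigma * rng.standard_normal(mu.shape)

def interpolate_domains(means, a, b, t):
    """Linear interpolation between the means of domains a and b (t in [0, 1]),
    giving attribute codes for 'in-between' or unseen domains."""
    return (1.0 - t) * means[a] + t * means[b]

rng = np.random.default_rng(0)
means = simplex_means(K=3, scale=2.0)                 # e.g. black / blonde / brown hair
z_blonde = sample_attribute(means, sigma=0.5, domain=1, rng=rng)
z_mix = interpolate_domains(means, a=0, b=1, t=0.5)   # halfway between two domains
```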
B ADDITIONAL RESULTS In this section, we present the additional results for the one-to-one domain translation. As shown in Figure 9, we qualitatively compare GMM-UNIT with the state of the art. We observe that while all the methods (multi-domain and not) achieve acceptable diversity, it seems that DRIT++ suffers from problems of realism. Figure 10 shows the qualitative comparison with the state of the art, while another figure shows examples of using reference images to provide attribute representations. Figure 9: Visual comparisons of state-of-the-art methods on the Edges ↔ Shoes dataset. We note that BicycleGAN, MUNIT and MSGAN are one-to-one domain translation models, while StarGAN* is a multi-domain (deterministic) model. Finally, DRIT++ and GMM-UNIT are multi-modal and multi-domain methods. In Table 9 we show the quantitative results on the CelebA dataset, broken down per domain. In Figure 11 we show some generated images in comparison with StarGAN*. Figure 12 shows the possibility of doing attribute interpolation inside a domain. In Table 10 we show additional, per-domain, ablation results on the Digits dataset. C VISUALIZATION OF THE ATTRIBUTE LATENT SPACE Figure 13 shows that the attributes sampled from the distribution and those extracted by the encoder E_z are mapped and well projected in the latent space of the attributes. Figure 13: Visualization of the attribute vectors in a 2D space via the t-SNE method. "S" refers to randomly sampling from GMM components (1: black hair, 2: blonde hair, 3: brown hair) and "E" refers to extracting attribute vectors by the encoder E_z from the real data.
GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion.
1,727
scitldr
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient, data-efficient, and requires an order of magnitude less time and/or data to achieve good . This paper considers how best to design neural networks to perform the iterative reasoning necessary for complex problem solving. Putting facts and observations together to arrive at is a central necessary ability as we work to move neural networks beyond their current great success with sensory perception tasks BID20 BID18 towards displaying Artificial General Intelligence. Figure 1: A sample image from the CLEVR dataset, with a question: "There is a purple cube behind a metal object left to a large ball; what material is it?" Concretely, we develop a novel model that we apply to the CLEVR dataset BID15 for visual question answering (VQA). VQA BID3 BID10 ) is a challenging multimodal task that requires responding to natural language questions about images. However, BID0 show how the first generation of successful models on VQA tasks tend to acquire only superficial comprehension of both the image and the question, exploiting dataset biases rather than capturing a sound perception and reasoning process that would lead to the correct answer BID27. CLEVR was created to address this problem. As illustrated in figure 1, instances in the dataset consist of rendered images featuring 3D objects of several shapes, colors, materials and sizes, coupled with unbiased, compositional questions that require an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties, without allowing any shortcuts around such reasoning. Notably, each instance in CLEVR is also accompanied by a tree-structured functional program that was both used to construct the question and reflects its reasoning procedure -a series of predefined operations -that can be composed together to answer it. Most neural networks are essentially very large correlation engines that will hone in on any statistical, potentially spurious pattern that allows them to model the observed data more accurately. In contrast, we seek to create a model structure that requires combining sound inference steps to solve a problem instance. At the other extreme, some approaches adopt symbolic structures that resemble the expression trees of programming languages to perform reasoning BID2 BID13. 
In particular, some approaches to CLEVR use the supplied functional programs for supervised or semi-supervised training BID1 BID16. Not only do we wish to avoid using such supervision in our work, but we in general suspect that the rigidity of these structures and the use of an inventory of operation-specific neural modules undermines robustness and generalization, and at any rate requires more complex reinforcement learning methods. To address these weaknesses, while still seeking to use a sound and transparent underlying reasoning process, we propose Compositional Attention Networks, a novel, fully differentiable, non-modular architecture for reasoning tasks. Our model is a straightforward recurrent neural network with attention; the novelty lies in the use of a new Memory, Attention and Composition (MAC) cell. The constrained and deliberate design of the MAC cell was developed as a kind of strong structural prior that encourages the network to solve problems by stringing together a sequence of transparent reasoning steps. MAC cells are versatile but constrained neural units. They explicitly separate out memory from control, both represented recurrently. The unit contains three sub-units: The control unit updates the control representation based on outside instructions (for VQA, the question), learning to successively attend to different parts of the instructions; the read unit gets information out of a knowledge base (for VQA, the image) based on the control signal and the previous memory; the write unit updates the memory based on soft self-attention to previous memories, controlled by the retrieved information and the control signal. A universal MAC unit with a single set of parameters is used throughout the reasoning process, but its behavior can vary widely based on the context in which it is applied -the input to the control unit and the contents of the knowledge base. With attention, our MAC network has the capacity to represent arbitrarily complex acyclic reasoning graphs in a soft manner, while having physically sequential structure. The is a continuous counterpart to module networks that can be trained end-to-end simply by backpropagation. We test the behavior of our new network on CLEVR and its associated datasets. On the primary CLEVR reasoning task, we achieve an accuracy of 98.9%, halving the error rate compared to the previous state-of-the-art FiLM model BID24. In particular, we show that our architecture yields better performance on questions involving counting and aggregation. In supplementary studies, we show that the MAC network learns more quickly (both in terms of number of training epochs and training time) and more effectively from limited amounts of training data. Moreover, it also achieves a new state-of-the-art performance of 82.5% on the more varied and difficult humanauthored questions of the CLEVR-Humans dataset. The careful design of our cell encourages compositionality, versatility and transparency. We achieve these properties by defining attention-based interfaces that constrict the cell's input and output spaces, and so constrain the interactions both between and inside cells in order to guide them towards simple reasoning behaviors. Although each cell's functionality has only a limited range of possible continuous reasoning behaviors, when chained together in a MAC network, the whole system becomes expressive and powerful. 
In the future, we believe that the architecture will also prove beneficial for other multi-step reasoning and inference tasks, for instance in machine comprehension and textual question answering. There have been several prominent models that address the CLEVR task. By and large they can be partitioned into two groups: module networks, which in practice have all used the strong supervision provided in the form of tree-structured functional programs that accompany each data instance, and large, relatively unstructured end-to-end differentiable networks that complement a fairly standard stack of CNNs with components that aid in performing reasoning tasks. In contrast to modular approaches BID1 b; BID13 BID16, our model does not require additional supervision and makes use of a single computational cell chained in sequence (like an LSTM) rather than a collection of custom modules deployed in a rigid tree structure. In contrast to augmented CNN approaches BID25 BID24, we suggest that our approach provides an ability for relational reasoning with better generalization capacity and higher computational efficiency. These approaches and other related work are discussed and contrasted in more detail in the supplementary material in section C. Compositional Attention Networks is an end-to-end architecture for question-answering tasks that sequentially performs an explicit reasoning process by stringing together small building blocks, called MAC cells, each is responsible for performing one reasoning step. We now provide an overview of the model, and a detailed discussion of the MAC cell. The model is composed of three components: an Input unit, the core MAC network, and an output unit. A TensorFlow implementation of the network, along with pretrained models will be made publicly available. In this paper we explore the model in the context of VQA. However, it should be noted that while the input and output units are naturally domain-specific and should be designed to fit the task at hand, the MAC network has been designed to be generic and more broadly applicable, and may prove useful in contexts beyond those explored in the paper, such as machine comprehension or question answering over knowledge bases, which in our belief is a promising avenue for future work. The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit, for the query and the KB, here a biLSTM and a CNN. More details can be found in the supplementary material, section A.At the end of this stage, we get from the query sub-unit a series of biLSTM output states, which we refer to as contextual words, [cw 1, ..., cw S], where S is the length of the question. In addition, we get q = [← − − cw 1, − − → cw S], the concatenation of the hidden states from the backward and forward LSTM passes. We refer to q as the question representation. Furthermore, we get from the Knowledge-Base sub-unit a static representation of the knowledge base. For the case of VQA, it will be represented by a continuous matrix KB V of dimension H, W, d, where H = W = 14 are the height and width of the transformed image, corresponding to each of its regions. The MAC network, which is the heart of our model, chains a sequence of small building blocks, called MAC cells, each responsible for performing one reasoning step. 
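Before turning to the MAC cells themselves, the question side of the input unit just described can be made concrete with the following PyTorch-style sketch; it is a minimal illustration, and all module names, the vocabulary size, and the dimension d = 512 are our own assumptions rather than the authors' released TensorFlow code.

```python
import torch
import torch.nn as nn

class QuestionInputUnit(nn.Module):
    """Sketch of the query sub-unit: a biLSTM over word embeddings.

    Returns the contextual words cw_1..cw_S and the question
    representation q (concatenation of the final backward and forward states).
    """
    def __init__(self, vocab_size, d=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)   # e.g. GloVe-sized word vectors
        self.lstm = nn.LSTM(300, d // 2, bidirectional=True, batch_first=True)

    def forward(self, question_tokens):              # (B, S) integer token ids
        w = self.embed(question_tokens)              # (B, S, 300)
        cw, (h_n, _) = self.lstm(w)                  # cw: (B, S, d) contextual words
        q = torch.cat([h_n[0], h_n[1]], dim=-1)      # (B, d) question representation
        return cw, q

# tiny smoke test with random token ids
unit = QuestionInputUnit(vocab_size=100, d=512)
cw, q = unit(torch.randint(0, 100, (2, 8)))
print(cw.shape, q.shape)                             # (2, 8, 512) and (2, 512)
```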
The model is provided access to a Knowledge Base (KB), which is, for the specific case of VQA, the given image, and then upon receiving a query, i.e. a question, the model iteratively focuses, in p steps, on the query's various parts, each reflects in turn the current reasoning step, which we term the control. Consequently, guided by this control, it retrieves the relevant information from the KB, that is then passed to the next cell in a recurrent fashion. Drawing inspiration from the Model-View-Controller paradigm used in software design and from the commonly exercised separation between control and data paths in computer architecture, the MAC cell is composed of three units: control unit, read unit and write unit. Each has a clearly defined role and an interface through which it interacts with the other units. See figure 2.The careful design and imposed interfaces that constrain the interaction between the units inside the MAC cell, as described below, serve as structural prior that limits the space of hypotheses it can learn, thereby guiding it towards acquiring the intended reasoning behaviors. As such, this prior facilitates the learning process and mitigate overfitting issues. In particular, and similar in spirit to BID24, we allow the question to interact with the Knowledge Base -the image for the case of VQA, only through indirect means: by guiding the cell to attend to different elements in the KB, as well as controlling its operation through gating mechanisms. Thus, in both cases, the interaction between these mediums, visual and textual, or knowledge and query, is mediated through probability distributions, either in the form of attention maps, or as gates, further detailed below. This stands in stark contrast to many common approaches that fuse the question and image together into the same vector space through linear combinations, multiplication, or concatenation. Rather, our controlled interaction distills the influence that the query should have in processing the Knowledge Base, casting it onto discrete probability distributions instead. The MAC cell has been designed to replace the discrete and predefined "modules" used in the modular approach BID1 b; BID13 BID16. Rather, we create one universal and versatile cell that is applied across all the reasoning steps, sharing both its architecture as well as its parameters, across all of its instantiations. In contrast to the discrete modules, each trained to specialize to some specific elementary reasoning task, the MAC cell is capable of demonstrating a continuous range of possible reasoning behaviors conditioned on the context in which it is applied -namely, the inputs it receives from the prior cell. Each cell M AC i maintains two dual states: control c i and memory m i, both are continuous vectors of dimension d. The control c i represents the reasoning operation the MAC cell should accomplish in the current step -focusing only on some aspect of the whole question. This is represented by a weighted-average attention-based sum of the question words. The memory m i represents the current context information deemed relevant to respond to the query, or answer the question. This is represented practically by a weighted average over elements from the KB, or for the case of VQA, regions in the image. m 0 and c 0 are initialized each to a random vector parameter of dimension d. 
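The recurrent chaining of p identical cells and the initialisation of c_0 and m_0 as random d-dimensional parameter vectors can be summarised in a short skeleton. This is a sketch under our own naming assumptions; the cell stub below is filled in by the unit sketches that follow.

```python
import torch
import torch.nn as nn

class MACCellStub(nn.Module):
    """Placeholder for one MAC cell; the control, read and write sub-units
    are sketched in the following passages."""
    def __init__(self, d):
        super().__init__()
        self.d = d
    def forward(self, c, m, cw, q, kb, step):
        return c, m                                  # identity stand-in

class MACNetwork(nn.Module):
    """Recurrent skeleton: one cell with shared weights applied for p steps."""
    def __init__(self, d=512, p=12):
        super().__init__()
        self.p = p
        self.c0 = nn.Parameter(torch.randn(d))       # random initial control
        self.m0 = nn.Parameter(torch.randn(d))       # random initial memory
        self.cell = MACCellStub(d)                   # single cell, shared across steps

    def forward(self, cw, q, kb):
        B = q.size(0)
        c, m = self.c0.expand(B, -1), self.m0.expand(B, -1)
        for i in range(self.p):                      # p chained reasoning steps
            c, m = self.cell(c, m, cw, q, kb, step=i)
        return m                                     # final memory m_p feeds the output unit
```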
The memory and control states are passed from one cell to the next in a recurrent fashion, and used in a way reminiscent of Key-Value memory networks BID22, as discussed below. The control unit determines the reasoning operation that should be applied at this step. It receives the contextual words [cw 1, ..., cw S], the question representation q, and the control state from the previous MAC cell c i−1, all of which are vectors of dimension d. We would like to allow our MAC cell to perform continuously varied and adaptive range of behaviors, as demanded by the question. Therefore, we define the behavior of each cell to be a function of the contextual words [cw 1, ..., cw S], weighted-averaged according to the attention distribution that the control unit produces at each step. This will allow the cell to adapt its behavior -the reasoning operation it performs -to the question it receives, instead of having a fixed set of predefined behaviours as is the case in competing approaches BID1 b); BID16.The formal specification of the control unit is shown in FIG1. The question q is linearly transformed into a vector q i of the same dimension, which in turn is concatenated with the previous control state c i−1 and linearly transformed again to a d-dimensional vector cq i. DISPLAYFORM0 Note that in contrast to all other parameters of the cell, which are shared across its instantiations at the different steps i = 1,..., p, the parameters W is done to allow each cell to attend more readily to different aspects (i.e. parts) of the questions, depending on the index of the current step -its relative stage in the context of the whole reasoning process.cq i represents the current reasoning operation we would like to perform in a continuous way, taking into account both the overall meaning of the question q i, as well as the words the model attended to in the previous step, c i−1.However, we would like to prevent the cell from diverging in the reasoning operations it tries to perform, and instead anchor it back in the question words, by using them to represent the reasoning operation of the current step. We can achieve that by computing an attention distribution cv i over the contextual words [cw 1, ..., cw S] based on their similarity to cq i. Then, summing the contextual words according to the attention distribution cv i will allow us to have a new control state, c i, which is represented again in terms of words from the question. Intuitively, it is the gist of the question that is relevant to the reasoning operation we would like to perform in the current step. DISPLAYFORM1 Finally, the control unit returns the current control state c i, along with an attention map cv i over the contextual words. The Read Unit is provided with access to the knowledge base KB V, along with the previous memory state m i−1 and the current control c i. It is responsible for retrieving relevant content from the Knowledge Base KB V for the reasoning task that the MAC cell should accomplish at this step, which is represented by the current control state c i, as explained above. FIG2 shows a diagram. The relevance of the new information is judged in two stages by the "relatedness" of each element in the KB (or for the case of VQA, each region in the image) to either the memory m i−1 that has accumulated relevant information from previous iterations, or to the current control c i, pointing towards the next piece of information that should be taken into account. 
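Before detailing how this relatedness is measured (next passage), the control unit specified above admits a compact sketch. The per-step projection of q, the fusion with the previous control, and the attention-weighted sum over contextual words follow the description; the exact scoring function (elementwise product followed by a linear layer) is one plausible choice and not necessarily the authors' formulation.

```python
import torch
import torch.nn as nn

class ControlUnit(nn.Module):
    """Sketch of the control unit: attends over the question words and
    returns a new control state expressed in terms of those words."""
    def __init__(self, d=512, p=12):
        super().__init__()
        # position-aware transformation of q: separate weights per step i
        self.q_proj = nn.ModuleList([nn.Linear(d, d) for _ in range(p)])
        self.cq_proj = nn.Linear(2 * d, d)
        self.score = nn.Linear(d, 1)

    def forward(self, c_prev, q, cw, step):
        q_i = self.q_proj[step](q)                            # (B, d)
        cq = self.cq_proj(torch.cat([q_i, c_prev], dim=-1))   # (B, d)
        logits = self.score(cq.unsqueeze(1) * cw).squeeze(-1) # (B, S) similarity to cq_i
        cv = torch.softmax(logits, dim=-1)                    # attention over question words
        c_new = (cv.unsqueeze(-1) * cw).sum(dim=1)            # weighted sum of cw_s
        return c_new, cv
```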
Here, relatedness is measured by trained linear transformations comparing each element to the previous memory and the current control. More formally, at the first stage, the interaction between each element KB h,w, where h = 1,..., H, w = 1,..., W, and the previous memory m i−1 is computed by: These memory-KB interactions measure the relatedness of each element in the KB to the memory accumulated so far, which holds information that has been deemed relevant to handle previous reasoning steps towards addressing the question. They allow the model to perform transitive inference, retrieving a new piece of information that now seems important in light of the recent memory retrieved in a prior iteration. DISPLAYFORM0 However, there are cases which necessitate the model to temporarily ignore current memories, when choosing the new information to retrieve. Logical OR is a classical example: when the model has to look at two different objects at the same time, and assuming it stored one of them at the first iteration, it should briefly ignore it, considering new information that is relevant to the question but is unrelated to the memory. In order to achieve such capability, the read unit concatenates the original KB elements to each corresponding memory-KB interaction, which are then projected back to d-dimensional space (equation 6a): DISPLAYFORM1 At the second stage, the read unit compares the current c i with these memory-KB interactions, in order to focus on the information that is relevant to the current reasoning operation that the MAC cell seeks to accomplish. The is then passed to a softmax layer yielding an attention map mv i over the KB, which is used in turn to retrieve the relevant information to perform the current reasoning step. DISPLAYFORM2 Finally, the read unit returns the newly retrieved information m new, along with an attention map mv i over the Knowledge Base. To give an example of the read unit operation, assume a given question q such as "What object is located left to the blue ball?", whose associated answer is "cube". Initially, no cue is provided to the model to attend to that cube, since no direct information about it presents in the question. Instead, based on its comprehension of the question, the model may start by focusing on the blue ball at the first iteration, such that the memory state m 1 will capture the blue ball. However, in the second iteration, the control unit, after re-examining the question, may realize it should now look left, storing the word "left" in c 2. Then, when considering both m 1 and c 2, the read unit will realize it should perform a reasoning operation corresponding to the word "left" (stored in c 2) given a memory representing the blue ball in m 1, thereby allowing it to look left to the blue ball and find the cube. The Write Unit is responsible for creating the new memory state m i that will reflect all the information considered to be important to answer the question so far, i.e. up to the current iteration in the In the main design we have explored, merging the new information with the previous memory state is done simply by a linear transformation. DISPLAYFORM0 In addition, we have explored two variations of this design. The first, self-attention, allows considering any previous memories rather than just the last one m i−1, thus providing the network with the capacity to model non-sequential reasoning processes. The second variation is adding gating mechanisms to the writing unit. 
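Before the write-unit variations discussed next, the two-stage read unit described above can be sketched as follows. Flattening the H x W image grid into N regions, the exact parameterisation of the interactions, and all names are our own assumptions based on the textual description.

```python
import torch
import torch.nn as nn

class ReadUnit(nn.Module):
    """Sketch of the read unit: relate each KB region to the previous memory,
    re-attach the raw KB features, then score against the current control."""
    def __init__(self, d=512):
        super().__init__()
        self.mem_proj = nn.Linear(d, d)
        self.kb_proj = nn.Linear(d, d)
        self.concat_proj = nn.Linear(2 * d, d)       # [interaction ; KB] -> d
        self.score = nn.Linear(d, 1)

    def forward(self, m_prev, c, kb):
        # kb: (B, N, d) with N = H * W flattened image regions
        interact = self.mem_proj(m_prev).unsqueeze(1) * self.kb_proj(kb)   # stage 1
        # concatenate the original KB elements so OR-style questions can ignore memory
        interact = self.concat_proj(torch.cat([interact, kb], dim=-1))
        logits = self.score(c.unsqueeze(1) * interact).squeeze(-1)         # stage 2
        mv = torch.softmax(logits, dim=-1)            # attention map over the KB
        m_new = (mv.unsqueeze(-1) * kb).sum(dim=1)    # newly retrieved information
        return m_new, mv
```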
These may allow the model to dynamically adjust the practical length of the computation to the question complexity and stabilize the memory content throughout the sequential network (similarly to GRUs and LSTMs).Self-Attention. The current architecture that we have presented allows the model to perform reasoning steps in a sequence, passing control and memory states from one cell to the following. However, we would like to grant the system with more flexibility. Particularly, we would like to allow it to capture more complicated reasoning processes such as trees and graphs -Directed Acyclic Graph (DAG) in particular, where several branches of reasoning sub-processes are merged together in later stages. Indeed, the CLEVR dataset includes cases where the questions embody tree-like reasoning process, rather than just sequences, which we would like to address correctly in our model. We achieve that by adding self-attention connections between each MAC cell and all the prior cells. Since each cell can look on all the prior reasoning steps and their corresponding memories retrieved from the Knowledge Base, it can virtually capture any directed acyclic graph, while still having physically sequential layout. More formally, the current MAC cell, of the i th iteration, is granted with access to c 1,..., c i−1 along with the corresponding m 1,..., m i−1, that have been computed by the prior MAC cells. It begins by computing the similarity between c i and c 1,..., c i−1, and use it to derive an attention map over the prior MAC cells sa i,j for j = 1,..., i − 1. This represents the relevance of the j th prior reasoning step to the current one i (equation 9a).Then, we average the previous memories according to this ed attention map sa ij. We obtain m sa, representing the information from all the other reasoning steps that is relevant to the current one (equation 9b).This resembles the approach of Key-Value networks BID22. The similarity between control states, corresponding to the reasoning operations that are performed in each prior step, allows the model to select which memories should be taken into account, when creating the new memorynamely, which branches of the reasoning process should be merged together at this point. DISPLAYFORM1 Finally, we use m sa along with m i to compute m i, the new memory content in this variation. DISPLAYFORM2 Memory Gate. The currently presented MAC network has some fixed number p of concatenated MAC cells, representing the length of the overall reasoning process we perform. However, not all questions require reasoning sequence of the same length. Some questions are simpler while others more complex. Motivated by this observation, we add a gate over the new memory computed at each step, that may selectively keep content of the previous memory m i−1 unchanged. Practically, the gate functions in a similar way to a highway network BID26, where the gate value is conditioned on the current reasoning operation, c i. DISPLAYFORM3 The write unit returns the new memory state m i, that will be passed along with c i to the next MAC cell. Overall, when designing the MAC cell, we have attempted to formulate the inner workings of an elementary, yet generic reasoning skills: the model decomposes the problem into steps, focusing on one at a time. 
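The write unit and its two variations, self-attention over prior steps and the highway-style memory gate, might be sketched as below. The gate direction follows the gate-bias ablation reported later in the text (a bias of 1 favours using the new memory); the exact merging layers are our own guesses.

```python
import torch
import torch.nn as nn

class WriteUnit(nn.Module):
    """Sketch of the write unit: linear merge, optional self-attention over
    prior (control, memory) pairs, and an optional control-conditioned gate."""
    def __init__(self, d=512, self_attention=True, memory_gate=True, gate_bias=1.0):
        super().__init__()
        self.merge = nn.Linear(2 * d, d)             # m_i = W [m_new ; m_{i-1}]
        self.self_attention, self.memory_gate = self_attention, memory_gate
        if self_attention:
            self.sa_score = nn.Linear(d, 1)
            self.sa_merge = nn.Linear(2 * d, d)
        if memory_gate:
            self.gate = nn.Linear(d, 1)
            nn.init.constant_(self.gate.bias, gate_bias)   # cf. the gate-bias ablation

    def forward(self, m_new, m_prev, c, prev_cs=None, prev_ms=None):
        m = self.merge(torch.cat([m_new, m_prev], dim=-1))
        if self.self_attention and prev_cs:
            cs, ms = torch.stack(prev_cs, 1), torch.stack(prev_ms, 1)      # (B, i-1, d)
            sa = torch.softmax(self.sa_score(c.unsqueeze(1) * cs).squeeze(-1), dim=-1)
            m_sa = (sa.unsqueeze(-1) * ms).sum(dim=1)      # relevant prior memories
            m = self.sa_merge(torch.cat([m_sa, m], dim=-1))
        if self.memory_gate:
            g = torch.sigmoid(self.gate(c))                # conditioned on the control c_i
            m = g * m + (1.0 - g) * m_prev                 # g near 1 favours the new memory
        return m
```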
At each such step, it takes into account:• The control c i: Some aspect of the task -pointing to the future work that has left to be done.• The previous memory or memories: The partial solution or evidence the cell has acquired so far -pointing to the past work that has already been achieved.• The newly retrieved information m new: that is retrieved from the knowledge base KB and may or may not be transitively related to that partial solution or evidence -the present, or current work. Considering these three sources of information together, the cell finally adds the new information up into its working memory, m i, progressing one more step towards the final answer. The output unit receives the question representation q, along with the memory state passed from the last MAC cell m p, where p is the number of MAC cells in the network -representing the number of reasoning steps in the whole process. It inspects both and predicts an answer based on their concatenation. Intuitively, we would like our model to consider both the question as well as the relevant information that has been progressively retrieved from the KB, deemed the necessary information to answer it. Note that considering both q and m p is critical to answer the question. While m p represents the information collected from KB, we still need to recall what has been asked about it to be able to answer accordingly. This is especially true in our case, when all other interactions between the question and the KB are mediated through attention distributions, rather than being transformed into a shared continuous vector space. The prediction is built out of a standard 2-layers fully-connected softmax-based classifier with hidden dimension d and output dimension that matches the number of possible answers in the dataset. The classifier receives [m p, q] as input and returns a probability distribution over the answers. We evaluate our model on the recent CLEVR dataset BID15. CLEVR is a synthetic dataset consisting of 700K tuples; each consists of a 3D-rendered image featuring objects of various shapes, colors, materials and sizes, coupled with compositional multi-step questions that measure performance on an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties. In addition, each question is associated with a formal program, specifying the reasoning operations that should be performed to compute the answer, among 28 possibilities. We first perform experiments on the original 700k CLEVR dataset BID15, comparing to prior work. As shown in table 1, our model matches or outperforms all existing models both in overall accuracy, as well as in each category, testing different reasoning skills. In particular, for the overall performance, we achieve 98.94% accuracy, more than halving the error rate of the prior best model, FiLM BID24.Counting and Numerical Comparison. Remarkably, our performance on questions testing counting and numerical comparisons is significantly higher than the competing models, which consistently struggle on this question type. Again, we nearly halve the corresponding error rate. These demonstrate the aptitude of attention mechanisms to perform counting, reduction and aggregation, in contrast to alternative, CNN-based approaches. Training Length and Computational-Efficiency. We examine the learning curves of our and competing models. We have trained all models on the same architecture and used the author code for the other models. 
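For completeness, the output unit described earlier in this passage, a two-layer classifier over the concatenation of the final memory m_p and the question representation q, reduces to a few lines. The hidden activation and the answer-vocabulary size are assumptions on our part.

```python
import torch
import torch.nn as nn

class OutputUnit(nn.Module):
    """Sketch of the output unit: 2-layer classifier over [m_p ; q]."""
    def __init__(self, d=512, num_answers=28):       # answer-vocabulary size assumed
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * d, d),
            nn.ELU(),                                 # ELU is used elsewhere in the model
            nn.Linear(d, num_answers),
        )

    def forward(self, m_p, q):
        return self.classifier(torch.cat([m_p, q], dim=-1))   # answer logits
```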
Aiming at having equal settings for comparison, we ran all models including ours with learned random words vectors. In order to make sure the are statistically significant we ran each model multiple times, and plotted the averages and confidence intervals (figure 4). The show that our model learns significantly faster than the other leading methods, FiLM BID24 and PG+EE BID16. While we do not have learning curves for the Relational Network model, BID25 report approximately 1.4 million iterations to achieve 95.5% accuracy, which are equivalent to 125 epochs approximately, whereas our model achieves a comparable accuracy after 3 epochs only, yielding 40x reduction in the length of the training process. Naturally, the smaller number of required training steps also translates to comparably shorter training time. BID24 report training time of 4 days, equivalent to 80 epochs, to reach accuracy of 97.7%. In contrast, we achieve higher accuracy in 6 epochs, taking 9.5 hours overall, leading to 10x reduction in training time. Right: Learning curve: Accuracy for 1%, 2%, 5%, 10%, 25%, 50% and 100% of the 700k CLEVR samples. BID16 27.5 36.5 CNN+LSTM BID16 37.7 43.2 CNN+LSTM+SA+MLP BID15 50.4 57.6 PG+EE (18K prog.)* BID16 54.0 66.6 CNN+GRU+FiLM BID24 56 We have explored the performance of our and other leading approaches on smaller subsets of the CLEVR dataset, in order to study the ability of models to generalize from smaller amount of data. We sampled at random subsets of CLEVR, with 10%, 25% and 50% of its original 700k size, and used them to train our and other 3 proposed models for the CLEVR task: FiLM BID24, the strongly-supervised PG+EE BID16, and stacked-attention networks BID16.As shown in figure 4, our model outperforms the other models by a wide margin for all subsets of the CLEVR dataset. For 50% of the data, equivalent to 350k samples, other models obtain accuracies ranging between 70% and 92%, while our model achieves 97.9%. The gap becomes larger as the dataset size reduces: for 25% of the data, equivalent to 175k samples, performance of other models is between 50% and 77%, while our model maintains a high 95.4% accuracy. Finally, for 10% of the data -70k samples, still a sizeable amount -our model is the only one that manages to generalize, with performance of 84.7% on average, whereas the other three models fail, achieving 47.6%-57.5%. Note that as pointed out by BID15 a simple baseline that predicts the most frequent answer for each of the question types achieves already 42.1%, suggesting that answering half of the questions correctly means that the competing models barely learn to generalize from the smaller dataset. These demonstrate the robustness of our architecture and its key role as a structural prior guiding our network to learn the intended reasoning skills. We analyze our model performance on the CLEVR-Humans dataset BID16, consisting of natural language questions collected through crowdsourcing. As such, the dataset has diverse vocabulary and linguistic variations, and it also demands more varied reasoning skills. Since the training set is relatively small, consisting of 18k samples, we use it to finetune a model pretrained on the standard CLEVR dataset. However, since most of the vocabulary in CLEVRHumans is not covered by CLEVR, we do not train the word vectors during the pre-training stage, so to prevent drift in their meaning compared to other uncovered words in CLEVR-Humans that may be semantically related. 
As shown in table 2, our model achieves state-of-the-art performance on CLEVR-Humans both before and after fine-tuning. It surpasses the next-best FiLM model, BID24 by 6.6% percent, achieving 82.5%.The substantiate the model's robustness against linguistic variations and noise, as well as its ability to adapt to diverse vocabulary and varied reasoning skills. Arguably, the soft attention performed over the question words allows the model to focus on the words that are most critical to answer the question and translate them to corresponding reasoning operations, giving less attention to irrelevant linguistic variations. Based on the validation set, we have conducted an ablation study on our model to understand better the contribution of each of its component to the overall performance. We tested each setting on the standard 700K CLEVR dataset as well as on 10% subset of the dataset. See table 3 for the numerical . In addition, figure 4.3 presents the training curves for the different settings trained on the standard dataset. Overall, the demonstrate the robustness of the model to hyperparameter variations such as network dimension and length, and also the impact of different aspect and components of MAC on its performance. Network Length. We have tested the model performance as a function of the network's lengththe number of MAC cells that were sequenced together. The show the positive correlation between the network length and its performance. We can see that for 1 cell the scores are relatively low -75%, but adding at least one more cell leads to a significant increase in performance above 95%. The performance keeps improving up to lengths 8-16 that achieve 98.9-99.1%. The also teach us about the complexity of the dataset, by showing the relatively significant benefits of having at least 4 cells, each modeling a reasoning step. Network Dimension. We have varied the state dimension to check the robustness of the model to hyperparameters. The on the standard CLEVR dataset show the model is able to maintain high performance with dimension of 128, albeit after a longer training process, achieving 97.6%, compared to 98.94% achieved with dimension of 512. However, for 10% of CLEVR, the larger 512-dimension allows accuracy increase by 7.5% over dimension of 128.Weight Sharing. We have tested the impact of sharing weights between cell has on the model performance for network of length p = 12. The show that for the standard dataset there is only a small difference between these settings of 1%. However, for less data, we see much more significant drop 16.9% in the unshared-parameters setting compared to the shared one. Indeed, we observe that a model with less parameter is more data-efficient and has a lower tendency to overfit the data. Control Unit. We have performed several ablations in the control unit to understand its contribution to the overall model performance. Based on the , first, we can see the the question information is crucial for the model to handle the questions, as can be noted by the low performance of the model when there is no use of control signal whatsoever. Second, we have tested the model performance when using the continuous control state computed by question in section 3.2.1, without having word-attention, in order to understand its relative contribution. 
Based on the , we can indeed see that using word-attention is useful for accelerating the training process and achieving higher accuracies both for the standard dataset as well as for the small subset, where using word-attention increases in 21.4%. We also see that using the "contextual words" produced by the questionunit LSTM is useful in accelerating the model performance, when compared to using the wordvectors directly. Reading Unit. We have conducted several ablations for the reading unit to better understand its behavior and contribution to the performance of the model. The standard MAC reading unit uses the control state -which averages the question words based on attention distributions computed per each reasoning step. In this ablation experiment, we have tested using the full question representation q instead across all reasoning steps to gain better understanding of the the contribution of wordattention to the model performance. Indeed, we can see that using q rather then the control state c i in a significant drops in performance -19.4% for the full CLEVR dataset and 19.5% for 10% of the data. We have conducted additional ablation experiment to better understand the contribution of using the KB features directly in the first-stage information retrieval process described in section 3.2.2, compared to using only the dot-products of the KB elements with the previous memory state m i−1. For the full CLEVR dataset, we can see that this component has only a small impact in the final performance -ultimately ing in 0.06% performance difference. However, for the 10% of the data, we can see that the difference in performance when ablating this component is much larger -11.2%.Writing Unit Ablations. In our main MAC model variant, the memory unit merges the new information m new with the previous memory state m i−1 by combining them through a linear transformation. In this experiment, we have explored other variations, such as assigning m new to m i directly -ignoring previous memories, or doing a linear transformation based on m new only. The show that in fact such variant is only slightly worse than our main variant -0.4%. We also conducted an experiment in which we merge the new information with the previous memory just by a having a gate that does a weighted average of them. The show that this variant performs equivalently to our standard linear-transformation variant. Writing Unit Additions. We have explored the impact of the writing unit variants described in section 3.2.3 -adding self-attention, gating mechanisms, or both, compared to our standard main model that uses a linear transformation to merge the newly retrieved information m new with the previous memory m i. For the complete CLEVR dataset we can see that indeed both these variants are very helpful in increasing the model performance. Compared to our standard MAC model that achieves 98.94% on the validation set, self-attention yields accuracy of 99.23%, gating yields 99.36% and adding both achieves 99.48%.Output Unit. In our standard model, the final predictions made in the output unit are based on the final memory state m p as well as question representation q (stands for the final hidden states of the backward and forwards passes of the LSTM). We have explored the contribution of basing the model prediction on the latter, by testing the model performance when prediction is based on memory alone, for the complete and 10% datasets. 
We can see that in both settings basing the model's predictions on the question representation allows faster training and higher accuracies. Notable is the gap in performance for the 10% CLEVR -19.8% increase by using the question representation to make predictions. These are very reasonable intuitively, since the model is structured such that the memory holds only information that was retrieved from the image. Thus, questions that may ask for instance on different aspects (such as color or shape) of the same object in the image may in the same memory content, which is thus does not directly contain enough information to respond such questions. Position. In our standard model, similarly to the practice of competing models BID25 BID24 BID13, we have concatenated positional information to each region of the image, in order to increase the model capability to perform spatial reasoning. We have explored both simple linear maps at a constant [−1, 1] as well as more complex positional encoding suggested by BID28. However, the for both the standard dataset and the 10% version show a very negligible improvement at best when adding positional encoding information, demonstrating the capability of MAC to perform spatial reasoning without data augmentation. Gate Bias Initialization. For our model variant with gating mechanism (described in section 3.2.3) we have tested the effect of setting different values for the gate bias -−1, 0 and 1. for −1 the model is initialized to biased for keeping the previous memory value whereas for 1 it will be biased for using the new memory instead. We can see that for the complete dataset setting the bias to 1 is optimal -apparently since the model has enough data to learn to apply each cell effectively. In contrast, for the small 10% CLEVR data, setting the bias to 0 shows better performance, biasing the model to using less cells overall which ultimately in a theoretically-simpler model that can fit less data more effectively. We have looked into attention maps over the image and question that the model produces during its computation and provide a few examples in FIG2.4. The first example shows us how the model parses the question in steps, first focusing on the main entity that the question is about, then on relation of this entity to the "brown matte thing" which is then located in the image. Finally, the model correctly focuses on the small brown cube and predicts the right answer -brown. The second example shows a model with 4 cells instead of 6, that similarly parse the question in iterations and focuses on the relevant objects at each step, though we can see that the reasoning process looks somewhat different when the MAC network has fewer cells. The last example shows how how the model handles counting and OR operations. It starts from identifying the task -computing a number, and then red objects as well as the cylinder, one at a time, allowing it ultimately to respond correctly, with the answer 2. We have given a first demonstration of how a sequence of Memory, Attention and Control (MAC) cells combined into a Compositional Attention Network provides a very effective tool for neural reasoning. In future work, we wish to explore this promising architecture for other tasks and domains, including real-world VQA, machine comprehension and textual question answering. The input unit processes the raw inputs given to the system into distributed vector representations. 
It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit. Here we provide details of the Query Unit and the Image Unit used in this work. We encode a query of S words into a continuous representation using a bidirectional LSTM BID12 BID7. Each word is associated with a word embedding w s, where s = 1,..., S. In our case, we use GloVE words embeddings BID23. Then, these embeddings are processed by a bidirectional LST M of dimension d that outputs:• a matching sequence of d-dimensional output states, which we refer to as contextual words, DISPLAYFORM0, the concatenation of the hidden states from the backward and forward passes. We refer to q as the question representation. Intuitively, each contextual word cw s represents the meaning of s th word, in the context of the question, while the hidden state q represents the overall (compositional) meaning of the question. Given an image, and following prior work on CLEVR (? BID25 BID24, we extract conv4 features from ResNet101) pretrained on ImageNet BID18 which we treat as a fixed initial representation of the image, x of dimension H, W, C where H = W = 14 are the height and width of the transformed image and C = 1024 is the number of channels. Each feature x h,w represents one region in the original image. Similar to prior work BID13 BID25 BID24, we would like to allow our model to reason explicitly about spatial locations, as required by many of the questions in CLEVR, and therefore we concatenate to this representation a spatial map that represents each of the positions in the image. However, in contrast to prior work that uses a linear meshgrid feature map with 2 features h and w ranging from −1 to 1, and to allow better representation of the positions, we use the positional encoding scheme proposed by BID28: DISPLAYFORM0 And similarly for w, where p d is a hyperparameter. Overall, the positional encoding of a feature at position (h, w) is [p h, p w], the concatenation of the positional encodings for h and w. This positional encoding scheme allows better correspondence between the distance of 2 positions (x, y) and (x, y) in the image and a vector similarity of their positional encodings, even when p d is larger than two. We then concatenate the obtained spatial map with x, receiving a spatially-aware image representation x p. Then, we pass this representation through two CNN layers with d output channels and obtain a final representation of the image, which we refer to as our Visual Knowledge Base (KB V that is used in further components of the model. For the question processing, we use GloVE BID23 word-vectors with dimension 300. For the image processing, we extract conv4 features from ResNet101 pretrained on ImageNet BID18, with dimension H, W, C where H = W = 14 and C = 1024, followed by 2 CNN layers with kernel size 2. We use MAC network with p = 12 cells, and train it using Adam BID17, with learning rate 10 −4 . We train our model for 10 − 20 epochs, with batch size 64, and use early stopping based on validation accuracies. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.999. At test time, the moving averages instead of the raw weights are used. We use dropout 0.85, and ELU BID6 which in our experience has reduce the training process compared to RELU.The training process takes roughly 10-20 hours on a single Titan X GPU. 
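The training hyperparameters stated above can be collected into a small configuration, and the reported use of exponential moving averages of all weights at test time admits a minimal helper. The helper below is our own sketch of that procedure, not the authors' code.

```python
import torch

# Hyperparameters as stated in the text.
config = dict(num_cells=12, dim=512, batch_size=64, lr=1e-4,
              max_epochs=20, dropout=0.85, ema_decay=0.999)

class WeightEMA:
    """Keeps an exponential moving average of a model's parameters,
    to be swapped in at evaluation time."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone() for n, p in model.named_parameters()}

    def update(self, model):
        for n, p in model.named_parameters():
            self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

    def copy_to(self, model):
        for n, p in model.named_parameters():
            p.data.copy_(self.shadow[n])
```

After each optimiser step one would call `ema.update(model)`, and call `ema.copy_to(model)` before running validation or test evaluation.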
In this section we provide detailed discussion of related work. Several models have been applied to the CLEVR task. These can be partitioned into two groups, module networks that use the strong supervision provided as a tree-structured functional program associated with each instance, and end-to-end, fully differentiable networks that combine a fairly standard stack of CNNs with components that aid them in performing reasoning tasks. We also discuss the relation of MAC to other approaches, such as memory networks and neural computers. The modular approach BID1 b; BID13 BID16 first translates the given question into a tree-structured action plan, aiming to imitate the ground-truth programs provided as a form of strong-supervision. Then, it constructs a tailor-made network that executes the plan on the image in multiple steps. This network is composed of discrete units selected out of a collection of predefined modules, each responsible for an elementary reasoning operation, such as identifying an objects color, filtering them for their shape, or comparing two amounts. Each module has its own set of learned parameters BID16, or even hand-crafted design BID1 to guide it towards its intended behavior. Overall, this approach makes discrete choices at two levels: the identity of each module -the behavior it should learn among a fixed set of possible types of behaviors, and the network layout -the way in which these modules are wired together to compute the answer progressively. Hence, their differentiability is confined to the boundaries of a single module, disallowing end-to-end training. Several key differences exist between our approaches. First, our model replaces the fixed modules collection with one versatile and universal cell that shares both its architecture and parameters across all of its instantiations, and is applied across all the reasoning steps. Second, it replaces the dynamic recursive tree structures with a sequential topology, augmented by soft attention mechanisms, as done in BID5 . This confers our network with a virtual capacity to represent arbitrarily complex Directed Acyclic Graphs (DAGs) while still having efficient and readily deployed physical sequential structure. Together, both of these relaxations allow us to effectively train our model end-to-end by backpropagation alone, whereas module networks demand a more involved training scheme that relies on the strongly-supervised programs at the first stage, and on various Reinforcement Learning (RL) techniques at the second. Furthermore, while our model can be train without the strong supervisory programs, developing adaptive reasoning skills to address the task is it trained for, the modular approach reliance on questions structured and formal representation hinder its applicability to real-world tasks. Alternative approaches for the CLEVR task that do not rely on the provided programs as a strong supervision signal are BID25 and BID24. Both complement standard multi-layer Convolutional Neural Networks (CNNs) with components that aid them in handling compositional and relational questions. Relational Networks. BID25 appends a Relation Network (RN) layer to the CNN. This layer inspects all pairs of pixels in the image, thereby enhancing the network capacity to reason over binary relations between objects. While this approach is very simple and elegant conceptually, it suffers from quadratic computational complexity, in contrast to our and other leading approaches. 
But beyond that, closer inspection reveals that this direct pairwise comparison might be unnecessary. Based on the analogy suggested by BID25, according to which pixels are equivalent to objects and their pairwise interactions to relations, a RN layer attempts to grasp the induced graph between objects all at once in one shallow and broad layer. Conversely, our attention-based model proceeds in steps. It basically compares the image to its current memory and control for this step, aggregates the attended regions into the new memory, and repeats the process. By the same analogy, it traverses a narrow and deep path, progressively following transitive relations. Consequently, our model exhibits a relational capacity while circumventing the computational inefficiency. FiLM. FiLM BID24 ) is a recently proposed method that interleaves standard CNN layers that process the given image with linear layers, reminiscent of layer normalization techniques BID4 BID14. Each of these layers, called FiLM, is conditioned on the question: the question words are processed by a GRU, and its output is linearly transformed into matching biases and variances for each of the CNN layers, tilting its activations to reflect the specifics of the given question and affect the computation done over the image. Similarly to our model, this approach features distant modulation between the question and the image, where rather than being fused together into the same vector space, the question can affect the image processing only through constrained means -for the case of FiLM -linear transformations. However, since the same transformation is applied to all the activations homogeneously, agnostic to both their spatial location as well as the features values, this approach does not allow the question to differentiate between regions in the image based on the objects or concepts they represent -on the content of the image. This stands in stark contrast to our attention-based model, which readily allows and actually encourages the question to inform the model about relevant regions to focus on. We speculate that this still distant, yet more direct interaction between the question and the data, or image, for the case of VQA, facilitates learning and increases generalizability. It may be more suitable to VQA tasks, and CLEVR in particular, where the questions demand the responder to focus on specific objects, and reason about their properties or relations, rather than respond based only on a holistic view of the image that may lead to sub-optimal , as is the case of FiLM. Indeed, as demonstrated in 4, there is significant evidence showing our models better generalization capacity, allowing it to achieve high accuracies much faster, and from less data than FiLM and other competing methods. Our architecture draws inspiration from recent research on memory and attention BID19 BID29 BID8. BID19; BID29 propose the Dynamic Memory Network model that proceeds in an iterative process, applying soft attention to retrieve relevant information from a visual or textual KB, which is in turn accumulated into memory passed from one iteration to the next. However, in contrast to our model, it views the question as an atomic unit, whereas our model decomposes it into a multi-step action plan informing each cell in our sequential network about its current objective. Another key difference is the distant interaction between the question and the KB that characterizes our model. 
Conversely, DMN fuses their corresponding representations together into the same vector space. BID9 complements a neural network with a memory array it can interact with, through the means of soft attention. Analogously to our model, it partitions the model into a core neural network, called controller, as well as reading and writing heads that interact with external memory array. However, a main point distinguishing our model from this approach, is the use of dynamic memory, as in BID19, instead of a fixed-array memory. Each MAC cell is associated with a memory state, our reading unit inspects only the latest memory passed from the previous state, and our writing unit creates a new memory state rather than writing to multiple slots in a fixed shared external memory. Notably, our approach is much more reminiscent of the widely successful RNN structure, rather than to BID9. Finally, our approach has potential ties to the VQA models BID13; BID21 which also attend both the to question words and the image while progressively addressing the given question. However, both of these models have distinct specialized designs for each of their attention layers or modules, and have a discrete or fixed layout in which they are composed together. In contrast, our approach relax both of these limitations, having one universal cell design and one universal self-attending sequential network layout. Compared to other leading methods, our model stands out by being heavily based on soft attention, whereas most competing approaches are CNN-based, surprisingly lack any attention mechanism. Since attention is commonly used in models designed for standard VQA BID3 BID10 BID21, it is reasonable to assume that it would be beneficial to incorporate such methods into visual reasoning systems for the CLEVR task as well. In fact, attention mechanisms should be especially useful for multi-step reasoning questions such as those present in CLEVR. Such questions refer to several relations between different objects in the image and feature compositional structure that may be approached one step at a time. Thus, it should be beneficial for a cogent responder to have the capacity to selectively focus on on one or some objects at each step, traversing the relevant relational links one after the other, both at the image level, and at the question level. Moreover, attention mechanisms enhance our model's ability to perform reasoning skills that pertain to aggregation of information across different regions, such as counting, finding maximum value, or performing other reduction operations over information that is spread across the image. Indeed, as discussed in 4, all existing models for visual reasoning, most of which lacking any attention mechanism, struggle with the counting and numerical comparisons questions present in CLEVR. Conversely, our model proves much more capable of performing these reasoning skills, outperforming the other approaches by a wide margin. Noticeably, incorporating soft attention into our model makes it much more adept at performing such aggregation reasoning skills, successfully addressing the this type of questions. Finally, as pointed out by BID21;, soft attention confers the model with robustness to noise introduced from irrelevant information presents in the image, and higher capacity for handling larger and more diverse vocabulary, the latter being demonstrated in 4. 
It allows the model to separate the wheat from the chaff, selectively attending only to the relevant information and, arguably, making it more resilient to both visual and linguistic variations.

We present a novel architecture based on dynamic memory, attention, and composition for the task of machine reasoning.
1,728
scitldr
Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space. This allows us to''imagine'' the information captured in the latent space. We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a $\beta$-VAE. Variational auto-encoders (VAEs) have made a significant impact since their introduction by. However, one of their perceived problems is their reconstruction performance. This has spawned a wave of research into trying to improve the reconstruction performance (; ; ; ;). We argue that such attempts are misguided. The whole point of VAEs is to capture only compressible information and discard information specific to any particular image. This is a consequence of the well known evidence lower bound or ELBO objective function consisting of a negative log-probability of generating the original image from the latent representation (this is often implemented as a mean squared error between the image and the reconstruction, although as we argue in Appendix A this term should be proportional to the logarithm of the mean squared error) and a KL-divergence between the probability distribution representing a latent code and a'prior distribution' (usually taken as a multivariate normal with mean zero and unit variance). These two terms have a nice interpretation in terms of the minimum description length -this has been described elsewhere, for example,. The KL-term can be viewed as a measure of the amount of information in the latent code while the log-probability of the image measures the amount of information required to change the image produced by the decoder into the input image (see Section 3 for details). That is, the latent space of a VAE can be viewed as a model of the dataset-capturing compressible information while not encoding any image specific information (which is cheaper to communicate using the reconstruction loss). The great strength of a VAE is that it builds a model of the dataset that does not over-fit (i.e. code for in-compressible features found in specific images). However, because of this it typically will not do a good job of reconstructing images as the latent code does not contain enough information to do the reconstruction (for very restrictive dataset such as MNIST and Celeb-A a lot of information can be captured in the latent space, but for more complex datasets like ImageNet or CIFAR the reconstructions are poor). Of course, if you want good reconstructions on the training set then the simplest solution is to remove the KL-divergence term and just use an autoencoder. However, having a model that does not over-fit the dataset can be useful, but in this case the decoder of a standard VAE should not be regarded as a generative model-that is not its purpose. If we wish to generate realistic looking images we need to imagine the information discarded by the encoder. As a rather simplified analogy, consider a verbal description of an image "a five year old girl in a blue dress standing on a beach". 
If we asked different artists to depict such scene there is clearly not enough information to provide pixel-wise or feature-wise similarity between their interpretation although each artist could render a convincing image that satisfies the description. In a similar manner if we want a VAE to act as a generative model we need to build a renderer that will imagine an image consistent with the latent variable representation. A simple way to achieve this is using a modified Generative Adversarial Network (GAN). We call such a model a latent space renderer-GAN (or LSR-GAN). To generate an image we choose a latent vector z from the prior distribution for the VAE. This is passed to a generator network that generates an image,x, with the same dimensions as that of the dataset used to train the VAE. The generated image has both to convince a discriminator network that it is a real image-as is usual for a GAN -at the same time the VAE encoder should mapx close to z. To accomplish this we add an additional cost to the normal GAN loss function for the generator (L GEN) where q φ (·|x) is the probability distribution generated by the VAE encoder given an imagex and z is the latent vector that was put into the GAN generator. Note that when training the LSR-GAN we freeze the weights of the VAE encoder. The constant λ is an adjustable hyperparameter providing a trade-off between how realistic the image should look and how closely it captures the information in the latent space. This modification of the objective function can clearly be applied to any GAN or used with any VAE. Although the idea is simple, it provides a powerful method for visualising (imagining) the information stored in a latent space. Interestingly, it also appears to provide a powerful regularisation mechanism to stabilize the training for GANs. Combinations of VAEs and GANs are, of course, not new (; ; ; ;). In all cases we are aware of GANs have been combined with VAEs to "correct" for the poor reconstruction performance of the VAE (see Appendix B for a more detailed discussion of the literature on VAE-GAN hybrids). As we have argued (and expound on in more detail in Section 3), we believe that the decoder of a VAE does the job it is designed to do. They cannot reconstruct images accurately, because the latent space of a VAE loses information about the image, by design. All we can do is imagine the type of image that a point in the latent space represents. In the next section, we show examples of images generated by the LSR-GAN for both normal VAEs and β-VAEs (we also spend time describing VAEs, β-VAEs and the LSR-GAN in more detail). In addition, in this section we present a number of systematic experiments showing the performance of a VAE and LSR-GAN. In Section 3, we revisit the minimum description length formalism to explain why we believe a VAE is doomed to fail as a generative model. We conclude in Section 4. We cover more technical aspects in the appendices. In Appendix A we show that the correct loss function for a VAE requires minimising a term proportional to the logarithm of the mean squared error. In Appendix B we draw out the similarities and differences between our approach to hybridising VAEs with GANs and other work in this area. We present some additional experimental in Appendix C. A detailed description of the architecture of LSR-GAN is given in Appendix D. We end the paper with Appendix E by showing some samples generated by randomly drawing latent variables and feeding them to the LSR-GAN. 
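The exact expression for L_GEN is not reproduced above, but the description suggests the usual generator objective plus a lambda-weighted term rewarding agreement between the seed latent vector z and the frozen VAE encoder's posterior for the generated image. The sketch below is one plausible reading, assuming the encoder returns the mean and log-variance of a diagonal Gaussian; all function and argument names are hypothetical and constant terms of the log-density are dropped.

```python
import torch

def lsr_generator_loss(discriminator, encoder, generator, z, lam=1.0):
    """Sketch of the modified LSR-GAN generator objective."""
    x_hat = generator(z)
    # standard non-saturating GAN term: make D assign high probability to x_hat
    gan_term = -torch.log(discriminator(x_hat) + 1e-8).mean()
    # encoder-consistency term: log q_phi(z | x_hat) under a diagonal Gaussian
    mu, log_var = encoder(x_hat)                     # frozen VAE encoder
    log_q = -0.5 * (log_var + (z - mu) ** 2 / log_var.exp()).sum(dim=1)
    return gan_term - lam * log_q.mean()
```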
A natural question to ask is what information about an image gets represented in the latent space of a VAE. To answer this we can use the VAE encoder to generate a distribution q_φ(z|x) representing that image in the latent space (see Section 2.1 for details on VAEs). From this distribution we can sample points in the latent space and feed them to the LSR-GAN generator. We show examples of this for both CIFAR-10 and ImageNet (down-sampled to 64 × 64) in Figure 1. In all cases in this paper the input images are taken from a test set that is independent of the training set. Note that both CIFAR-10 and ImageNet are "hard" for VAEs in the sense that they represent extremely diverse sets of images. As a consequence, the VAE latent space will struggle to store detailed information about the images and the VAE reconstructions will be poor. We have repeated this for a β-VAE (see Section 2.3 for a full description of β-VAEs). We note that there is very little variation between the different samples drawn from q_φ(z|x), particularly for the standard VAE (β = 1), showing that the latent space of the VAE is relatively smooth (there is more variation when β = 20). To get a sense of the variation in the information stored in latent spaces we show in Figure 2 input-output pairs, where the left image is the input and the right image is the output generated by the LSR-GAN generator seeded with a latent vector encoding of the input image. The reconstructions capture the shape, but clearly lose a lot of detail. In some cases it appears that the type of object is being captured, although in the case of the boat with the β-VAE (with β = 20) the wrong object is being rendered. The structure of a VAE is represented schematically below. We sample an input x from some dataset, D. To be concrete we will consider the case where the inputs are images, although clearly a VAE can be used to represent many different types of data. For each input x the encoder outputs a mean vector, µ, and a standard deviation vector, σ, that describe an axis-aligned normal distribution N(z | µ, diag(σ²)). A latent variable z is sampled from this distribution and then fed to a decoder. For simple black and white datasets such as MNIST the decoder outputs a scalar at each pixel location that can be interpreted as the probability that the pixel is black. For more complex datasets the decoder usually generates a "reconstruction" x̂ = D_θ(z). The probability of generating a pixel value x_i is then usually taken to be a normal distribution, N(x_i | x̂_i, σ²), with mean given by the corresponding pixel of the reconstruction and a variance σ² that measures the expected size of the errors between the input images, x, and the reconstructions, x̂. The loss function for a VAE is equal to the negative evidence lower bound (ELBO), L = −E_{q_φ(z|x)}[log p_θ(x|z)] + KL(q_φ(z|x) || p(z)). As explained in Appendix A, log(p_θ(x|z)) is chosen to be proportional to the logarithm of the reconstruction error between x̂ and the input image x; in our experiments this produced better reconstructions than replacing log(p_θ(x|z)) with the mean squared error (a sketch of this loss appears below). LSR-GAN is a novel hybridization of the VAE and GAN models. The most distinctive difference of the LSR-GAN from previous work is that it is a two-stage model. In the first stage we train the VAE. Having done this we freeze the weights of the VAE and train the GAN. We train the discriminator, D, of the LSR-GAN in the same way as a normal GAN. That is, we minimise a loss function of the form L_D = −E_x[log D(x)] − E_z[log(1 − D(G(z)))], where G is the generator (the decoder) of the LSR-GAN. The job of the discriminator, D, is to decide whether its input is a real image or not.
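Returning briefly to the VAE objective described at the start of this section, the loss (reparameterised sampling from q_φ(z|x), a reconstruction term proportional to the logarithm of the mean squared error as argued in Appendix A, and a KL term against a unit-Gaussian prior) can be sketched as follows. This is a hedged illustration, not the authors' code; the β argument anticipates the β-VAE discussed later and reduces to a standard VAE at β = 1.

```python
import torch

def vae_loss(encoder, decoder, x, beta=1.0):
    """Negative ELBO for one mini-batch (sketch).

    The reconstruction term uses (N/2) * log(mean squared error), following the
    argument of Appendix A; the KL term is the closed form for a diagonal
    Gaussian against a standard-normal prior.  beta=1 gives a standard VAE.
    """
    mu, logvar = encoder(x)                               # parameters of q_phi(z|x)
    std = (0.5 * logvar).exp()
    z = mu + std * torch.randn_like(std)                  # reparameterisation trick

    x_hat = decoder(z)
    n_pix = x[0].numel()                                  # pixels * channels per image
    mse = ((x - x_hat) ** 2).flatten(1).mean(dim=1)       # per-image mean squared error
    recon = 0.5 * n_pix * torch.log(mse + 1e-8)           # proportional to -log p(x|z)

    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)

    return (recon + beta * kl).mean()
```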
Thus, to optimise the loss function we need to maximize the log-probability of passing the real data, x, while minimising the log-probability of accepting a random sample G(z) generated by the generator G seeded with a random latent vector z. The architecture of the generator is the same as that of a normal GAN, but the loss function is slightly different. We add an additional term, λ E_z[log q_φ(z | G(z))], to the usual generator objective (the generator gains when the frozen encoder maps G(z) back close to z), giving the full generator loss L_GEN. The parameters of the discriminator and generator are trained in the usual tick-tock fashion using gradient descent. We built the VAE and the generator of the GAN using a ResNet as it gave slightly better performance than using a standard CNN. The architecture of the discriminator is the same as in DCGAN. The architecture is described in Appendix D. To test the LSR-GAN we use the VAE to generate a latent representation z for an image drawn from an independent test set. The latent vector is then used as a seed value for the generator in the LSR-GAN. The LSR-GAN produces sharper reconstruction images than the VAE (see Figure 3). Although not visually so obvious, we have used a quantitative measure of sharpness computed as the luminance-normalised Laplacian. For the reconstructed images from the VAE we obtained a measure of 0.17 ± 0.03 while for the LSR-GAN we obtain 0.28 ± 0.08 (i.e. an improvement of a factor of two). We have also computed the FID measure of image quality for CIFAR-10. For images seeded from a testing example the VAE achieved a score of 89.8 while the LSR-GAN achieved a score of 44.1, while for images seeded with a random latent variable (i.e. z ∼ N(0, I)) the FID score for the VAE is 138.6 while for the LSR-GAN it is 47.4. This should not be surprising. The decoder of the VAE is trained only where there are training images. Despite the fact that the KL-divergence tries to ensure that as much latent space as possible is used, the constraint of minimising the reconstruction loss means that most of the latent space is far from a training example. Although the VAE does not do too badly generating testing examples, these tend to be substantially closer in the latent space to the training examples than random samples. In contrast, the LSR-GAN is trained on random samples so that the generator has to produce "realistic" images over the whole latent space. Of course, whether these generated images represent anything recognisable is open to question. For diverse training sets such as CIFAR-10 and ImageNet this may be very difficult. What image should we expect from a latent vector halfway between a truck and a bird? In Appendix E we show images generated by seeding the LSR-GAN with random latent variables for CIFAR-10, ImageNet, MNIST and Celeb-A. A goal of generating a latent representation is for the representation to be disentangled. Intuitively disentanglement seems clear: we would want information that is somehow causally independent to be encoded into orthogonal directions (or different variables) in our latent space. Unfortunately, this is not only quite difficult to achieve in practice (at least, in an unsupervised setting), but it is even difficult to formulate precisely. Despite this difficulty, there have been many attempts to achieve disentanglement. One of the most prominent has been the β-VAE, where the KL-divergence term in a normal VAE is weighted by a parameter β. The argument is that by making β ≫ 1 we encourage disentanglement. Contrariwise, by making β ≪ 1 we make a VAE closer to an auto-encoder.
This improves the reconstruction performance on the training examples, but at the cost of allowing the latent space to over-fit the training set. In Figure 4 we show examples of input-output pairs for different values of β. We observe that for large β the outputs are quite different from the input images in contrast to small β where many more details of the original input are captured. Although the LSR-GAN model generates slightly clearer, less blurry, images, it has a lower reconstruction error than the VAE decoder. We show the mean squared error measured on a testing set from CIFAR-10 as a function of β in Figure 5 (a). This poor performance of the LSR-GAN is unsurprising, it uses the same information as the VAE (i.e. the information stored in the latent space). By producing sharper images it will pay the price of getting the boundary wrong. The blurry edges from the VAE is a way to hedge its bet and reduced the mean squared error. Interestingly, the mean squared error remains fairly constant as we increase β from a low value, until we reach β = 1 after which it rapidly increases. One interpretation of this fact is that the VAE with β = 1 is successfully encoding all the useful information (i.e. compressible information) so for reconstructing unseen images it will perform as well as an auto-encoder. As we increase β above 1, the reconstruction error increases rapidly. In Figure 5 (b) we show the classification performance as measured by a simple classifier trained on the CIFAR-10 training set. The classifier performance achieved an 84% correct classification on the raw images. We find little variation as we decrease β below 1. As we increase β above 1 the classification accuracy falls off. Again we can attribute this to the latent space of the VAE (with β = 1) capturing most useful information. Interestingly the high-β VAE fails to capture (a) (b) Figure 5: Performance of the VAE (blue points) and LSR-GAN (red points) versus β. In (a) we show the means squared error, while in (b) we show the classification performance using a classifier taking the reconstructed images. The images are taken from CIFAR-10. Error bars show ±1 standard error. "objectness" well. This suggests that, at least for CIFAR-10, the type of object does not contain very much information about its appearance and is rapidly discarded. To understand what VAEs do it is useful to interpret them in the framework of the minimum description length (MDL) formalism. In MDL we consider communication a dataset D through a communication channel using as few bits as possible. We can do this using lossy compression, where we encode each input x by a code z, which we communicate down our channel. The receiver decodes the message and produces an approximation of the inputx. To communicate the original information we send the code z together with the error = x−x between the input x and the reconstructionx. Because the distribution of errors, p, is more sharply concentrated than the original inputs, p(x), this method allows us to communicate the image more efficiently than transmitting the raw pixel values. The expected cost of transmitting an input is where M (z) is the number of bits needed to communicate the code, z, and E is the number of bits required to communicate the error,. In the MDL formalism we attempt to find a code that minimises the description length L. To communicate the model and errors we need to use an optimal coding strategy. Rather than specifier and actual code we can use the Shannon bound (i.e. 
the negative log-probability of the tokens we transmit). For this to be meaningful, we need to specify both the errors and code to a finite precision. The precision of the errors will determine the accuracy of the data we communicate. If the i th component of the error is distributed according to p(i) then the cost of communicating the error to a precision of ∆ is approximately − log(p( i) ∆) = − log(p( i)) − log(∆). The factor − log(∆) is common to all coding schemes so is irrelevant to choosing optimal codes z. In contrast the precision to which we transmit the model will directly determine the cost M (z). There is a balance to be struck: a more precise model can potential lead to a better reconstructionx, reducing the reconstruction cost, E, but at the same time increasing the cost, M (z), of communicating the code z. The KL-divergence term, KL q(z) p(z) (also known as the relative entropy) can be interrupted as the communication cost (in nats) of transmitting a random variable z with uncertainty given by q(z) assuming an underlying probability distribution of all random variables of p(z). Using this interpretation we see that the loss function of a VAE is equivalent to the expected message length (in nats) of communicating a sample from the dataset D by using a random variable z with uncertainty q(z). By minimising the loss function we find a coding scheme with the minimum description length (or, at least, an approximate local minimum). By encoding a message as a random variable z drawn from a distribution q φ (z|x) the VAE is able to find an optimal balance between accuracy to which it transmits the model (determined by the standard deviation vector, σ, generated by the VAE encoder) and the need to reduce the reconstruction error. From an MDL perspective the ELBO is the correct objective function, and should not be regarded as a approximate lower bound to what we really want to achieve. If there are too many dimensions in the latent space then some of the components of z (channel in information theory terms) are such that z i is approximated distributed by N (z i |0, 1) for all inputs x. The channel is effectively "switched off" (and it will be ignored by the decoder as it is just a source of random noise). This is referred to as latent variable collapse and is sometimes viewed as problematic, however, from the MDL viewpoint it acts as an elegant automatic dimensionality selection technique. The job of the decoder in a variational autoencoder is to reconstruct the image only using information that can be compressed. Image specific information is ignored. For example, information about the precise shape of an object is probably not compressible. As a the decoder tends to hedge its bets and has a blurry outline. Of course, some encoders and decoders will be better than others, but to date there is little evidence in the literature that the performances of VAEs are massively suboptimal, at least, when working with images. With an extremely powerful encoder and decoder and a limited dataset it would be possible for the encoder to communicate an identifier of the input image and for the decoder to reproduce the image just from the identifier, thus avoiding communicating any information about the visual content of the image-this requires that the decoder memorises all the images. This would be an extreme case of what is sometimes called posterior collapse. 
There is some evidence that with very strong encoders and decoders that the amount of information stored in the latent space (as measured by the KL-divergence) decreases . This might point to a weakness of the VAE set-up-the MDL set-up really only makes sense when the dataset is arbitrarily large-, but this problem could be ameliorated by data augmentation. However, using standard CNN encoders and decoders we found no evidence for memorisation of the images (for example, the VAE would produce a similar level of reconstruction for images from a separate test set). For language modelling there seems to be more evidence that VAEs often fail to extract information in the latent space, but for images it seems likely that a properly trained VAE will extract a good fraction of the compressible information. We believe that the failure of the VAE decoder to produce high quality reconstructions (except in the case very of simple datasets such as MNIST and possibly CELEB-A) is because to do so would require communicating information that is noncompressible. As a consequence we should not think of the decoder of a VAE as a generative model: It will, by design, produce blurry and poor quality reconstructions. We want this to ensure that the latent space only captures information that is common across many images. We see the mapping from images to latent space as a many-to-one mapping. Thus, the mapping from the latent space to images will be ambiguous and the best we can do is imagine an image compatible with the latent variable: exactly what we have designed the LSR-GAN to do. VAEs are often taken to be a pauper's GAN. That is, a method for generating samples that is easier to train than a GAN, but gives slightly worse . If this is the only objective then it is clearly legitimate to modify the VAE in anyway that will improve its performance. However, we believe that this risks losing one of their most desirable properties, namely their ability to learn features of the whole dataset while avoiding encoding information specific to particular images. We have argued that because of this property, a VAE is not an ideal generative model. It will not be able to reconstruct data accurately and consequently will struggle even more with generating new samples. One of the weaknesses of the vast literature on VAEs is that it often attempts to improve them without regard to what makes VAEs special. As we have argued in this paper, a consistent way of using the latent space of a VAE is to use a GAN as a data renderer, using the VAE encoder to ensure that the GAN is generating images that represent the information encoded in the VAE's latent space. This involves "imagining" the information that the VAE disregards. LSR-GAN can be particularly useful in generating random samples, although, as shown in Appendix E, for very diverse datasets the samples are often not recognisable as real world objects. Although there are already many VAE-GAN hybrids, to the best of our knowledge, they are all designed to "fix" the VAE. In our view VAEs are not broken and "fixing" them is actually likely to break them (i.e. by encoding image specific information in the latent space). Although, the main idea in this paper is relatively simple, we believe its main contribution is as a corrective to the swath of literature on VAEs that, in our view, often throws the baby out with the bath water in an attempt to fix VAEs despite the fact that perform in exactly the way they were designed to. 
A ON THE ELBO In the standard VAE we maximise the log-probability of generating the original image. In the original paper this was achieved by the decoder outputting a probability distribution akin to what happens in the latent space. More often it is assumed that the pixel errors are normally distributed with some variance σ 2. Thus the log-probability of generating all the images is where the sum is over all predicted pixels-i.e. the number of pixels in an image times the number of colour channels times the number of examples (or, more usually, the mini-batch size). However, (at least, if we make the natural assumption that the errors have mean zero). As a consequence 2 so that we should minimise N log σ 2 /2. In information theory terms this tells us that it cheaper to communicate the residues if they are more tightly concentrated. Note that since σ 2 is proportional to the mean squared error, E MSE, it suffices to minimise N log(E MSE) /2. We note that which is precisely the gradient of if we ignored the dependence of σ 2 onx i. In many publically available implementations of VAEs the algorithm minimises 2 which arbitrarily assumes σ 2 = 1 2 rather than its true value. This means that these implementations are effectively running a β-VAE with some unknown β (in our experience with β > 1). This makes comparing from different VAE implementations difficult. For example, rescaling outputs to lie in the range [−1, 1] rather than would change the effective β-value. The hybridisation of VAE (or autoencoder) and GAN models have been developed for several years. There are many attempts on this area and we compare LSR-GAN to the most related work in this section. The adversarial types autoencoder is the most intuitive and simplest way to combine a VAE or an autoencdoer and a GAN models. Most of these models introduce a discriminator into the autoencoder training. AAE applies a discriminator to distinguish the output of encoder and the random sample from the prior distribution. It uses this discriminator to replace the KL term in VAE. VAE/GAN is the first model that applied feature-wise errors and the input of its generator contains three different types images: the reconstruction images, the generated images and the real images. The same as our model, it collapse the decoder and the generator into one. MDGAN is another AE-GAN hybrid which is close to VAE/GAN, they try to match the manifold of GAN to real data by adding a geometric metrics regulariser and mode regulariser. None of these methods feed the output of generator back into the encoder or train their network in two-stages, which is the biggest difference between these methods and ours. Also, many of these hybrid models adopt an autoencoder instead of VAE while the VAE in our model cannot be replaced by an autoencoder. There are not many models that use the output of decoder to feed the encoder. The Introspective Adversarial Network (IAN) is a unified model which means the discriminator is not separate. IAN only encodes the feature that extracted by discriminator rather than the raw images. The discriminator of IAN extracts features from both raw images and synthetic images. The generator accept both random sample and the output of the discriminator as inputs at the same time. In contrast, our models only accept one input. Another model that adopts the introspective method is IntroVAE , it constructs the inference model E and generator model G in a circulation loop. IntroVAE has the ability to generate high-resolution images. 
But it does not contain any discriminator network. The most closely work to our LSR-GAN is VEEGAN . It introduces a second network F θ to the GAN. The task of F θ is to map both the real images and synthetic images to a Gaussian distribution which is what we ask the encoder to do. When the input of F θ is the output of generator, the objective function minimise the distance between the input of generator and the output of F θ. If the input of F θ is real data, the objective function minimise the cross entropy between Gaussian prior and the output of F θ. Another related model is the Generative moment matching networks (GMMN) . In this model the autoencoder is frozen and they then minimize the maximum mean discrepancy (MMD) between the generated representation and data representation, and they use an uniform prior to generate the representations. In LSR-GAN, we match two Gaussian distributions in maximizing the probability distance. None of these related works are two-stages models except GMMN. Also, to the best of our knowledge, LSR-GAN is the first VAE-GAN hybrid model that applies the probability distance in the loss function. We briefly present some additional experimental data. In Table 1 we present measurements of the performance of outputs from both VAEs and LSR-GAN for different values of β. Some of this data is also presented graphically in Figure 5, but we have included additional measurements. The performance of the LSR-GAN depends on the hyper-parameter λ. This balances the need to produce convincing images (from the discriminator's point of view) with the requirement that the latent space of the GAN should be close to that for the VAE. These two objectives are not necessarily contradictory, although we will see that changing λ has benefits and drawbacks. In Figure 6 we show the effect of changing λ over approximately three orders of magnitude on (a) the absolute classification accuracy (b) the classification accuracy compared to the class labels predicted by the classifier on the raw images (c) the mean squared reconstruction error and (d) the variance in the predictions when choosing different samples from q φ (z|x). We see that increasing λ improves the classification performance (both relative and absolute). However, and perhaps surprisingly, increasing λ produces a significant reduction in the reconstruction error. More intuitively it also causes a reduction in the variance between images sampled independently from q φ (z|x). That is, using the encoder in the LSR-GAN acts a regulariser ensuring close by points in latent space map to similar images. More details are given in Table 2. (a) (b) (c) (d) Figure 6: Graphs showing the classification performance of images generated by our GAN with different λ values (on test dataset). The left-bottom graph shows the mean square error between reconstruction images and imagination images. The right-bottom graph shows the variance between images generated by our GAN from latent representations of an input image. The x-axis is the log value of different λ, the errors are too small which make error bars look like lines. In this appendix we describe the detailed architecture of the VAE and LSR-GAN we used. Table 3 describes the structure of the VAE's encoder and decoder and the GAN's generator and discriminator networks. The encoder and decoder/generator are based on a ResNet. The ResNet block structure is shown in Figure 7. Both networks are optimized using Adam with a learning rate of 2 × 10 −4 and β 1 = 0.5. 
The code we used to implement the models is available at https://github.com/iclr-2020-zzz/LSR-GAN. In this appendix we show sample images generated by LSR-GAN starting with a random seed z ∼ N (0, I). These are shown in Figure 9 for an LSR-GAN trained on CIFAR-10 and ImageNet. Although the images superficially look reasonable on close inspection it is clear that most samples for the LSR-GAN trained on CIFAR-10 and ImageNet are not real world objects. This reflects the fact that the images for these two dataset are very variable leaving most of the latent space representing rather surreal objects. Figure 8: Random samples generated by LSR-GAN trained on CIFAR10 and ImageNet with β =1 and λ=1. We have also trained LSR-GAN on MNIST and Celeb-A with samples shown in Figure 9. Perhaps unsurprisingly, most samples are identifiable. Figure 9: Random samples generated by LSR-GAN trained on MNIST and Celeb-A with β =1 and λ=1.
To understand the information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space.
1,729
scitldr
Human brain function as measured by functional magnetic resonance imaging (fMRI), exhibits a rich diversity. In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience. Our work is motivated by the view that generative models provide a useful tool for understanding this variability. To this end, this manuscript presents two novel generative models trained on real neuroimaging data which synthesize task-dependent functional brain images. Brain images are high dimensional tensors which exhibit structured spatial correlations. Thus, both models are 3D conditional Generative Adversarial networks (GANs) which apply Convolutional Neural Networks (CNNs) to learn an abstraction of brain image representations. Our show that the generated brain images are diverse, yet task dependent. In addition to qualitative evaluation, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers (also known as decoding, or brain reading). Our approach achieves significant improvements for a variety of datasets, classifi- cation tasks and evaluation scores. Our classification provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript. Functional Magnetic Resonance Imaging (fMRI) is a common tool used by cognitive neuroscientists to investigate the properties of brain function in response to stimuli. Classic analysis approaches BID19 focused on analyzing group-averaged brain function images. However, it was discovered that brain activation patterns vary significantly between individuals. Thus, modern analysis now prioritizes understanding the inter-subject variability of brain function (; BID6 . Our work is motivated by the view that generative models provide a useful tool for understanding this variability -as they enable the synthesis of a variety of plausible brain images representing different hypothesized individuals, and high-quality generative models can be analyzed to posit potential mechanisms that explain this variability BID10 . The presented in this paper provide -to our knowledge for the first time, positive suggesting that it is indeed possible to generate high quality diverse and task dependent brain images. While we can qualitatively evaluate generative brain images, quantitative evaluation allows us to objectively compare between various . To this end, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers. The use of classifiers to predict behavior associated with brain images is also known as decoding or brain reading BID21 BID16). Classifiers such as support vector machines and deep networks have been applied for decoding brain images. For example, attempted to classify which of 10 categories of object a subject was looking at (including similar categories, such as horses and cows) based on limited number of brain images. Besides visual tasks, BID14 distinguished active regions of brains when subjects listened to linguistic words, where the stimuli included five items from each of 12 semantic categories (animals, body parts etc.).Beyond providing a model for individual variability, high quality brain image synthesis addresses pressing data issues in cognitive neuroscience. 
Progress in the computational neurosciences is stifled by the difficulty of obtaining brain data either because of a limited culture of data sharing, or due to medical privacy regulations BID18. For the computational neuroscientist, generated images deliver unlimited quantities of high quality brain imaging data that can be used to develop state of the art tools before application to real subjects and/or patients BID21. This approach of using modern generative models to synthesize data, which in turn accelerates scientific study, has already proven useful in many scientific fields such as particle physics and astronomy BID1. Our work represent a first application for this approach to neuroscience. One of the promising generative models are Generative Adversarial Networks (GANs) BID7, capturing complex distributions using a non-cooperative two-player game formulation: a generator produces synthetic data by transforming samples drawn from a simple distribution; a discriminator focuses on distinguishing synthetic and real data. Despite (or possibly due to) the compelling formulation, GAN training is known to be unstable. To address this difficulty various variants have been proposed. Wasserstein GANs (WGANs) formulate the objective using the Wasserstein distance rather than the classical Jenson-Shannon divergence. Improved training of WGANs BID9 applies an additional gradient penalty, which avoids the critical weight clipping in WGANs which might lead to pathological behavior. Dualing GANs restrict the discriminator and formulate the dual objective . Beyond improving the stability, conditional GANs make it possible to control the data generation process by conditioning the model on additional information. Auxiliary Classifier GANs (AC-GANs) BID15 ) unify a GAN and classifier to a single architecture, employing labels to generate ImageNet samples. 3D GANs BID23 reconstruct 3D objects and BID4 propose to use improved WGAN to enhance the stability of 3D GANs. We make the following contributions in this paper:1. We develop Improved Conditional Wasserstein GANs (ICW-GAN) and Auxiliary Classifier and Discriminator GANs (ACD-GAN), two types of 3D conditional GANs to synthesize fMRI brain data, both of which we find to be stable to train. 2. We assess the qualitative quality and diversity of generated brain volumes. Our suggest that the proposed models are able to generate high quality task-dependent and diverse 3D brain images. 3. We evaluate our models on three datasets using a series of image classification tasks with support vector machines and deep network classifiers at two levels of brain image resolution. Results show that augmenting training data using synthetic data generated by our models can greatly improve test classification accuracy of brain volumes. We are unaware of any published papers using neural networks to generate brain imaging data. However, neural networks have been used by several authors for classifying brain imaging data. BID5 and BID12, used 2D deep nets to extract features of fMRI brain images to classify brain states. BID14 applied both 2D and 3D neural networks to classify fMRI brain data. BID20 decoded fMRI data of video stimuli and classify data into visual categories. Similarly, BID14 extracted features from 4-D fMRI data and used deep learning methods for discrimination of cognitive processes. To learn a distribution over data x, a GAN BID7 formulates a 2-player noncooperative game between two deep nets. 
The generator G uses a random noise vector z sampled from a prior distribution P z (z) as input and produces an image G(z). The generator is trained to fool the discriminator, which receives either synthetic data or real data and differentiates between them. Formally, G and D play the following two-player minimax game with value function V (G, D): DISPLAYFORM0 For the conditional GAN , both G and D are conditioned on some extra information y, for instance, class labels or other features. The conditioning can be presented by feeding y into both the generator and the discriminator. This work is successful in one-to-many mappings such as image labeling with many tags. The generator of our ICW-GAN. The 128 dimensional encoding z is drawn from a multivariate normal distribution. The label vector is a one-hot encoding, i.e., the one entry that equals one represents the class that the generated volume belongs to. It is concatenated to input and hidden layers and for each of these layers, a fully connected layer transforms the label vector to a volume. Our stride in the de-convolutional layers is in the batch, height, width, length, feature map dimension. The Wasserstein GAN (WGAN) uses Wasserstein-1 distance W (q, p) as the objective function. It computes the minimum cost of transporting mass to transform the distribution q into the distribution p. Under this circumstance, W (q, p) is continuous everywhere and its gradient with respect to its input was found to be more stable than its classical GAN counterpart. Improved Wasserstein GAN (IWGAN) BID9 argues that weight clipping of the critic, i.e., the discriminator, in WGANs inevitably causes the gradient to either vanish or to explode. To address this issue BID9 propose an alternative penalty term in the critic loss based on the gradient norm. In an auxiliary classifier GAN (AC-GAN) BID15, every generated sample has a corresponding label which can be used as a condition in the generator and the classifier. The discriminator is modified to contain an auxiliary decoder network to reconstruct the training data class. Further, 3D-GANs BID23 extend GANs to 3D object generation. Different from classical GANs, 3D-GANs apply the three dimensional convolution in both G and the D. By learning deep object representations, 3D GANs can generate visually appealing yet variable 3D object volumes. In the following we introduce two models for fMRI data generation: ICW-GAN and ACD-GAN. They differ in model structure and in the way label information is taken into account. We first show the model structure of 3D ICW-GAN and its downstream classifiers in Section 3.1, and present ACD-GAN in Section 3.2. Similar to classical generative adversarial networks (GANs), ICW-GANs are formulated as a noncooperative two-player game between two adversaries: a generatorx = G θ (z), which generates artificial samplesx from randomly drawn latent encodings z via a transformation using a deep net parameterized by θ; and a discriminator D w (x) represented via the logit obtained from a deep net parameterized by w. Following Arjovsky et al. FORMULA0, we build our models based on minimizing the Wasserstein distance via min DISPLAYFORM0 where {D w} w∈W denotes a set of functions that are K-Lipschitz for some K. 
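The Wasserstein formulation referenced above amounts, in its standard form, to a difference of the critic's expected scores on real and generated data. A minimal sketch of the critic and generator losses follows; this is the textbook WGAN form rather than the authors' exact implementation, and the K-Lipschitz constraint on the critic is enforced separately, as discussed next.

```python
import torch

def wgan_critic_loss(critic, real, fake):
    """Wasserstein critic objective (sketch): the critic D_w is pushed to score
    real volumes higher than generated ones, which estimates the Wasserstein-1
    distance up to the Lipschitz constraint (handled by clipping or a gradient
    penalty, not shown here)."""
    return -(critic(real).mean() - critic(fake).mean())

def wgan_generator_loss(critic, fake):
    """The generator is trained to raise the critic's score on its samples."""
    return -critic(fake).mean()
```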
IW-GANs BID9 ) provide a more stable method for enforcing the Lipschitz constraint by employing a gradient penalty rather than weight clipping, when optimizing: DISPLAYFORM1 Herebyx is a convex combination of real data and artifical samples, i.e.,x ← x + (1 −)G(z) with drawn from a uniform distribution (∼ U).In our models, we extend IW-GAN in two directions. First, because fMRI data is three dimensional, 3D convolution and deconvolution are used in order to maintain the spatial structure and voxel information of the fMRI data. Second, we condition both the discriminator D w and the generator G θ on available labeled data. As shown in FIG0, our generator consists of three fully convolutional layers. In convolutional layers, we use kernels with a size 4 × 4 × 4 and strides 2, with batch normalization and Leaky ReLU layers added between and a tanh layer at the end. The discriminator architecture is a mirrored generator, but no activation function is used at its end. This ICW-GAN model follows models proposed in BID9; BID23, however, we found the following modifications to stabilize training and supervise the generation process for classification data: compared to existing GANs, our ICW-GAN applies a more stable upper bound for Wasserstein distance as an objective function, leverages conditioning labels to supervise the generation process, and utilizes 3D convolution methods for 3D object generation. To include label information, we concatenate labels to the input and to the hidden layers. At the input of the generator, one-hot labels are combined with the brain vector. Then, for each of the intermediate layers, we use a fully connected layer followed by a tanh activation to transform the one-hot vector to a volume of appropriate size, i.e., 15 × 2 × 2 for the first hidden layer, and 15 × 3 × 3 for the next. We empirically found dimensions other than 15, e.g., 3 to work well too. We concatenate the label volume to intermediate volumes on feature dimension and pass the joint one to the next deconvolution layer. We follow the same procedure in the architecture of the discriminator. Referring to the volume labels via y, the objective function L of our ICW-GAN model is as follows: DISPLAYFORM2 wherex DISPLAYFORM3 λ is a gradient penalty coefficient. We optimize both the discriminator and generator loss using Adam optimizer BID11.DOWNSTREAM CLASSIFIERS FOR ICW-GAN Our work is partially motivated by the difficulty of acquiring fMRI data, so we consider the task of enlarging the dataset for downstream tasks such as classification, using synthetically generated samples. To assess the suitability of the developed generation procedure, we employ a variety of downstream classification processes. More specifically, we train a classifier with real ('Real') or real plus generated ('Real+G') data. We also develop a conditional GAN architecture which jointly trains the generator, the discriminator and the classifier. This form of training is inspired by the AC-GAN BID15 formulation. Our generator architecture is illustrated in FIG0 and identical to the one employed for the ICW-GAN. The combination of classifier and discriminator is provided in FIG1. As before, the discriminator assesses the logit of the probability that a provided sample is real or artificially generated. In addition, the classifier provides a probability distribution over the class labels. In this model, training passes labels through the generator and the discriminator in order to condition the adversarial process. 
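The gradient-penalty term added to the critic loss in the objective stated earlier in this section can be computed as sketched below. The interpolation over 5-D volume tensors, the optional label argument standing in for the one-hot conditioning concatenated inside the critic, and the coefficient value of 10 are illustrative assumptions rather than details taken from the paper.

```python
import torch

def gradient_penalty(critic, real, fake, labels=None, lam=10.0):
    """Gradient-penalty term of the improved Wasserstein objective (sketch).

    x_hat is a convex combination of real and generated volumes, as in the
    text; the critic's gradient norm at x_hat is pushed towards 1.
    """
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=real.device)   # one mix per 3-D volume
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    score = critic(x_hat, labels) if labels is not None else critic(x_hat)
    grads, = torch.autograd.grad(outputs=score.sum(), inputs=x_hat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```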
However, no labels are employed in the classification stream. The classifier loss, L C, is the log-likelihood of the correct class. Compared to L in Eq., the discriminator loss, L D includes the additional classifier component DISPLAYFORM0 The generator loss function, L G, consists of the likelihood of fake images and one part of L C, i.e., DISPLAYFORM1 Similarly, the discriminator loss function is given by L D = L + L C. We optimize the generator, classifier and discriminator loss functions using Adam optimizer BID11. We examine the performance of the two aforementioned models on three different datasets from Neurovault BID8 of generated fMRI data for classifier training, we focus on two variants of the model architecture for generating images. The original images are downsampled using the nilearn python package 2 to 13 × 15 × 11 (lower resolution) and 26 × 31 × 23 (higher resolution) spatial resolutions for collection 1952, and 13 × 15 × 11 (lower resolution) and 26 × 31 × 22 (higher resolution) resolutions for the other two collections. In Section 3, we present the architectures of our models for generating low resolution data, which only have 3 convolutional layers. For high resolution data we add one more convolutional and deconvolutional layer in the discriminator and generator respectively. For high resolution data, the classifier also uses 4 convolutional layers in the ACD-GAN. We first show qualitative for generated 3D volumes, then we provide quantitative for 3D volume classification. We use cross validation in all reported to carefully assess the classifier performance. See the supplementary material for details. Dataset 1: Collection 1952 Collection 1952 has 6573 brain images, 45 classes with a total number of 19 sub-classes. The labels include task description that subjects were going through during the imaging process, such as visual, language and calculate. A combination of 2 to 4 sub-classes are subsumed for a class. There are a total of 45 classes because not every potential combination is observed. In our experiments, we work on a subset of 36 classes because 9 of the 45 classes have less than 30 examples. If a class has more than 100 images, we split images in that class 7:1:2 into training, validation and test datasets. Below 100 images but more than 30 images we use a 3:1:2 data split. Note that we ensure that the test data always consists of real images only. We train the ICW-GAN and ACD-GAN for 2500 epochs with a learning rate of 1e-4 and 50% exponential decay for each 3000 iterations and a batch size of 50. 2-D projections of several brain volumes generated by ICW-GAN are illustrated in FIG2. The projections in the top row of FIG2 class'visual words, language, visual,' and those at the bottom are from class'places, visual, right hand.' By examining the generated brain images, we find diversity in both intra-class and betweenclass. We refer the interested reader to the supplementary material for a comparison of generated and real brain images. Inspired by BID15, we compute the multi-scale structural similarity (MS-SSIM) BID22 to examine the intra-class diversity of the data generated by ICW-GAN. The MS-SSIM score is a similarity metric based on image similarity at different resolutions. Its values are within. Higher MS-SSIM values correspond to more similar images. We measure the mean MS-SSIM score with 100 randomly chosen volumes of mixed data with a given class, and ones of real training data. 
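One way to compute the intra-class MS-SSIM diversity score described above is sketched below. The pairing protocol and the 3-D MS-SSIM implementation (passed in as ms_ssim_fn) are assumptions, since the excerpt does not specify them; lower mean scores indicate higher intra-class sample diversity.

```python
import random
import numpy as np

def mean_pairwise_ms_ssim(volumes, ms_ssim_fn, n_samples=100, seed=0):
    """Intra-class diversity measure (sketch).

    `volumes` holds all brain volumes belonging to one class; we draw
    `n_samples` of them at random and average MS-SSIM over random pairs.
    `ms_ssim_fn` is an assumed multi-scale SSIM implementation for volumes.
    """
    rng = random.Random(seed)
    chosen = [volumes[rng.randrange(len(volumes))] for _ in range(n_samples)]
    scores = []
    for _ in range(n_samples):
        a, b = rng.sample(range(len(chosen)), 2)      # one random pair per draw
        scores.append(ms_ssim_fn(chosen[a], chosen[b]))
    return float(np.mean(scores))
```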
In 13 × 15 × 11 resolution, we find that 22 classes with mixed data have a lower MS-SSIM score than only with real training data. In other words, 61.1% classes with mixed data have sample variability that exceeds those only with real training data. In 26 × 31 × 23 resolution, 69.4% classes with mixed data have a lower MS-SSIM score. See Figure 4 for details. To further assess the quality of the generated data we evaluate classification performance. Note that the test data is always composed of real images. The classification are shown in Table 1. Downsampling refers to the downscaling factor as already discussed. The second column indicates the type of training data we use for the classifier: only using real data ('Real'), or using the mixed data of real and generated volumes ('Real+G'). The third column denotes the classifier type, i.e., an SVM or a deep net ('NN'). We use the validation dataset to choose the best training models and use these models to classify the test data. We observe the deep net classifier to generally outperform SVMs and we observe generated data to be beneficial for classifier training. Dataset 2: Collection 2138 For data in collection 2138, there are 1847 brain images, 61 classes and 50 labels. Because of the small size of the dataset, we randomly choose 70% of the brain images as training data and leave 30% as test data. In this case, we do not have development data to supervise the training process; thus, we train our models for 1000 epochs in several runs and record the best classification , which are summarized in TAB3. In this collection, we downsample brains by a factor of 8 and 4 to volume sizes of 13 × 15 × 11 and 26 × 31 × 22 respectively. Similar to the Accuracy Macro F1 Precision Recall Table 4: Results of ACD-GAN on the three datasets in the low resolution setting. We find the ACD-GAN to perform slightly better than the ICW-GAN.earlier reported we observe deep nets to outperform SVMs, but more importantly, generated data during training was again suitable to improve the classifier performance on real data. Dataset 3: Collection 503 5067 brain images are in collection 503. Subjects in this dataset are required to respond to 30 images from the International Affective Picture Set BID2. These 30 images were used to train the Picture Induced Negative Emotion Signature also described in this paper. We consider the 30 pictures classes of brain data. In other words, there are overall 30 classes and each class represents a picture shown to subjects. This collection needs extra preprocessing because brain volumes are available with two shapes: 79 × 95 × 68 and 91 × 109 × 91.We conform large volumes to the small ones. Similarly, all experiments are at the two levels of resolution. Classification are summarized in TAB5 and again follow the trend reported earlier. Results of Using ACD-GAN Since our ICW-GAN achieves the best performance in the three databases when we use mixed training data, we compare these best to of ACD-GAN at the low resolution. 2-D projections of brain volumes generated by ACD-GAN are shown in FIG3. In Table 4, we present classification of our models and show that ACD-GAN obtains improved accuracy scores compared to a ICW-GAN for most of the investigated metrics. As a baseline, we also implemented the recent generative model -the 3-D. According to all the presented above, the proposed methods achieve much improved overall better performance on the three datasets. This is true at both levels of resolution for brain images. 
The also outperform a 3-D AC-GAN applied to the same dataset. We also observe that our strategy of mixing real and generated data does not always yield the best . For example, using real data only to train the NN classifier achieves the largest macro F1 score at high resolution in TAB5. However, the proposed approach consistently improves accuracy metrics -which is by far the most popular metric for evaluating multi-class classification performance. Generative models provide a useful tool for understanding the individual variability of brain images. The of this manuscript show -to our knowledge for the first time, that 3-D conditional GANs, in particular our proposed ICW-GAN and ACD-GAN, can generate high quality diverse and task dependent brain images. We hope our inspire additional research on generative models for brain imaging data. Beyond qualitative evaluation, we evaluate quantitative performance by using the generated images as additional training data in a predictive model -mixing synthetic and real data to train classifiers. The show that our synthetic data augmentation can significantly improve classification accuracy -a which may be of independent interest. Future work will focus on additional qualitative evaluation of the generated images by neuroscience experts and exploration of various applications. We also plan to more throughly investigate the trained models to further explore what it may contribute to the science of individual variability in neuroimaging. Finally, we plan to expand our models to combine data across multiple studies -each of which use different labels, by exploring techniques for merging labels based on the underlying cognitive processes BID17.6 SUPPLEMENTARY MATERIAL First, we explain our cross-validation strategy. In Figure 6, the blue bar represents training data and the green bar is test data. For 3-fold cross-validation, first we partition test data to three folds and each fold maintains a similar proportion of data for each class as original test data. For each round of cross-validation, we leverage one of the three parts as test data and left two folds together with training data to train a model. The same strategy is used with generated data to ensure that test performance is only based on the test data. Each evaluation metric is the average of the three rounds. Figure 6: Cartoon illustration of our 3-fold cross validation strategy. We tested our model with various cross-validation settings and calculated the variances of the evaluation metrics (Table 5). We conducted this experiment on low resolution data from collection 1952 and synthetic data generated by the ICW-GAN. Except for the number of folds used when partitioning the test dataset, the training strategy for 5-fold and 10-fold cross-validation is similar to that of 3-fold cross-validation. The small variances suggest that the reported accuracy differences, while small, are indeed significant. Table 5: Accuracy, F1, Precision, Recall and their variance (column 3,5,7,9) for 3-fold, 5-fold and 10-fold cross validation. We conducted this experiment with the training data of mixed'Real+Synth.' data of collection 1952 in the low resolution (4.0×) setup. We present the ICW-GAN training loss curve evaluated on the validation dataset of collection 1952 in FIG6. This illustrates the stability of the proposed improved Wasserstein variants. In order to further evaluate the quality of the generated data for collection 1952, we trained using only generated data and tested with real data. 
We used the deep net classifier for evaluation in this case while varying the number of input samples for each class. FIG5 shows that in general, the test accuracy improves as the amount of artificial training data increases. The red and yellow lines show the accuracy obtained when only using real data. These suggest that the generated data can be as effective for classification as real data with sufficient numbers of samples. Table 6: Results of synthesizing data using collection 1952 (low resolution). Comparison between the GMM and ICW-GAN. We list 6 training data strategies: in the 1st, 2nd, 4th and 5th row, we only use synthetic data to train the deep net classifier while in the 3rd and 6th row, we mix real and synthetic data together to train the same classifier. We also used a Gaussian Mixture Model (GMM), i.e., trained separately for each class, as a simple baseline generative model. The are shown in Table 6, where 6 training strategies were listed and the deep net classifier (same as the NN classifier) was used for all the strategies. I.e., we leveraged 20 synthetic images in each class (20×number of classes in all) to train the NN classifier. Note that both generative models learned the probability distribution from collection 1952. The evaluation scores of our models are significantly higher than the ones obtained with a GMM, particularly when using only synthetic data to train the classifier.6.5 GENERATED IMAGE QUALITATIVE ANALYSIS We used NeuroSynth 3, a brain decoding tool to qualitatively analyze generated volumes. Its principle is to compare activity regions of a given brain volume to a data pool and then to obtain an analysis report about correlation to specific activity patterns. Figure 9 shows a generated brain image of collection 1952 by the ICW-GAN in low resolution and its analysis report with the top 10 ranking list of correlation. The supervised class of this generated brain is "Non-human sound, auditory", and several correlations with top rank are related to auditory and sound. Figure 9: The projections of a generated brain volume (left) and its corresponding analysis report (right). The volume belongs to the class'Non-human sound, auditory.' We also examine our models using a multilabel classification strategy in low resolution data of collection 1952. As described before, one class in collection 1952 consists of several labels and the collection has a total of 19 sub-labels. Instead of encoding the classes with a one-hot vector of 45 dimensions, we encode them in 19 dimensions, each of which represents a sub-label. The value in a dimension is 1 only if the sample possesses that label, otherwise, 0, i.e., a sample comes with a 19 dimensional binary vector. This representation is demanding since the probability space is significantly larger, specifically, 2 19 possibilities, for a classifier to learn and discriminate. We use SVM to train real and mixed training data (synthetic data obtained from ICW-GAN), both of which Table 7: Multilabel for collection 1952 at 4.0× downsampling and synthetic data were from the ICW-GAN. Even in this highly demanding task, accuracy scores with mixed data (the second row) outperform the baseline (the first row).are in the resolution of 13 × 15 × 11, and utilize 3-fold cross-validation as discussed in Section 6.1. Several evaluation scores are shown in Table 7. We observe that even in this highly demanding task, accuracy scores with mixed data (the second row) outperform the baseline (the first row). 
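Returning to the per-class GMM baseline reported in Table 6, a minimal sketch of how such a baseline can be fit and sampled is given below, assuming scikit-learn; the number of mixture components and the diagonal covariance are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_synthesize(train_volumes, train_labels, n_per_class=20, n_components=3):
    """Per-class GMM baseline (sketch): fit one GaussianMixture per class on
    flattened low-resolution volumes, then sample synthetic volumes per class
    for classifier training."""
    X = train_volumes.reshape(len(train_volumes), -1)     # flatten the 3-D volumes
    synth_x, synth_y = [], []
    for c in np.unique(train_labels):
        gmm = GaussianMixture(n_components=n_components, covariance_type='diag')
        gmm.fit(X[train_labels == c])
        samples, _ = gmm.sample(n_per_class)
        synth_x.append(samples.reshape(-1, *train_volumes.shape[1:]))
        synth_y.append(np.full(n_per_class, c))
    return np.concatenate(synth_x), np.concatenate(synth_y)
```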
We present several real images of collection 1952 with their labels above the 2D projections of brain images in FIG0.Figure 10: Real images in collection 1952 at 4.0× downsampling with labels on top. Linear normalization is used and the lower threshold for plotting these images is 0.48. More synthetic brain images using ICW-GAN are shown in this section. FIG0 show generated ones of low resolution from collection 1952, 2138 and 503 respectively. For collection 1952, classes are given above the images. As for collection 2138, the corresponding categories of the classes are as follows:• 14: visual form recognition, feature comparison, response selection, response execution, relational comparison, visual pattern recognition • 22: response execution, working memory, body maintenance, visual body recognition • 4: response selection, response execution, punishment processing • 43: motion detection • 2: response selection, response execution, animacy perception, animacy decision, motion detection • 3: response selection, response execution, motion detection
Two novel GANs are constructed to generate high-quality 3D fMRI brain images and synthetic brain images greatly help to improve downstream classification tasks.
1,730
scitldr
Transferring representations from large-scale supervised tasks to downstream tasks have shown outstanding in Machine Learning in both Computer Vision and natural language processing (NLP). One particular example can be sequence-to-sequence models for Machine Translation (Neural Machine Translation - NMT). It is because, once trained in a multilingual setup, NMT systems can translate between multiple languages and are also capable of performing zero-shot translation between unseen source-target pairs at test time. In this paper, we first investigate if we can extend the zero-shot transfer capability of multilingual NMT systems to cross-lingual NLP tasks (tasks other than MT, e.g. sentiment classification and natural language inference). We demonstrate a simple framework by reusing the encoder from a multilingual NMT system, a multilingual Encoder-Classifier, achieves remarkable zero-shot cross-lingual classification performance, almost out-of-the-box on three downstream benchmark tasks - Amazon Reviews, Stanford sentiment treebank (SST) and Stanford natural language inference (SNLI). In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT models, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks, and the high, out-of-the-box classification performance is correlated with the generalization capability of such systems. Here, we first describe the model and training details of the base multilingual NMT model whose 135 encoder is reused in all other tasks. Then we provide details about the task-specific classifiers. For 136 each task, we provide the specifics of f pre, f pool and f post nets that build the task-specific classifier. All the models in our experiments are trained using the Adam optimizer with label smoothing. Unless otherwise stated below, layer normalization is applied to all LSTM gates and 139 feed-forward layer inputs. We apply L2 regularization to the model weights and dropout to layer 140 activations and sub-word embeddings. Hyper-parameters, such as mixing ratio λ of L2 regularization, 141 dropout rates, label smoothing uncertainty, batch sizes, learning rate of optimizers and initialization 142 ranges of weights are tuned on the development sets provided for each task separately. NMT Models. Our multilingual NMT model consists of a shared multilingual encoder and two 144 decoders, one for English and the other for French. The multilingual encoder uses one bi-directional respectively. We used max-pooling operator for the f pool network to pool activation over time. Multilingual SNLI. We extended the proposed multilingual Encoder-Classifier model to a multi-160 source model since SNLI is an inference task of relations between two input sentences, "premise" and "hypothesis". For the two sources, we use two separate encoders, which are initialized with 162 the same pre-trained multilingual NMT encoder, to obtain their representations. Following our 163 notation, the encoder outputs are processed using f pre, f pool and f post nets, again with two separate 164 network blocks. 
Specifically, f_pre consists of a co-attention layer. The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance. In this section, we try to analyze why our simple multilingual Encoder-Classifier system is effective at zero-shot classification. We perform a series of experiments to better understand this phenomenon. In particular, we study the effect of shared sub-word vocabulary, the amount of multilingual and French, the out-of-vocabulary (OOV) rate for the German test set using our vocabulary is just 0.078%. We design this experiment as a control to understand the effect of having a shared sub-word for the proposed system to model a language agnostic representation (interlingua) which enables it to perform better zero-shot classification. Moreover, it should be noted that the best zero-shot performance is obtained by using the complex classifier and up to layer 3 of the encoder. Although this gap is not big enough to be significant, we hypothesize that the top layer of the encoder could be very specific to the MT task and hence might not be best suited for zero-shot classification. Effect of Early vs Late Phases of the Training. Figure 1 shows that as the number of training steps increases, the test accuracy goes up whereas the test loss on the SNLI task increases slightly, hinting at over-fitting on the English task. As expected, choosing checkpoints which are before the onset of the over-fitting seems to benefit zero-shot performance on the French SNLI test set. This suggests that over-training on the English task might hurt the ability of the model to generalize to a new language and also motivated us to conduct the next set of analysis on checkpoint averaging, which aims to smooth point estimates of the learned parameters by averaging n steps from the training run and using the average for inference. This is aimed at improving generalization and being less susceptible to the effects of over-fitting at inference. We hypothesize that a system with enhanced generalization might be better suited for zero-shot classification since it is a measure of the ability of the model to generalize to a new task. learning multilingual representations from a set of bilingual lexical data. Here we combined the best of both worlds by learning contextualized representations which are multilingual in nature and explored its performance in the zero-shot classification tasks. We demonstrated that using the encoder from a multilingual NMT system as a pre-trained component in other TAB10 summarizes the accuracy of our proposed system for these three different approaches and French tasks, freezing the encoder after initialization significantly improves the performance further. We hypothesize that since the Amazon dataset is a document level classification task, the long input sequences are very different from the short sequences consumed by the NMT system, and hence freezing the encoder seems to have a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder.
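The checkpoint-averaging idea mentioned above (smoothing point estimates by averaging the parameters of the last n training steps and using the average only at inference) can be sketched as follows; the function name and the use of PyTorch state_dicts are assumptions, not the authors' implementation.

```python
import copy
import torch

def average_checkpoints(state_dicts):
    """Average several saved checkpoints (a minimal sketch).

    `state_dicts` is a list of model.state_dict() objects taken from the last
    n training steps; the averaged parameters are loaded only for inference.
    """
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if torch.is_floating_point(avg[key]):
            stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
            avg[key] = stacked.mean(dim=0)
    return avg

# model.load_state_dict(average_checkpoints([torch.load(p) for p in checkpoint_paths]))
```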
Zero-shot cross-lingual transfer by using multilingual neural machine translation
1,731
scitldr
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Discovering state-of-the-art neural network architectures requires substantial effort of human experts. Recently, there has been a growing interest in developing algorithmic solutions to automate the manual process of architecture design. The automatically searched architectures have achieved highly competitive performance in tasks such as image classification BID35 BID36 BID13 a; BID26 and object detection BID36.The best existing architecture search algorithms are computationally demanding despite their remarkable performance. For example, obtaining a state-of-the-art architecture for CIFAR-10 and ImageNet required 2000 GPU days of reinforcement learning (RL) BID36 or 3150 GPU days of evolution BID26. Several approaches for speeding up have been proposed, such as imposing a particular structure of the search space BID13 a), weights or performance prediction for each individual architecture and weight sharing/inheritance across multiple architectures BID0 BID24; ), but the fundamental challenge of scalability remains. An inherent cause of inefficiency for the dominant approaches, e.g. based on RL, evolution, MCTS BID20, SMBO BID12 or Bayesian optimization BID9, is the fact that architecture search is treated as a black-box optimization problem over a discrete domain, which leads to a large number of architecture evaluations required. In this work, we approach the problem from a different angle, and propose a method for efficient architecture search called DARTS (Differentiable ARchiTecture Search). Instead of searching over a discrete set of candidate architectures, we relax the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent. The data efficiency of gradient-based optimization, as opposed to inefficient black-box search, allows DARTS to achieve competitive performance with the state of the art using orders of magnitude less computation resources. It also outperforms another recent efficient architecture search method, ENAS BID24. Notably, DARTS is simpler than many existing approaches as it does not involve controllers BID35; BID36 BID24 BID33, hypernetworks or performance predictors BID12 ), yet it is generic enough handle both convolutional and recurrent architectures. The idea of searching architectures within a continuous domain is not new BID27; BID30 BID28, but there are several major distinctions. While prior works seek to fine-tune a specific aspect of an architecture, such as filter shapes or branching patterns in a convolutional network, DARTS is able to learn high-performance architecture building blocks with complex graph topologies within a rich search space. 
Moreover, DARTS is not restricted to any specific architecture family, and is applicable to both convolutional and recurrent networks. In our experiments (Sect. 3) we show that DARTS is able to design a convolutional cell that achieves 2.76 ± 0.09% test error on CIFAR-10 for image classification using 3.3M parameters, which is competitive with the state-of-the-art by regularized evolution BID26 obtained using three orders of magnitude more computation resources. The same convolutional cell also achieves 26.7% top-1 error when transferred to ImageNet (mobile setting), which is comparable to the best RL method BID36. On the language modeling task, DARTS efficiently discovers a recurrent cell that achieves 55.7 test perplexity on Penn Treebank (PTB), outperforming both extensively tuned LSTM BID17 and all the existing automatically searched cells based on NAS BID35 and ENAS BID24.Our contributions can be summarized as follows:• We introduce a novel algorithm for differentiable network architecture search based on bilevel optimization, which is applicable to both convolutional and recurrent architectures.• Through extensive experiments on image classification and language modeling tasks we show that gradient-based architecture search achieves highly competitive on CIFAR-10 and outperforms the state of the art on PTB. This is a very interesting , considering that so far the best architecture search methods used non-differentiable search techniques, e.g. based on RL BID36 or evolution BID26 BID13 ).• We achieve remarkable efficiency improvement (reducing the cost of architecture discovery to a few GPU days), which we attribute to the use of gradient-based optimization as opposed to non-differentiable search techniques.• We show that the architectures learned by DARTS on CIFAR-10 and PTB are transferable to ImageNet and WikiText-2, respectively. The implementation of DARTS is available at https://github.com/quark0/darts 2 DIFFERENTIABLE ARCHITECTURE SEARCH We describe our search space in general form in Sect. 2.1, where the computation procedure for an architecture (or a cell in it) is represented as a directed acyclic graph. We then introduce a simple continuous relaxation scheme for our search space which leads to a differentiable learning objective for the joint optimization of the architecture and its weights (Sect. 2.2). Finally, we propose an approximation technique to make the algorithm computationally feasible and efficient (Sect. 2.3). Following BID36; BID26; BID12 BID25, we search for a computation cell as the building block of the final architecture. The learned cell could either be stacked to form a convolutional network or recursively connected to form a recurrent network. A cell is a directed acyclic graph consisting of an ordered sequence of N nodes. Each node x (i) is a latent representation (e.g. a feature map in convolutional networks) and each directed edge (i, j) is associated with some operation o (i,j) that transforms x (i). We assume the cell to have two input nodes and a single output node. For convolutional cells, the input nodes are defined as the cell outputs in the previous two layers BID36. For recurrent cells, these are defined as the input at the current step and the state carried from the previous step. The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to all the intermediate nodes. 
Each intermediate node is computed based on all of its predecessors: x^(j) = Σ_{i<j} o^(i,j)(x^(i)). A special zero operation is also included to indicate a lack of connection between two nodes. The task of learning the cell therefore reduces to learning the operations on its edges. Let O be a set of candidate operations (e.g., convolution, max pooling, zero) where each operation represents some function o(·) to be applied to x^(i). To make the search space continuous, we relax the categorical choice of a particular operation to a softmax over all possible operations: ō^(i,j)(x) = Σ_{o∈O} [exp(α_o^(i,j)) / Σ_{o'∈O} exp(α_{o'}^(i,j))] · o(x), where the operation mixing weights for a pair of nodes (i, j) are parameterized by a vector α^(i,j) of dimension |O|. The task of architecture search then reduces to learning a set of continuous variables α = {α^(i,j)}, as illustrated in FIG0. At the end of search, a discrete architecture can be obtained by replacing each mixed operation ō^(i,j) with the most likely operation, i.e., o^(i,j) = argmax_{o∈O} α_o^(i,j). In the following, we refer to α as the (encoding of the) architecture. After relaxation, our goal is to jointly learn the architecture α and the weights w within all the mixed operations (e.g. weights of the convolution filters). Analogous to architecture search using RL BID35 BID36 BID24 or evolution BID13 BID26 where the validation set performance is treated as the reward or fitness, DARTS aims to optimize the validation loss, but using gradient descent. Denote by L_train and L_val the training and the validation loss, respectively. Both losses are determined not only by the architecture α, but also the weights w in the network. The goal for architecture search is to find α* that minimizes the validation loss L_val(w*, α*), where the weights w* associated with the architecture are obtained by minimizing the training loss w* = argmin_w L_train(w, α*). This implies a bilevel optimization problem with α as the upper-level variable and w as the lower-level variable: min_α L_val(w*(α), α) (eq. 3) s.t. w*(α) = argmin_w L_train(w, α) (eq. 4). The nested formulation also arises in gradient-based hyperparameter optimization BID16 BID22 BID2, which is related in a sense that the architecture α could be viewed as a special type of hyperparameter, although its dimension is substantially higher than scalar-valued hyperparameters such as the learning rate, and it is harder to optimize. Algorithm 1 (DARTS): create a mixed operation ō^(i,j) parameterized by α^(i,j) for each edge (i, j); while not converged: 1. Update architecture α by descending ∇_α L_val(w − ξ∇_w L_train(w, α), α) (ξ = 0 if using first-order approximation); 2. Update weights w by descending ∇_w L_train(w, α); then derive the final architecture based on the learned α. Evaluating the architecture gradient exactly can be prohibitive due to the expensive inner optimization. We therefore propose a simple approximation scheme as follows: ∇_α L_val(w*(α), α) ≈ ∇_α L_val(w − ξ∇_w L_train(w, α), α) (eq. 6), where w denotes the current weights maintained by the algorithm, and ξ is the learning rate for a step of inner optimization. The idea is to approximate w*(α) by adapting w using only a single training step, without solving the inner optimization (equation 4) completely by training until convergence. Related techniques have been used in meta-learning for model transfer BID1, gradient-based hyperparameter tuning BID15 and unrolled generative adversarial networks BID19. Note equation 6 will reduce to ∇_α L_val(w, α) if w is already a local optimum for the inner optimization and thus ∇_w L_train(w, α) = 0. The iterative procedure is outlined in Alg. 1. While we are not currently aware of the convergence guarantees for our optimization algorithm, in practice it is able to reach a fixed point with a suitable choice of ξ.¹
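A minimal sketch of the two ingredients just described may help: the softmax-mixed edge operation, and one iteration of the alternating updates of Algorithm 1 in its first-order form (ξ = 0). The second-order variant would additionally differentiate the validation loss through the one-step unrolled weights; that part is omitted here. Class and function names are ours, and this is not the released DARTS implementation linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation: the edge output is a softmax-weighted sum of candidate ops."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)                              # candidate operations o(.)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))    # architecture logits alpha^(i,j)

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)                     # mixing weights over O
        return sum(w * op(x) for w, op in zip(weights, self.ops))

def first_order_darts_step(model, w_optimizer, alpha_optimizer,
                           train_batch, val_batch, loss_fn):
    """One iteration of the alternating scheme with xi = 0 (first-order approximation):
    alpha descends the validation loss, then w descends the training loss."""
    xs_val, ys_val = val_batch
    alpha_optimizer.zero_grad()
    loss_fn(model(xs_val), ys_val).backward()      # grad_alpha L_val(w, alpha)
    alpha_optimizer.step()

    xs_tr, ys_tr = train_batch
    w_optimizer.zero_grad()
    loss_fn(model(xs_tr), ys_tr).backward()        # grad_w L_train(w, alpha)
    w_optimizer.step()
```

Here `alpha_optimizer` is built only over the `alpha` parameters and `w_optimizer` only over the ordinary weights, so each `step()` touches the intended variable group.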
We also note that when momentum is enabled for weight optimisation, the one-step unrolled learning objective in equation 6 is modified accordingly and all of our analysis still applies. Applying the chain rule to the approximate architecture gradient (equation 6) yields ∇_α L_val(w', α) − ξ ∇²_{α,w} L_train(w, α) ∇_{w'} L_val(w', α) (eq. 7), where w' = w − ξ∇_w L_train(w, α) denotes the weights for a one-step forward model. The expression above contains an expensive matrix-vector product in its second term. Fortunately, the complexity can be substantially reduced using the finite difference approximation. Let ε be a small scalar² and w± = w ± ε∇_{w'} L_val(w', α). Then ∇²_{α,w} L_train(w, α) ∇_{w'} L_val(w', α) ≈ [∇_α L_train(w+, α) − ∇_α L_train(w−, α)] / (2ε). Evaluating the finite difference requires only two forward passes for the weights and two backward passes for α, and the complexity is reduced from O(|α||w|) to O(|α| + |w|). First-order Approximation. When ξ = 0, the second-order derivative in equation 7 will disappear. In this case, the architecture gradient is given by ∇_α L_val(w, α), corresponding to the simple heuristic of optimizing the validation loss by assuming the current w is the same as w*(α). This leads to some speed-up but empirically worse performance, according to our experimental results in TAB0. In the following, we refer to the case of ξ = 0 as the first-order approximation, and refer to the gradient formulation with ξ > 0 as the second-order approximation. To form each node in the discrete architecture, we retain the top-k strongest operations (from distinct nodes) among all non-zero candidate operations collected from all the previous nodes. The strength of an operation is defined as exp(α_o^(i,j)) / Σ_{o'∈O} exp(α_{o'}^(i,j)). To make our derived architecture comparable with those in the existing works, we use k = 2 for convolutional cells BID36 BID12 BID26 and k = 1 for recurrent cells BID24. ¹ A simple working strategy is to set ξ equal to the learning rate for w's optimizer. ² We found ε = 0.01/‖∇_{w'} L_val(w', α)‖₂ to be sufficiently accurate in all of our experiments. [Figure caption: the analytical solution (α*, w*) of the corresponding bilevel optimization problem is highlighted in the red circle. The dashed red line indicates the feasible set where constraint equation 4 is satisfied exactly (namely, weights in w are optimal for the given architecture α). The example shows that a suitable choice of ξ helps to converge to a better local optimum.] The zero operations are excluded in the above for two reasons. First, we need exactly k non-zero incoming edges per node for fair comparison with the existing models. Second, the strength of the zero operations is underdetermined, as increasing the logits of zero operations only affects the scale of the resulting node representations, and does not affect the final classification outcome due to the presence of batch normalization BID8. Our experiments on CIFAR-10 and PTB consist of two stages, architecture search (Sect. 3.1) and architecture evaluation (Sect. 3.2). In the first stage, we search for the cell architectures using DARTS, and determine the best cells based on their validation performance. In the second stage, we use these cells to construct larger architectures, which we train from scratch and report their performance on the test set. We also investigate the transferability of the best cells learned on CIFAR-10 and PTB by evaluating them on ImageNet and WikiText-2 (WT2) respectively. We include the following operations in O: 3 × 3 and 5 × 5 separable convolutions, 3 × 3 and 5 × 5 dilated separable convolutions, 3 × 3 max pooling, 3 × 3 average pooling, identity, and zero.
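The finite-difference trick for the second term of equation 7 can be sketched directly; variable names and the callable train_loss_fn (which recomputes L_train with the current parameter values) are our assumptions rather than the authors' code.

```python
import torch

def hessian_vector_product(train_loss_fn, weights, alphas, vector, r=1e-2):
    """Finite-difference approximation of grad^2_{alpha,w} L_train(w, alpha) . vector.

    `vector` is grad_{w'} L_val(w', alpha); eps follows the recipe in footnote 2.
    """
    eps = r / torch.cat([v.reshape(-1) for v in vector]).norm()

    def alpha_grads():
        return torch.autograd.grad(train_loss_fn(), alphas)

    with torch.no_grad():                      # w+ = w + eps * vector
        for w, v in zip(weights, vector):
            w += eps * v
    grads_pos = alpha_grads()

    with torch.no_grad():                      # w- = w - eps * vector
        for w, v in zip(weights, vector):
            w -= 2 * eps * v
    grads_neg = alpha_grads()

    with torch.no_grad():                      # restore the original weights
        for w, v in zip(weights, vector):
            w += eps * v

    return [(gp - gn) / (2 * eps) for gp, gn in zip(grads_pos, grads_neg)]
```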
All operations are of stride one (if applicable) and the convolved feature maps are padded to preserve their spatial resolution. We use the ReLU-Conv-BN order for convolutional operations, and each separable convolution is always applied twice BID36 BID26 BID12.Our convolutional cell consists of N = 7 nodes, among which the output node is defined as the depthwise concatenation of all the intermediate nodes (input nodes excluded). The rest of the setup follows BID36; BID12 BID26, where a network is then formed by stacking multiple cells together. The first and second nodes of cell k are set equal to the outputs of cell k − 2 and cell k − 1, respectively, and 1 × 1 convolutions are inserted as necessary. Cells located at the 1/3 and 2/3 of the total depth of the network are reduction cells, in which all the operations adjacent to the input nodes are of stride two. The architecture encoding therefore is (α normal, α reduce), where α normal is shared by all the normal cells and α reduce is shared by all the reduction cells. Detailed experimental setup for this section can be found in Sect. A.1.1. Our set of available operations includes linear transformations followed by one of tanh, relu, sigmoid activations, as well as the identity mapping and the zero operation. The choice of these candidate operations follows BID35; BID24. Each architecture snapshot is re-trained from scratch using the training set (for 100 epochs on CIFAR-10 and for 300 epochs on PTB) and then evaluated on the validation set. For each task, we repeat the experiments for 4 times with different random seeds, and report the median and the best (per run) validation performance of the architectures over time. As references, we also report the (under the same evaluation setup; with comparable number of parameters) of the best existing cells discovered using RL or evolution, including NASNet-A (2000 GPU days), AmoebaNet-A (3150 GPU days) BID26 and ENAS (0.5 GPU day) BID24.function, as done in the ENAS cell BID24. The rest of the cell is learned. Other settings are similar to ENAS, where each operation is enhanced with a highway bypass BID34 and the cell output is defined as the average of all the intermediate nodes. As in ENAS, we enable batch normalization in each node to prevent gradient explosion during architecture search, and disable it during architecture evaluation. Our recurrent network consists of only a single cell, i.e. we do not assume any repetitive patterns within the recurrent architecture. Detailed experimental setup for this section can be found in Sect. A.1.2. To determine the architecture for final evaluation, we run DARTS four times with different random seeds and pick the best cell based on its validation performance obtained by training from scratch for a short period (100 epochs on CIFAR-10 and 300 epochs on PTB). This is particularly important for recurrent cells, as the optimization outcomes can be initialization-sensitive FIG2.To evaluate the selected architecture, we randomly initialize its weights (weights learned during the search process are discarded), train it from scratch, and report its performance on the test set. We note the test set is never used for architecture search or architecture selection. Detailed experimental setup for architecture evaluation on CIFAR-10 and PTB can be found in Sect. A.2.1 and Sect. A.2.2, respectively. 
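The rule for deriving the discrete architecture stated earlier (for each node, retain the top-k strongest non-zero operations among edges from previous nodes, with strength given by the softmax weight) might look as follows; the data layout chosen for the architecture encoding is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def derive_discrete_cell(alphas, op_names, k=2, zero_name="zero"):
    """Derive a discrete cell from the learned architecture encoding (a sketch).

    `alphas[(i, j)]` is the logit vector alpha^(i,j) for edge i -> j; for every node j
    we keep the k incoming edges whose strongest non-zero operation has the highest
    softmax weight.
    """
    nodes = sorted({j for (_, j) in alphas})
    cell = {}
    for j in nodes:
        candidates = []
        for (i, jj), logits in alphas.items():
            if jj != j:
                continue
            probs = F.softmax(logits, dim=0)
            best_strength, best_op = -1.0, None
            for name, p in zip(op_names, probs.tolist()):
                if name != zero_name and p > best_strength:
                    best_strength, best_op = p, name
            candidates.append((best_strength, i, best_op))
        # keep the top-k strongest incoming edges, each from a distinct predecessor
        cell[j] = [(i, op) for _, i, op in sorted(candidates, reverse=True)[:k]]
    return cell
```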
Besides CIFAR-10 and PTB, we further investigated the transferability of our best convolutional cell (searched on CIFAR-10) and recurrent cell (searched on PTB) by evaluating them on ImageNet (mobile setting) and WikiText-2, respectively. More details of the transfer learning experiments can be found in Sect. A.2.3 and Sect. A.2.4. [Table 1: comparison with state-of-the-art image classifiers on CIFAR-10 (test error %, params in M, search cost in GPU days, search method). Recoverable rows include: AmoebaNet-A BID26 3.34 ± 0.06 / 3.2 / 3150 / evolution; AmoebaNet-A + cutout † 3.12 / 3.1 / 3150 / evolution; AmoebaNet-B + cutout 2.55 ± 0.05 / 2.8 / 3150 / evolution; Hierarchical evolution BID13 3.75 ± 0.12 / 15.7 / 300 / evolution; PNAS BID12 3.41 ± 0.09 / 3.2 / 225 / SMBO; ENAS + cutout BID24 2.89 / 4.6 / 0.5 / RL; DARTS (second order) + cutout 2.76 ± 0.09 / 3.3 / 4 / gradient-based. * Obtained by repeating ENAS for 8 times using the code publicly released by the authors; the cell for final evaluation is chosen according to the same selection protocol as for DARTS. † Obtained by training the corresponding architectures using our setup. ‡ Best architecture among 24 samples according to the validation error after 100 training epochs.] [Table 2: comparison with state-of-the-art language models on PTB (lower perplexity is better). Note the search cost for DARTS does not include the selection cost (1 GPU day) or the final evaluation cost by training the selected architecture from scratch (3 GPU days). Recoverable rows include: LSTM BID18 60.7 / 58.8 / 24M (manual); LSTM + skip connections BID17 60.9 / 58.3 / 24M (manual); LSTM + 15 softmax experts BID31. † Obtained by training the corresponding architecture using our setup. ‡ Best architecture among 8 samples according to the validation perplexity after 300 training epochs.] [Table 3: ImageNet (mobile setting) comparison. Recoverable rows include: NASNet-C BID36 27.5 top-1 error / 2000 GPU days / RL; AmoebaNet-A BID26 25.5 / 3150 / evolution; AmoebaNet-B BID26 26.0 / 3150 / evolution; AmoebaNet-C BID26 24.3 / 3150 / evolution; PNAS BID12.] The CIFAR-10 results for convolutional architectures are presented in Table 1. Notably, DARTS achieved comparable results with the state of the art BID36 BID26 while using three orders of magnitude less computation resources (i.e. 1.5 or 4 GPU days vs 2000 GPU days for NASNet and 3150 GPU days for AmoebaNet). Moreover, with slightly longer search time, DARTS outperformed ENAS BID24 by discovering cells with comparable error rates but fewer parameters. The longer search time is due to the fact that we have repeated the search process four times for cell selection. This practice is less important for convolutional cells, however, because the performance of discovered architectures does not strongly depend on initialization (FIG2). Alternative Optimization Strategies. To better understand the necessity of bilevel optimization, we investigated a simplistic search strategy, where α and w are jointly optimized over the union of the training and validation sets using coordinate descent. The resulting best convolutional cell (out of 4 runs) yielded 4.16 ± 0.16% test error using 3.1M parameters, which is worse than random search. In the second experiment, we optimized α simultaneously with w (without alteration) using SGD, again over all the data available (training + validation). The resulting best cell yielded 3.56 ± 0.10% test error using 3.0M parameters. We hypothesize that these heuristics would cause α (analogous to hyperparameters) to overfit the training data, leading to poor generalization. Note that α is not directly optimized on the training set in DARTS.
Table 2 presents the for recurrent architectures on PTB, where a cell discovered by DARTS achieved the test perplexity of 55.7. This is on par with the state-of-the-art model enhanced by a mixture of softmaxes BID31, and better than all the rest of the architectures that are either manually or automatically discovered. Note that our automatically searched cell outperforms the extensively tuned LSTM BID17, demonstrating the importance of architecture search in addition to hyperparameter search. In terms of efficiency, the overall cost (4 runs in total) is within 1 GPU day, which is comparable to ENAS and significantly faster than NAS BID35.It is also interesting to note that random search is competitive for both convolutional and recurrent models, which reflects the importance of the search space design. Nevertheless, with comparable or less search cost, DARTS is able to significantly improve upon random search in both cases (2.76 ± 0.09 vs 3.29 ± 0.15 on CIFAR-10; 55.7 vs 59.4 on PTB). TAB4 show that the cell learned on CIFAR-10 is indeed transferable to ImageNet. It is worth noticing that DARTS achieves competitive performance with the state-of-the-art RL method BID36 while using three orders of magnitude less computation resources. TAB6 shows that the cell identified by DARTS transfers to WT2 better than ENAS, although the overall are less strong than those presented in Table 2 for PTB. The weaker transferability between PTB and WT2 (as compared to that between CIFAR-10 and ImageNet) could be explained by the relatively small size of the source dataset (PTB) for architecture search. The issue of transferability could potentially be circumvented by directly optimizing the architecture on the task of interest. BID4 -68.9 --manual LSTM BID18 69.1 66.0 33 -manual LSTM + skip connections BID17 69.1 65.9 24 -manual LSTM + 15 softmax experts BID31 66 We presented DARTS, a simple yet efficient architecture search algorithm for both convolutional and recurrent networks. By searching in a continuous space, DARTS is able to match or outperform the state-of-the-art non-differentiable architecture search methods on image classification and language modeling tasks with remarkable efficiency improvement by several orders of magnitude. There are many interesting directions to improve DARTS further. For example, the current method may suffer from discrepancies between the continuous architecture encoding and the derived discrete architecture. This could be alleviated, e.g., by annealing the softmax temperature (with a suitable schedule) to enforce one-hot selection. It would also be interesting to explore performance-aware architecture derivation schemes based on the one-shot model learned during the search process. A EXPERIMENTAL DETAILS A.1 ARCHITECTURE SEARCH A.1.1 CIFAR-10Since the architecture will be varying throughout the search process, we always use batch-specific statistics for batch normalization rather than the global moving average. Learnable affine parameters in all batch normalizations are disabled during the search process to avoid rescaling the outputs of the candidate operations. To carry out architecture search, we hold out half of the CIFAR-10 training data as the validation set. A small network of 8 cells is trained using DARTS for 50 epochs, with batch size 64 (for both the training and validation sets) and the initial number of channels 16. The numbers were chosen to ensure the network can fit into a single GPU. 
We use momentum SGD to optimize the weights w, with initial learning rate η w = 0.025 (annealed down to zero following a cosine schedule without restart BID14), momentum 0.9, and weight decay 3 × 10 −4. We use zero initialization for architecture variables (the α's in both the normal and reduction cells), which implies equal amount of attention (after taking the softmax) over all possible ops. At the early stage this ensures weights in every candidate op to receive sufficient learning signal (more exploration). We use Adam BID10 as the optimizer for α, with initial learning rate η α = 3 × 10 −4, momentum β = (0.5, 0.999) and weight decay 10 −3. The search takes one day on a single GPU 3.A.1.2 PTB For architecture search, both the embedding and the hidden sizes are set to 300. The linear transformation parameters across all incoming operations connected to the same node are shared (their shapes are all 300 × 300), as the algorithm always has the option to focus on one of the predecessors and mask away the others. Tying the weights leads to memory savings and faster computation, allowing us to train the continuous architecture using a single GPU. Learnable affine parameters in batch normalizations are disabled, as we did for convolutional cells. The network is then trained for 50 epochs using SGD without momentum, with learning rate η w = 20, batch size 256, BPTT length 35, and weight decay 5 × 10 −7. We apply variational dropout BID3 of 0.2 to word embeddings, 0.75 to the cell input, and 0.25 to all the hidden nodes. A dropout of 0.75 is also applied to the output layer. Other training settings are identical to those in BID18; BID31. Similarly to the convolutional architectures, we use Adam for the optimization of α (initialized as zeros), with initial learning rate η α = 3 × 10 −3, momentum β = (0.9, 0.999) and weight decay 10 −3. The search takes 6 hours on a single GPU. A large network of 20 cells is trained for 600 epochs with batch size 96. The initial number of channels is increased from 16 to 36 to ensure our model size is comparable with other baselines in the literature (around 3M). Other hyperparameters remain the same as the ones used for architecture search. Following existing works BID24 BID36 BID12 BID26, additional enhancements include cutout , path dropout of probability 0.2 and auxiliary towers with weight 0.4. The training takes 1.5 days on a single GPU with our implementation in PyTorch BID21. Since the CIFAR are subject to high variance even with exactly the same setup BID13, we report the mean and standard deviation of 10 independent runs for our full model. To avoid any discrepancy between different implementations or training settings (e.g. the batch sizes), we incorporated the NASNet-A cell BID36 and the AmoebaNet-A cell BID26 into our training framework and reported their under the same settings as our cells.
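For concreteness, the two optimizers quoted above can be assembled as in the sketch below; the helper arguments that separate weight parameters from architecture parameters are assumed to be supplied by the caller, and this is not the authors' training script.

```python
import torch

def build_search_optimizers(weight_params, arch_params, epochs=50):
    """Optimizers for the search phase, using the hyper-parameters quoted above (a sketch)."""
    # momentum SGD with cosine-annealed learning rate for the weights w
    w_opt = torch.optim.SGD(weight_params, lr=0.025, momentum=0.9, weight_decay=3e-4)
    w_sched = torch.optim.lr_scheduler.CosineAnnealingLR(w_opt, T_max=epochs, eta_min=0.0)
    # Adam for the architecture variables alpha
    a_opt = torch.optim.Adam(arch_params, lr=3e-4, betas=(0.5, 0.999), weight_decay=1e-3)
    return w_opt, w_sched, a_opt
```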
We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation resources.
1,732
scitldr
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our show that maximization is an inappropriate decoding objective for open-ended text generation, the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text. On February 14th 2019, OpenAI surprised the scientific community with an impressively highquality article about Ovid's Unicorn, written by GPT-2. 1 Notably, the top-quality generations obtained from the model rely on randomness in the decoding method, in particular through top-k sampling that samples the next word from the top k most probable choices (; ;), instead of aiming to decode text that maximizes likelihood. In fact, decoding strategies that optimize for output with high probability, such as beam search, lead to text that is incredibly degenerate, even when using state-of-the-art models such as GPT-2-117M, as shown in Figure 1. This may seem counter-intuitive, as one would expect that good models would assign higher probability to more human-like, grammatical text. Indeed, language models do generally assign high scores to well-formed text, yet the highest scores for longer texts are often generic, repetitive, and awkward. Perhaps equally surprising is the right side of Figure 1, which shows that pure sampling -sampling directly from the probabilities predicted by the model - in text that is incoherent and almost unrelated to the context. Why is text produced by pure sampling so degenerate? In this work we show that the "unreliable tail" is to blame. This unreliable tail is composed of tens of thousands of candidate tokens with relatively low probability that are over-represented in the aggregate. To overcome these shortcomings we introduce Nucleus Sampling (§3.1). The key intuition of Nucleus Sampling is that the vast majority of probability mass at each time step is concentrated in the nucleus, a small subset of the vocabulary that tends to range between one and a thousand candidates. 
Instead of relying on a fixed top-k, or using a temperature parameter to control the shape of the distribution without sufficiently suppressing the unreliable tail distribution, we propose sampling from the top-p portion of the probability mass, expanding and contracting the candidate pool dynamically. In order to compare current methods to Nucleus Sampling, we compare various distributional properties of generated text to the reference distribution, such as the likelihood of veering into repetition and the perplexity of generated text. The latter reveals that text generated by maximization or top-k sampling is too probable, indicating a lack of diversity and divergence in vocabulary usage from the human distribution. On the other hand, pure sampling produces text that is significantly less likely than the gold, corresponding to lower generation quality. Vocabulary usage and Self-BLEU statistics reveal that high values of k are needed to make top-k sampling match human statistics. Yet, generations based on high values of k are also found to have incredibly high variance in likelihood, hinting at qualitatively observable incoherency issues. Nucleus Sampling can easily match reference perplexity through a proper value of p, without facing the resulting incoherence caused by setting k high enough to match distributional statistics. Finally, we perform Human Unified with Statistical Evaluation (HUSE;) to jointly assess the overall quality and diversity of the decoding strategies, which cannot be captured using either human or automatic evaluation alone. The HUSE evaluation demonstrates that Nucleus Sampling is the best overall decoding strategy. We include generated examples for qualitative analysis - see Figure 9 for a representative example, and further examples in the appendix. A number of recent works have alluded to the disadvantages of generation by maximization, which tends to generate output with high grammaticality but low diversity (; ;). Generative Adversarial Networks (GANs) have been a prominent research direction, but recent work has shown that when quality and diversity are considered jointly, GAN-generated text fails to outperform generations from language models (; ;). Work on neural dialog systems has proposed methods for diverse beam search, using a task-specific diversity scoring function or constraining beam hypotheses to be sufficiently different (a; ; ;). While such utility functions encourage desirable properties in generations, they do not remove the need to choose an appropriate decoding strategy, and we believe that Nucleus Sampling will have complementary advantages in such approaches. Many text generation tasks are defined through (input, output) pairs, such that the output is a constrained transformation of the input. Example applications include machine translation, data-to-text generation, and summarization (Nallapati et al., 2016). We refer to these tasks as directed generation. Typically encoder-decoder architectures are used, often with attention mechanisms or using attention-based architectures such as the Transformer. Generation is usually performed using beam search; since output is tightly scoped by the input, repetition and genericness are not as problematic. Still, similar issues have been reported when using large beam sizes, a counter-intuitive observation since large beam sizes help maximize probability. [Figure 2 shows a WebText passage about stranded whales used as the conditioning context, together with continuations produced by the decoding strategies compared in this paper (beam search, pure sampling, sampling with temperature t=0.9, top-k sampling, and Nucleus Sampling). Caption: Example generations from an initial sentence. Maximization and top-k truncation methods lead to copious repetition (highlighted in blue), while sampling with and without temperature tends to lead to incoherence (highlighted in red). Nucleus Sampling largely avoids both issues. All generations for all hyperparameters will be made publicly available, along with our codebase.] Open-ended generation, which includes conditional story generation and contextual text continuation (as in Figure 1), has recently become a promising research direction due to significant advances in neural language models (; ; ; ;).
While the input context restricts the space of acceptable output generations, there is a considerable degree of freedom in what can plausibly come next, unlike in directed generation settings. Our work addresses the challenges faced by neural text generation with this increased level of freedom, but we note that some tasks, such as goal-oriented dialog, may fall somewhere in between open-ended and directed generation. Figure 3: The probability of a repeated phrase increases with each repetition, creating a positive feedback loop. We found this effect to hold for the vast majority of phrases we tested, regardless of phrase length or if the phrases were sampled randomly rather than taken from human text. Given an input text passage as context, the task of open-ended generation is to generate text that forms a coherent continuation from the given context. More formally, given a sequence of m tokens x 1... x m as context, the task is to generate the next n continuation tokens to obtain the completed sequence x 1... x m+n. We assume that models compute P (x 1:m+n) using the common left-to-right decomposition of the text probability, which is used to generate the generation token-by-token using a particular decoding strategy. Maximization-based decoding The most commonly used decoding objective, in particular for directed generation, is maximization-based decoding. Assuming that the model assigns higher probability to higher quality text, these decoding strategies search for the continuation with the highest likelihood. Since finding the optimum argmax sequence from recurrent neural language models or Transformers is not tractable , common practice is to use beam search (b; ;). However, several recent studies on open-ended generation have reported that maximization-based decoding does not lead to high quality text . We propose a new stochastic decoding method: Nucleus Sampling. The key idea is to use the shape of the probability distribution to determine the set of tokens to be sampled from. Given a distribution P (x|x 1:i−1), we define its top-p vocabulary V (p) ⊂ V as the smallest set such that Let p = x∈V (p) P (x|x 1:i−1). The original distribution is re-scaled to a new distribution, from which the next word is sampled: In practice this means selecting the highest probability tokens whose cumulative probability mass exceeds the pre-chosen threshold p. The size of the sampling set will adjust dynamically based on the shape of the probability distribution at each time step. For high values of p, it is a small subset of vocabulary that takes up most of the probability mass -the nucleus. Figure 4: The probability mass assigned to partial human sentences. Flat distributions lead to many moderately probable tokens, while peaked distributions concentrate most probability mass into just a few tokens. The presence of flat distributions makes the use of a small k in top-k sampling problematic, while the presence of peaked distributions makes large k problematic. Top-k Sampling has recently become a popular alternative sampling procedure (; ;). Nucleus Sampling and top-k both sample from truncated Neural LM distributions, differing only in the strategy of where to truncate. Choosing where to truncate can be interpreted as determining the generative model's trustworthy prediction zone. At each time step, the top k possible next tokens are sampled from according to their relative probabilities. 
Formally, given a distribution P (x|x 1:i−1), we define its top-k vocabulary V (k) ⊂ V as the set of size k which maximizes x∈V (k) P (x|x 1:i−1). Let p = x∈V (k) P (x|x 1:i−1). The distribution is then re-scaled as in equation 3, and sampling is performed based on that distribution. Note that the scaling factor p will be different at each time-step, in contrast to Nucleus Sampling. Difficulty in choosing a suitable value of k While top-k sampling leads to considerably higher quality text than either beam search or sampling from the full distribution, the use of a constant k is sub-optimal across varying contexts. As illustrated on the left of Figure 4, in some contexts the head of the next word distribution can be flat across tens or hundreds of reasonable options (e.g. nouns or verbs in generic contexts), while in other contexts most of the probability mass is concentrated in one or a small number of tokens, as on the right of the figure. Therefore if k is small, in some contexts there is a risk of generating bland or generic text, while if k is large the top-k vocabulary will include inappropriate candidates which will have their probability of being sampled increased through the truncation. Under Nucleus Sampling, the number of candidates considered rises and falls dynamically, corresponding to the changes in the model's confidence region over the vocabulary which top-k sampling fails to capture for any one choice of k. Another common approach to sampling-based generation is to shape a probability distribution through temperature . Temperature sampling has been applied widely to text generation (; ;). Given the logits u 1:|V | and temperature t, the softmax is re-estimated as Setting t ∈ shapes the distribution to be more skewed towards high probability events, which implicitly lowers the mass in the tail distribution. Low temperature sampling has also been used to partially alleviate the issues of top-k sampling discussed above, by shaping the distribution before top-k sampling . However, recent analysis has shown that, while lowering the temperature improves generation quality, it comes at the cost of decreasing diversity (; has been the most successful in the extremely large-scale training setups in recent literature (; . In this study we use the Generative Pre-trained Transformer, version 2 , which was trained on WebText, a 40GB collection of text scraped from the web. We perform experiments using the Large model (762M parameters). Our analysis is based on generating 5,000 text passages, which end upon reaching an end-of-document token or a maximum length of 200 tokens. We consider a conditional generation setting, in which texts are conditioned on the initial paragraph (restricted to 1-40 tokens) of documents in the validation portion of WebText. Our first evaluation is to compute the perplexity of generated text using various decoding strategies, according to the model that is being generated from. We compare these perplexities against that of the gold text (Figure 5). Importantly, we argue that the optimal generation strategy should produce text which has a perplexity close to that of the gold text: Even though the model has the ability to generate text that has lower perplexity (higher probability), such text tends to be low-diverstiy and get stuck in repetition loops, as shown in §5 and illustrated in Figure 3. We see that perplexity of text obtained from pure sampling (sampling with t = 1) is actually worse than the perplexity of the gold. 
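The three truncation schemes compared here (top-p, top-k, temperature) can be summarized in one sampling routine; the function below is a sketch over a single next-token distribution, and the helper name and tensor shapes are our assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def truncate_and_sample(logits, top_p=None, top_k=None, temperature=1.0):
    """Sample the next token id from a truncated softmax distribution (a sketch).

    top_p: Nucleus Sampling -- keep the smallest prefix of the sorted vocabulary
           whose cumulative probability reaches p, then renormalize.
    top_k: keep the k most probable tokens, then renormalize.
    temperature: rescale logits before the softmax.
    `logits` is a 1-D tensor of shape (vocab_size,).
    """
    logits = logits / temperature
    probs = F.softmax(logits, dim=-1)

    if top_k is not None:
        topk_probs, topk_idx = probs.topk(top_k)
        probs = torch.zeros_like(probs).scatter_(-1, topk_idx, topk_probs)

    if top_p is not None:
        sorted_probs, sorted_idx = probs.sort(descending=True)
        cumulative = sorted_probs.cumsum(dim=-1)
        # smallest prefix whose cumulative mass reaches top_p (the "nucleus")
        cut = int(torch.searchsorted(cumulative, torch.tensor(top_p)).item()) + 1
        mask = torch.zeros_like(probs)
        mask.scatter_(-1, sorted_idx[:cut], 1.0)
        probs = probs * mask

    probs = probs / probs.sum()                       # re-scale as in equation 3
    return int(torch.multinomial(probs, num_samples=1).item())
```

Passing neither top_p nor top_k with temperature 1.0 reduces the routine to pure sampling, which makes the comparison between the strategies easy to reproduce.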
This indicates that the model is actually confusing itself: sampling too many unlikely tokens and creating context that makes it difficult to recover the human distribution of text, as happens in Figure 1. Yet setting the temperature lower creates diversity repetition issues, as we shall see in §5. Interestingly, even with the relatively fine-grained parameter sweep conducted, Nucleus Sampling gets closer perplexity in both the conditional and under conditional case, than any other method, as shown in Table 1. One might wonder if the issue with maximization search error is a, i.e., there are higher quality sentences to which the model assigns higher probability than to the decoded ones, beam search has just failed to find them. Yet Figure 5 shows that the per-token probability of natural text is, on average, much lower than text generated by beam search. Natural language rarely remains in a high probability zone for multiple consecutive time steps, instead veering into lower-probability but Note that beam search has unnaturally low perplexities. A similar effect is seen using a temperature of 0.7 with top-k as in both and?. Sampling, Top-k, and Nucleus can all be calibrated to human perplexities, but we shall see the first two face coherency issues when their parameters are set this high. more informative tokens. Nor does natural language tend to fall into repetition loops, even though the model tends to assign high probability to this, as seen in Figure 3. Why is naturally existing human text not the most probable text? We conjecture that this is an intrinsic property of human language. Language models that assign probabilities one word at a time without context for the underlying document will have trouble capturing this effect. Grice's Maxims of Communication show that people optimize against stating the obvious. Thus, it is natural that making every word as predictable as possible will be disfavored. This makes solving the problem simply by training larger models or improving neural architectures using standard per-word learning objectives unlikely: models that are blindly predicting the future will be forced to predict the lowest common denominator, rather than information rich language. In order to compare generations to the reference text, we begin by analyzing their use of vocabulary. Zipf's law suggests that there is an exponential relationship between the rank of a word and its frequency in text. The Zipfian coefficient s can be used to compare the distribution in a given text to a theoretically perfect exponential curve, where s = 1 . Figure 6 shows the vocabulary distributions along with estimated Zipf coefficients for selected parameters of different decoding methods. As expected, pure sampling is the closest to the human distribution, followed by Nucleus Sampling. The visualization of the distribution shows that pure sampling slightly overestimates the use of rare words, likely one reason why pure sampling also has higher perplexity than human text. Furthermore, lower temperature sampling avoids sampling these rare words from the tail, which is why it has been used in some recent work . We follow previous work and compute Self-BLEU as a metric of diversity. Self-BLEU is calculated by computing the BLEU score of each generated document using all other generations in the evaluation set as references. Thus a lower Self-BLEU score implies higher diversity. Figure 7 shows that Self-BLEU largely follow that of the Zipfian distribution analysis as a diversity measure. 
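The perplexity comparison above boils down to exponentiating the average negative log-likelihood the model assigns to each generated (or human) token; a minimal sketch, assuming the model's per-position logits are already available:

```python
import math
import torch
import torch.nn.functional as F

def perplexity(logits, targets):
    """Per-token perplexity of a token sequence under a language model (a sketch).

    logits:  (seq_len, vocab) next-token scores produced by the model,
    targets: (seq_len,) the tokens that were actually generated or observed.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # (seq_len,)
    return math.exp(nll.mean().item())
```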
It is worth noting that very high values of k and t are needed to get close to the reference distribution, though these result in unnaturally high perplexity (§4). One attribute of text quality that we can quantify is repetition. Figure 8 shows that Nucleus Sampling and top-k sampling have the least repetition for reasonable parameter ranges. Generations from temperature sampling have more repetition unless very high temperatures are used, which we have shown negatively affects coherence (as measured by high perplexity). Further, all stochastic methods face repetition issues when their tuning parameters are set too low, which tends to overtruncate, mimicking greedy search. Therefore we conclude that only Nucleus Sampling satisfies all the distributional criteria for desirable generations. 6 HUMAN EVALUATION Statistical evaluations are unable to measure the coherence of generated text properly. While the metrics in previous sections gave us vital insights into the different decoding methods we compare, human evaluation is still required to get a full measure of the quality of the generated text. However, pure human evaluation does not take into account the diversity of the generated text; therefore we use HUSE to combine human and statistical evaluation. HUSE is computed by training a discriminator to distinguish between text drawn from the human and model distributions, based on only two features: the probability assigned by the language model, and human judgements of typicality of generations. Text that is close to the human distribution in terms of quality and diversity should perform well on both likelihood evaluation and human judgements. The results in Table 1 show that Nucleus Sampling obtains the highest HUSE score, with top-k sampling performing next best. [Figure 7 caption: Self-BLEU calculated on the unconditional generations produced by stochastic decoding methods; lower Self-BLEU scores imply higher diversity. Horizontal blue and orange lines represent human Self-BLEU scores. Note how common values of t ∈ [0.5, 1] and k result in high self-similarity, whereas "normal" values of p ∈ [0.9, 1) closely match the human distribution of text.] [Figure 8 caption (excerpt): a phrase (sequence of words with minimum length 2) must repeat at least three times at the end of the generation to be considered a repetition. Since all the methods described used different parameter ranges, we label points with the appropriate parameter value except for t and p. Since values of k greater than 100 are rarely used in practice and values of p are usually in [0.9, 1), it becomes clear that Nucleus Sampling is far closer to the human distribution in its usual parameter range. Finally, temperatures lower than 0.9 severely affect repetition, making it problematic for use in long-form generation.] 6.2 QUALITATIVE ANALYSIS Figure 9 shows representative example generations. Unsurprisingly, Beam Search gets stuck in a repetition loop it cannot escape. Of the stochastic decoding schemes, the output of full sampling is clearly the hardest to understand, even inventing a new word "umidauda", apparently a species of bird. The generation produced by Nucleus Sampling isn't perfect - the model appears to get confused that it was talking about whales, and begins talking about birds instead. Yet, Top-k Sampling immediately veers off into an unrelated event. When top-k sampling is combined with a temperature of 0.7, as is commonly done, the output devolves into repetition.
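The repetition criterion quoted in the Figure 8 caption (a phrase of at least two tokens repeating at least three times at the end of a generation) is easy to state in code; the function below is an illustrative sketch, not the authors' evaluation script.

```python
def ends_in_repetition(tokens, min_phrase_len=2, min_repeats=3, max_phrase_len=30):
    """Check whether a generation gets stuck in a loop at its end (a sketch).

    A phrase of at least `min_phrase_len` tokens must repeat at least `min_repeats`
    times at the very end of the token sequence for it to count as a repetition.
    """
    n = len(tokens)
    for phrase_len in range(min_phrase_len, max_phrase_len + 1):
        if phrase_len * min_repeats > n:
            break
        phrase = tokens[n - phrase_len:]
        repeated = all(
            tokens[n - (r + 1) * phrase_len : n - r * phrase_len] == phrase
            for r in range(min_repeats)
        )
        if repeated:
            return True
    return False

# ends_in_repetition("the cat sat on the mat on the mat on the mat".split())  -> True
```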
This paper provided a deep analysis into the properties of the most common decoding methods for open-ended language generation. We have shown that likelihood maximizing decoding causes repetition and overly generic language usage, while sampling methods without truncation risk sampling from the low-confidence tail of a model's predicted distribution. Further, we proposed Nucleus Sampling as a solution that captures the region of confidence of language models effectively. In future work, we wish to dynamically characterize this region of confidence and include a more semantic utility function to guide the decoding process. We include another set of examples for further qualitative comparison. [Figure 9: More example generations from an initial tag line, showing question-and-answer style web-text continuations produced by the different decoding strategies. All generations for all hyperparameters will be made publicly available, along with our codebase.]
Current language generation systems either aim for high likelihood and devolve into generic repetition or miscalibrate their stochasticity—we provide evidence of both and propose a solution: Nucleus Sampling.
1,733
scitldr
Inspired by the large body of research on adversarial examples for computer vision, there has very recently been growing interest in designing adversarial attacks for Natural Language Processing (NLP) tasks, followed by only a few works on adversarial defenses for NLP. To our knowledge, there exists no defense method against the successful synonym substitution based attacks, which aim to satisfy all the lexical, grammatical, and semantic constraints and are thus hard for humans to perceive. We fill this gap and propose a novel adversarial defense method called the Synonym Encoding Method (SEM), which inserts an encoder before the input layer of the model and then trains the model to eliminate adversarial perturbations. Extensive experiments demonstrate that SEM can efficiently defend against the current best synonym substitution based adversarial attacks with little loss of accuracy on benign examples. To better evaluate SEM, we also design a strong attack method called the Improved Genetic Algorithm (IGA), which adopts the genetic metaheuristic for synonym substitution based attacks. Compared with the existing genetic based adversarial attack, IGA achieves a higher attack success rate while maintaining the transferability of the adversarial examples.

Deep Neural Networks (DNNs) have achieved great success in various machine learning tasks, such as computer vision and Natural Language Processing (NLP). However, recent studies have discovered that DNNs are vulnerable to adversarial examples not only for computer vision tasks but also for NLP tasks, posing a serious threat to their safe application. For instance, spammers can evade spam filtering systems with adversarial examples of spam emails while preserving the intended meaning. In contrast to the numerous methods proposed for adversarial attacks and defenses in computer vision, there are only a few works in the area of NLP, inspired by the work on images and emerging very recently in the past two years. This is mainly because existing perturbation-based methods for images cannot be directly applied to texts due to their discrete nature. Furthermore, if we want the perturbation to be barely perceptible by humans, it should satisfy the lexical, grammatical, and semantic constraints of texts, making it even harder to generate text adversarial examples. Current attacks in NLP fall into four categories, namely modifying the characters of a word, adding or removing words, replacing words arbitrarily, and substituting words with synonyms. The first three categories are easy to detect and defend against by spell or syntax check. As synonym substitution aims to satisfy all the lexical, grammatical, and semantic constraints, it is hard to detect by automatic spell or syntax check as well as by human investigation. To our knowledge, there is currently no defense method specifically designed against synonym substitution based attacks.

In this work, we postulate that imperfect model generalization leads to the existence of adversarial examples: when generalization is not strong enough, there usually exist neighbors x' of a benign example x on the manifold with a different classification. Based on this hypothesis, we propose a novel defense mechanism called the Synonym Encoding Method (SEM) that encodes all synonyms to a unique code so as to force all the neighbors of x to have the same label as x.
Specifically, we first cluster the synonyms according to the Euclidean Distance in the embedding space to construct the encoder. Then we insert the encoder before the input layer of the deep model without modifying its architecture, and train the model again to defend adversarial attacks. In this way, we can defend the synonym substitution based adversarial attacks efficiently in the context of text classification. Extensive experiments on three popular datasets demonstrate that the proposed SEM can effectively defend adversarial attacks, while maintaining the efficiency and achieving roughly the same accuracy on benign data as the original model does. To our knowledge, SEM is the first proposed method that can effectively defend the synonym substitution based adversarial attacks. Besides, to demonstrate the efficacy of SEM, we also propose a genetic based attack method, called Improved Genetic Algorithm (IGA), which is well-designed and more efficient as compared with the first proposed genetic based attack algorithm, GA . Experiments show that IGA can degrade the classification accuracy more significantly with lower word substitution rate than GA. At the same time IGA keeps the transferability of adversarial examples as GA does. Let W denote the word set containing all the legal words. Let x = {w 1, . . ., w i, . . ., w n} denote an input text, C the corpus that contains all the possible input texts, and Y ∈ N K the output space. The classifier f: C → Y takes an input x and predicts its label f (x), and let S m (x, y) denote the confidence value for the y-th category at the softmax layer. Let Syn(w, σ, k) represent the set of first k synonyms of w within distance σ, namely where w − w p is the p-norm distance evaluated on the corresponding embedding vectors. Suppose we have an ideal classifier c: C → Y that could always output the correct label for any input text x. For a subset of (train or test) texts T ⊆ C and a small constant, we could define the natural language adversarial examples as follows: where d(x − x adv) is a distance metric to evaluate the dissimilarity between the benign example x = {w 1, . . ., w i, . . ., w n} and the adversarial example x adv = {w 1, . . ., w i, . . ., w n}. It is usually defined as the p-norm distance: In this subsection, we provide a brief overview of three popular synonym substitution based adversarial attack methods. Greedy Search Algorithm (GSA). propose a greedy search algorithm to substitute words with their synonyms so as to maintain the semantic and syntactic similarity. GSA first constructs a synonym set W s for an input text x = {w 1, . . ., w i, . . ., w n}: Initially, let x adv = x. Then at each stage for x adv = {w 1, . . ., w i, . . ., w n}, GSA finds a wordŵ i ∈ W that satisfies the syntactic constraint and minimizes S m (x, y true) wherex = {w 1, . . ., w i−1,ŵ i, w i+1, . . ., w n}, and updates x adv =x. Such process iterates until x adv becomes an adversarial example or the word replacement rate reaches a threshold. Genetic Algorithm (GA). propose a population-based algorithm to replace words with their synonyms so as to generate semantically and syntactically similar adversarial examples. There are three operators in GA: • M utate(x): Randomly choose a word w i in text x that has not been updated and substitute w i with w i, one of its synonyms Syn(w i, σ, k) that does not violate the syntax constraint by the "Google 1 billion words language model" and minimize S m (x, y true) wherex = {w 1, . . ., w i−1, w i, w i+1, . . 
., w n} and S m (x, y true) < S m (x, y true); • Sample(P): Randomly sample a text from population P with a probability proportional to 1 − S m (x i, y true) where x i ∈ P; • Crossover (For a text x, GA first generates an initial population P 0 of size m: Then at each iteration, GA generates the next generation of population through crossover and mutation operators: GA terminates when it finds an adversarial example or reaches the maximum number of iteration limit. Probability Weighted Word Saliency (PWWS). propose a new synonym substitution method called Probability Weighted Word Saliency (PWWS), which considers the word saliency as well as the classification confidence. Given a text x = {w 1, . . ., w i, . . ., w n}, PWWS first calculates the saliency of each word S(x, w i): where "unk" means the word is removed. Then PWWS calculates the maximum possible change in the classification confidence ed from substituting word w i with one of its synonyms: Then, PWWS sequentially checks the words in descending order of φ(S(x, w i), and substitutes the current word w i with its optimal synonym w * i: PWWS terminates when it finds an adversarial example x adv or it has replaced all the words in x. There exist very few works for text adversarial defenses. • In the character-level, propose to place a word recognition model in front of the downstream classifier to defend character-level adversarial attacks by combating adversarial spelling mistakes. • In the word level, for defenses on synonym substitution based attacks, only and incorporate the adversarial training strategy proposed in the image domain with their text attack methods respectively, and demonstrate that adversarial training can promote the model's robustness. However, there is no defense method specifically designed to defend the synonym substitution based adversarial attacks. We first introduce our motivation, then present the proposed text defense method, Synonym Encoding Method (SEM). Let X denote the input space, V (x) denote the -neighborhood of data point x ∈ X, where V (x) = {x ∈ X | x − x <}. As illustrated in Figure 1 (a), we postulate that the generalization of the model leads to the existence of adversarial examples. More generally, given a data point x ∈ X, ∃x ∈ V (x), f (x) = y true where x is an adversarial example of x. Ideally, to defend the adversarial attack, we need to train a classifier f which not only guarantees f (x) = y true, but also assures ∀x ∈ V (x), f (x) = y true. Thus, the most effective way is to add more labeled data to improve the adversarial robustness . Ideally, as illustrated in Figure 1 (b), if we have infinite labeled data, we can train a model f: ∀x ∈ V (x), f (x) = y true with high probability so that the model f is robust enough to adversarial examples. Practically, however, labeling data is very expensive and it is impossible to have infinite labeled data. Because it is impossible to have enough labeled data to train a robust model, as illustrated in Figure 1 (c), propose to construct a convex outer bound and guarantee that all data points in this bound share the same label. The goal is to train a model f: ∀x ∈ V, f (x) = f (x) = y true. Specifically, they propose a linear-programming (LP) based upper bound on the robust loss by adopting a linear relaxation of the ReLU activation and minimize this upper bound during the training. Then they bound the LP optimal value and calculate the elementwise bounds on the activation functions based on a backward pass through the network. 
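To make the greedy and saliency-weighted substitution attacks described above concrete, the sketch below mimics the PWWS scoring loop: it estimates each word's saliency by replacing it with an unknown token, finds the synonym that most reduces the true-class confidence at each position, and substitutes words in descending order of the combined score. The helper names (model_confidence, get_synonyms), the softmax weighting, and the binary success check at 0.5 are our own simplifications, not the authors' implementation.

```python
import numpy as np

UNK = "<unk>"

def pwws_like_attack(words, y_true, model_confidence, get_synonyms, max_rate=0.25):
    """Greedy, saliency-weighted synonym substitution in the spirit of PWWS.

    words            : list of tokens of the input text x
    y_true           : true label of x
    model_confidence : callable(list_of_tokens, label) -> P(label | text)
    get_synonyms     : callable(word) -> list of candidate synonym strings
    """
    base = model_confidence(words, y_true)

    # Word saliency: confidence drop when the word is replaced by <unk>.
    saliency = np.array([
        base - model_confidence(words[:i] + [UNK] + words[i + 1:], y_true)
        for i in range(len(words))
    ])
    weights = np.exp(saliency - saliency.max())
    weights /= weights.sum()                          # softmax over saliencies

    # Best synonym per position and the confidence change it brings.
    best_syn, delta = {}, np.zeros(len(words))
    for i, w in enumerate(words):
        for s in get_synonyms(w):
            d = base - model_confidence(words[:i] + [s] + words[i + 1:], y_true)
            if d > delta[i]:
                delta[i], best_syn[i] = d, s

    adv, substituted = list(words), 0
    for i in np.argsort(-weights * delta):            # descending combined score
        if i not in best_syn:
            continue
        adv[i] = best_syn[i]
        substituted += 1
        if model_confidence(adv, y_true) < 0.5:       # simplified binary success test
            return adv
        if substituted / len(words) > max_rate:       # replacement budget exceeded
            break
    return adv
```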
Although their method does not need any extra data, it is hard to scale to realistically-sized networks due to its high complexity. In this work, as illustrated in Figure 1 (d), we propose a novel way to find a mapping m: X → X where ∀x ∈ V (x), m(x) = x. In this way, we force the classification to be more smooth and we do not need any extra data to train the model or modify the architecture of the model. All we need to do is to insert the mapping before the input layer and train the model on the original training set. Now the problem turns into how to locate the neighbors of data point x. For image tasks, it is hard to find all images in the neighborhood of x in the input space, and there could be infinite number of neighbors. For NLP tasks, however, utilizing the property that words in sentences are discrete tokens, we can easily find almost all neighbors of an input text. Based on this insight, we propose a new method called Synonym Encoding Method to locate the neighbors of an input x. We assume that the closer the meaning of two sentences is, the closer their distance is in the input space. Thus, we can suppose that the neighbors of x are its synonymous sentences. To find the synonymous sentence, we can substitute words in the sentence with their synonyms. To construct the mapping m, all we need to do is to cluster the synonyms and allocate a unique token for each cluster, which we call the Synonym Encoding Method (SEM). The details are in Algorithm 1. else 8: end if 10: for each word w j in Syn(w i, σ, k) do The current synonym substitution based text adversarial attacks (; ;) have a constraint that they only substitute words at the same position once or replace words with the first k synonyms of the word in the original input x. This constraint can lead to local minimum for adversarial examples, and it is hard to choose a suitable k as different words may have different number of synonyms. To address this issue, we propose an Improved Genetic Algorithm (IGA), which allows to substitute words in the same position more than once based on the current text x. In this way, IGA can traverse all synonyms of a word no matter what value k is. Meanwhile, we can avoid local minimum to some extent as we allow the substitution of the word by the original word in the current position. In order to guarantee that the substituted word is still a synonym of the original word, each word in the same position can be replaced at most λ times. Differs to the first genetic based text attack algorithm of , we change the structure of the algorithm, including the operators for crossover and mutation. For more details of IGA, see Appendix A. We evaluate SEM with four attacks, GSA , GA , PWWS and our IGA, on three popular datasets involving three neural network classification models. The demonstrate that SEM can significantly improve the robustness of neural networks and IGA can achieve better attack performance as compared with existing attacks. And we further provide discussion on the hyper-parameter of SEM in Appendix B. We first provide an overview of the datasets and classification models used in the experiments. Datasets. In order to evaluate the efficacy of SEM, we choose three popular datasets: IMDB, AG's News, and Yahoo! Answers. IMDB is a large dataset for binary sentiment classification, containing 25, 000 highly polarized movie reviews for training and 25, 000 for testing. AG's News consists news article pertaining four classes: World, Sports, Business and Sci/Tech. 
Each class contains 30, 000 training examples and 1, 900 testing examples. Yahoo! Answers is a topic classification dataset from the "Yahoo! Answers Comprehensive Questions and Answers" version 1.0 dataset with 10 categories, such as Society & Culture, Science & Mathematics, etc. Each class contains 140,000 training samples and 5,000 testing samples. Models. To better evaluate our method, we adopt several state-of-the-art models for text classification, including Convolution Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The embedding dimension for all models are 300. We replicate the CNN's architecture from , which contains three convolutional layers with filter size of 3, 4, and 5 respectively, a max-pooling layer and a fully-connected layer. LSTM consists of three LSTM layers where each layer has 300 LSTM units and a fully-connected layer . Bi-LSTM contains a bi-directional LSTM layer whose forward and reverse have 300 LSTM units respectively and a fully-connected layer. Baselines. We take the method of adversarial training as our baseline. However, due to the low efficiency of text adversarial attacks, we cannot implement adversarial training as it is in the image domain. In the experiments, we adopt PWWS, which is quicker than GA and IGA, to generate 10% adversarial examples of the training set, and re-train the model incorporating adversarial examples with the training data. To evaluate the efficiency of the SEM method, we randomly sample 200 correctly classified examples on different models from each dataset and use the above attack methods to generate adversarial examples with or without defense. The more effective the defense method is, the smaller the classification accuracy of the model drops. Table 1 shows the efficacy of various attack and defense methods. For each network model, we look at each row to find the best defense under the setting of no attack, or GSA, PWWS, GA, and IGA attacks: • Under the setting of no attack, adversarial training (AT) could improve the classification accuracy of the models on all datasets, as adversarial training (AT) is also the way to augment the training data. Our defense method SEM reaches an accuracy closing to normal training (NT). • Under the four attacks, however, the classification accuracy with normal training (NT) and adversarial training (AT) drops significantly. For normal training (NT), the accuracy degrades more than 75%, 42% and 40% on the three datasets respectively. And adversarial training (AT) cannot defend these attacks effectively, especially for PWWS and IGA on IMDB and Yahoo! Answers, where AT only improves the accuracy a little (smaller than 5%). By contrast, SEM can remarkably improve the robustness of the deep models for all the four attacks. In the image domain, the transferability of adversarial attack refers to its ability to decrease the accuracy of models using adversarial examples generated based on other models . find that the adversarial examples in NLP also exhibite a good transferability. Therefore, a good defense method not only could defend the adversarial attacks but also resists the transferability of adversarial examples. To evaluate the ability of preventing the transferability of adversarial examples, we generate adversarial examples on each model under normal training, and test them on other models with or without defense on Yahoo! Answers. The are shown in Table 2. 
Almost on all models with adversarial examples generated by other models, SEM could yield the highest classification accuracy. For text attacks, we compare the proposed IGA with GA from various aspects, including attack efficacy, transferability and human evaluation. Attack Efficacy. As shown in Table 1, looking at each column, we see that under normal training (NT) and adversarial training (AT), IGA can always achieve the lowest classification accuracy, which corresponds to the highest attack success rate, on all models and datasets among the four attacks. Under the third column of SEM defense, though IGA may not be the best among all attacks, IGA always outperforms GA. Besides, as depicted in Table 3, IGA can yield lower word substitution rate than GA on most models. Note that for SEM, GA can yield lower word substitution rate, because GA may not replace the word as most words cannot bring any benefit for the first replacement. This indicates that GA stops at local minimum while IGA continues to substitute words and gain a lower classification accuracy, as demonstrated in Table 1. Transferability. As shown in Table 2, the adversarial examples generated by IGA maintain roughly the same transferability as those generated by GA. For instance, if we generate adversarial examples on Word-CNN (column 2, NT), GA can achieve better transferability on LSTM with NT (column 5) while IGA can achieve better transferability on LSTM with AT and SEM (column 6, 7). Human Evaluation. To further verify that the perturbations in the adversarial examples generayed by IGA are hard for humans to perceive, we also perform a human evaluation on IMDB with 35 volunteers. We first randomly choose 100 benign examples that can be classified correctly and generate adversarial examples by GA and IGA on the three models so that we have a total of 700 examples. Then we randomly split them into 7 groups where each group contains 100 examples. We ask every five volunteers to classify one group independently. The accuracy of human evaluation on benign examples is 93.7%. As shown in Figure 2, the classification accuracy of human on adversarial examples generated by IGA is slightly higher than those generated by GA, and is slightly closer to the accuracy of human on benign examples. Summary. IGA can achieve the highest attack success rate when compared with previous synonyms substitution based adversarial attacks and yield lower word replacement rate than GA. Besides, the adversarial examples generated by IGA maintains the same transferability as GA does and are a little bit harder for humans to distinguish. Several generated adversarial examples by GA and IGA are listed in Appendix C. Synonym substitution based adversarial attacks are currently the best text attack methods, as they are hard to be checked by automatic spell or syntax check as well as human investigation. In this work, we propose a novel defense method called Synonym Encoding Method (SEM), which encodes the synonyms of each word to defend adversarial attacks for text classification task. Extensive experiments show that SEM can defend adversarial attacks efficiently and degrade the transferability of adversarial examples, at the same time SEM maintains the classification accuracy on benign data. To our knowledge, this is the first and efficient text defense method in word level for state-of-the-art synonym substitution based attacks. 
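A minimal sketch of the synonym encoding idea summarized above is given below: synonyms within distance σ in the embedding space are greedily clustered, each cluster is mapped to a single representative token, and the resulting encoder is applied to every input before it reaches the classifier. The function names (build_synonym_encoder, encode) and the greedy first-come clustering are our own illustration of the intent of Algorithm 1, not a reproduction of it.

```python
import numpy as np

def build_synonym_encoder(vocab, embeddings, sigma=0.5, k=10):
    """Map every word to a representative token shared by its synonym cluster.

    vocab      : list of words, assumed ordered by descending frequency
    embeddings : dict word -> embedding vector (np.ndarray)
    sigma, k   : a word's synonyms are its k nearest neighbours within distance sigma
    """
    encoder = {}
    words = [w for w in vocab if w in embeddings]
    vectors = np.stack([embeddings[w] for w in words])

    for i, w in enumerate(words):
        if w in encoder:                        # already assigned to an earlier cluster
            continue
        encoder[w] = w                          # w becomes its cluster's representative
        dists = np.linalg.norm(vectors - vectors[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]
        for j in neighbours:
            if dists[j] <= sigma and words[j] not in encoder:
                encoder[words[j]] = w           # encode the synonym as w

    return encoder

def encode(tokens, encoder):
    # Applied before the model's input layer at both training and test time,
    # so a synonym-substituted adversarial text maps back to the same encoding.
    return [encoder.get(t, t) for t in tokens]
```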
In addition, we propose a new text attack method called Improved Genetic Attack (IGA), which in most cases can achieve much higher attack success rate as compared with existing attacks, at the same time IGA could maintain the transferability of adversarial examples. Here we introduce our Improved Genetic Algorithm (IGA) in details and show how IGA differs from the first proposed generic attack method, GA . Regard a text as a chromosome, there are two operators in IGA: • • M utate(x, w i): For a text x = {w 1, . . ., w i−1, w i, w i+1, . . ., w n} and a position i, replace w i withŵ i whereŵ i ∈ Syn(w, σ, k) to get a new textx = {w 1, . . ., w i−1,ŵ i, w i+1, . . ., w n} that minimizes S m (x, y true). The details of IGA are described in Algorithm 2. Algorithm 2 The Improved Genetic Algorithm Input: x: input text, y true: true label for x, M: maximum number of iterations Output: x adv: adversarial example 1: for each word w i ∈ x do 2: if f (x adv) = y true then Randomly sample parent 1, parent 2 from P g−1: Randomly choose a word w in child 14: Compared with GA, IGA has the following differences: • Initialization: GA initializes the first population randomly, while IGA initializes the first population by replacing each word by its optimal synonym, so our population is more diversified. • M utation: Different from GA, IGA allows to replace the word that has been replaced before so that we can avoid local minimum. • Crossover: To better simulate the reproduction and biological crossover, we randomly cut the text from two parents and concat two fragments into a new text rather than randomly choose a word of each position from the two parents. The selection of the next generation is similar to GA, greedily choose one offspring, and then generate other offsprings by M utate(Crossover(·, ·)) on two randomly chosen parents. But as M utate and Crossover are different, IGA has very different offsprings. To explore how hyper-parameter of SEM influences the efficacy, we try different ranging from 0 to 1.2 for three models on IMDB with or without adversarial attacks. The are illustrated in Figure 3. On benign data, as shown in Figure 3(a), the classification accuracy of the models decreases a little when increases. Because a bigger indicates that we need less words to train the model, which could degrade the efficacy of the models. Nevertheless, the classification accuracy does not decrease much as SEM could maintain the semantic invariance of the original text after encoding. Then we show the defense efficacy of SEM on the three models when changing the value of, as shown in Figure 3 (b)-(d). When = 0 where SEM could not take any impact, we see that the accuracy is the lowest under all attacks. When increases, SEM starts to defend the attacks, the accuracy increases rapidly and reach the peak when = 0.5. Then the accuracy decays slowly if we continue to increase. Thus, we choose = 0.5 to be a good trade-off on the accuracy of benign examples and adversarial examples. C ADVERSARIAL EXAMPLES GENERATED BY GA AND IGA To show the generated adversarial examples, we randomly pick some benign examples from IMDB and generate adversarial examples by GA and IGA respectively on several models. The examples are shown in Table 5 to Table 6. We see that IGA substitutes less words than GA on these models under normal training. I am sorry but this is the worst film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. 
This is the least scary film I have ever watched and laughed all the way through to the end. This is a unique masterpiece made by the best director ever lived in the ussr. He knows the art of film making and can use it very well. If you find this movie, buy or copy it! 50.6 0 I cared this film which I thought was well written and acted, there was plenty of humour and a igniting storyline, a tepid and enjoyable experience with an emotional ending. GA 92.7 1 I am sorry but this is the harshest film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a sole masterpiece made by the nicest director permanently lived in the ussr. He knows the art of film making and can use it much well. If you find this movie, buy or copy it! 88.3 0 I enjoyed this film which I think was well written and acted, there was plenty of humour and a causing storyline, a lukewarm and enjoyable experience with an emotional ending. IGA 70.8 1 I am sorry but this is the hardest film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a sole masterpiece made by the best director permanently lived in the ussr. He knows the art of film making and can use it very well. If you find this movie, buy or copy it! I am sorry but this is the worst film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a unique masterpiece made by the best director ever lived in the ussr. He knows the art of film making and can use it very well. If you find this movie, buy or copy it! 88.2 0 I enjoyed this film which I thought was well written and proceeded, there was plenty of humorous and a igniting storyline, a tepid and enjoyable experience with an emotional terminate. GA 99.9 1 I am sorry but this is the hardest film I have ever seen in my life. I cannot believe that after making the first one in the series they were able to get a budget to make another. This is the least terrifying film I have ever watched and laughed all the way through to the end. This is a unique masterpiece made by the best superintendent ever lived in the ussr. He knows the art of film making and can use it supremely alright. If you find this movie, buy or copy it! 72.1 0 I enjoyed this film which I thought was well written and acted, there was plenty of humour and a provoking storyline, a lukewarm and agreeable experience with an emotional ending. IGA 99.8 1 I am sorry but this is the hardest film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a sole masterpiece made by the best director ever lived in the ussr. He knows the art of film making and can use it very well. If you find this movie, buy or copy it! Table 6: The adversarial examples generated by GA and IGA on IMDB using Bi-LSTM model. 
99.6 1 I enjoyed this film which I thought was well written and acted, there was plenty of humour and a provoking storyline, a warm and enjoyable experience with an emotional ending. Original 97.0 0 I am sorry but this is the worst film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a unique masterpiece made by the best director ever lived in the ussr. He knows the art of film making and can use it very well. If you find this movie, buy or copy it! 98.2 0 I enjoyed this film which I thought was well written and proceeded, there was plenty of humorous and a igniting storyline, a tepid and enjoyable experiment with an emotional terminate. GA 78.8 1 I am sorry but this is the hardest film I have ever seen in my life. I cannot believe that after making the first one in the series, they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a unique masterpiece made by the best superintendent ever lived in the ussr. He knows the art of film making and can use it supremely alright. If you find this movie buy or copy it! 81.2 0 I enjoyed this film which I thought was alright written and acted, there was plenty of humour and a arousing storyline, a lukewarm and enjoyable experiment with an emotional ending. IGA 78.8 1 I am sorry but this is the hardest film I have ever seen in my life. I cannot believe that after making the first one in the series they were able to get a budget to make another. This is the least scary film I have ever watched and laughed all the way through to the end. This is a sole masterpiece made by the best director ever lived in the ussr. He knows the art of film making and can use it very alright. If you find this movie buy or copy it!
The first word-level text adversarial defense method, and an improved genetic attack method against synonym substitution based attacks.
1,734
scitldr
A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states. Recurrent Neural Networks (RNNs) are state-of-the-art for many machine learning sequence processing tasks. Examples where models based on RNNs shine include speech recognition BID21 BID3, image captioning BID32 BID29 BID17, machine translation BID1 BID26 BID18, and speech synthesis BID20. It is common practice to train these models using backpropagation through time (BPTT), wherein the network states are unrolled in time and gradients are backpropagated through the unrolled graph. Since the parameters of an RNN are shared across the different time steps, BPTT is more prone to vanishing and exploding gradients (; BID2 BID11 than equivalent deep feedforward networks with as many stages. This makes credit assignment particularly difficult for events that have occurred many time steps in the past, and thus makes it challenging in practice to capture long-term dependencies in the data (; BID2 . Having to wait for the end of the sequence in order to compute gradients is neither practical for machines nor animals when the dependencies extend over very long timescales. Training is slowed down considerably by long waiting times, as the rate of convergence crucially depends on how often parameters can be updated. In practice, proper long-term credit assignment in RNNs is very inconvenient, and it is common practice to employ truncated versions of BPTT for long sequences BID23 BID24 . In truncated BPTT (TBPTT), gradients are backpropagated only for a fixed and limited number of time steps and parameters are updated after each such subsequence. Truncation is often motivated by computational concerns: memory, computation time and the advantage of faster learning obtained when making more frequent updates of the parameters rather than having to wait for the end of the sequence. However, it makes capturing correlations across distant states even harder. Regular RNNs are parametric: their hidden state vector has a fixed size. We believe that this is a critical element in the classical analysis of the difficulty of learning long-term dependencies BID2. Indeed, the fixed state dimension becomes a bottleneck through which information has to flow, both forward and backward. We thus propose a semi-parametric RNN, where the next state is potentially conditioned on all the previous states of the RNN, making it possible-thanks to attention-to jump through any distance through time. 
We distinguish three types of states in our proposed semi-parametric RNN:• The fixed-size hidden state h (t), the conventional state of an RNN model at time t;• The monotonically-growing macrostate M = {m,..., m (s) }, the array of all past microstates, which plays the role of a random-access memory;• And the fixed-size microstate m (i), which is the ith hidden state (one of the h (t) ) that was chosen for inclusion within the macrostate M. There are as many hidden states as there are timesteps in the sequence being analyzed by the RNN. A subset of them will become microstates, and this subset is called the macrostate. The computation of the next hidden state h (t+1) is based on the whole macrostate M, in addition to the external input x (t). The macrostate being variable-length, we must devise a special mechanism to read from this ever-growing array. As a key component of our model, we propose to use an attention mechanism over the microstate elements of the macrostate. The attention mechanism in the above setting may be regarded as providing adaptive, dynamic skip connections: any past microstate can be linked, via a dynamic decision, to the current hidden state. Skip connections allow information to propagate over very long sequences. Such architectures should naturally make it easier to learn long-term dependencies. We name our algorithm sparse attentive backtracking (SAB). SAB is especially well-suited to sequences in which two parts of a task are closely related yet occur very far apart in time. Inference in SAB involves examining the macrostate and selecting some of its microstates. Ideally, SAB will not select all microstates, instead attending only to the most salient or relevant ones (e.g., emotionally loaded, in animals). The attention mechanism will select a number of relevant microstates to be incorporated into the hidden state. During training, local backpropagation of gradients happens in a short window of time around the selected microstates only. This allows for the updates to be asynchronous with respect to the time steps we attend to, and credit assignment takes place more globally in the proposed algorithm. With the proposed framework for SAB, we present the following contributions:• A principled way of doing sparse credit assignment, based on a semi-parametric RNN.• A novel way of mitigating exploding and vanishing gradients, based on reducing the number of steps that need to be backtracked through temporal skip connections.• Competitive compared to full backpropagation through time (BPTT), and much better as compared to Truncated Backpropagation through time, with significantly shorter truncation windows in our model. Mechanisms such as SAB may also be biologically plausible. Imagine having taken a wrong turn on a roadtrip and finding out about it several miles later. Our mental focus would most likely shift directly to the location in time and space where we had made the wrong decision, without replaying in reverse the detailed sequence of experienced traffic and landscape impressions. Neurophysiological findings support the existence of such attention mechanisms and their involvement in credit assignment and learning in biological systems. In particular, hippocampal recordings in rats indicate that brief sequences of prior experience are replayed both in the awake resting state and during sleep, both of which conditions are linked to memory consolidation and learning BID7 BID6 BID8. 
Moreover, it has been observed that these replay events are modulated by the reward an animal does or does not receive at the end of a task in the sense that they are more pronounced in the presence of a reward signal and less pronounced or absent in the absence of a reward signal BID0. Thus, the mental look back into the past seems to occur exactly when credit assignment is to be performed.2 RELATED WORK 2.1 TRUNCATED BACKPROPAGATION THROUGH TIME When training on very long sequences, full backpropagation through time becomes computationally expensive and considerably slows down training by forcing the learner to wait for the end of each (possibly very long sequence) before making a parameter update. A common heuristic is to backpropagate the loss of a particular time step through only a limited number of time steps, and hence truncate the backpropagation computation graph BID30. While truncated backpropagation through time is heavily used in practice, its inability to perform credit assignment over longer sequences is a limiting factor for this algorithm, ing in failure cases even in simple tasks, such as the Copying Memory and Adding task in BID12. The Decoupled Neural Interfaces method BID14 replaces full backpropagation through time with synthetic gradients, which are essentially small networks, mapping the hidden unit values of each layer to an estimator of the gradient of the final loss with respect to that layer. While training the synthetic gradient module requires backpropagation, each layer can make approximate gradient updates for its own parameters in an asynchronous way by using its synthetic gradient module. Thus, the network learns how to do credit assignment for a particular layer from a few examples of the gradients from backpropagation, reducing the total number of times that backpropagation needs to be performed. Online credit assignment in RNNs without backtracking remains an open research problem. One approach BID22 attempts to solve this problem by estimating gradients using an approximation to forward mode automatic differentiation instead of backpropagation. Forward mode automatic differentiation allows for computing unbiased gradient estimates in an online fashion, however it normally requires storage of the gradient of the current hidden state values with respect to the parameters, which is O(N 3) where N is the number of hidden units. The Unbiased Online Recurrent Optimization (UORO) ) method gets around this by updating a rank-1 approximation to this gradient tensor, which is shown to keep the estimate of the gradient unbiased, but potentially at the risk of increasing the variance of the gradient estimator. Neural architectures such as Residual Networks BID9 and Dense Networks BID13 ) allow information to skip over convolutional processing blocks of an underlying convolutional network architecture. In the case of Residual Networks identity connections are used to skip over convolutional processing blocks and this information is recombined using addition. This construction provably mitigates the vanishing gradient problem by allowing the gradient at any given layer to be bounded. Densely-connected convolutional networks alleviate the vanishing gradient problem by allowing a direct path from any point in the network to the output. In contrast, here we propose and explore what one might regard as a form of dynamic skip connection, modulated by an attention mechanism. We now introduce the idea of Sparse Attentive Backtracking (SAB). 
Classical RNN models such as those based on LSTMs or GRUs only use the previous hidden state in the computation of the next one, and therefore struggle with extremely long-range dependencies. SAB sidesteps this limitation by additionally allowing the model to select and use (a subset of) any of the past microstates in the computation of the next hidden state. In doing so the model may potentially reference microstates computed arbitrarily long ago in time. Since the classic RNN models do not support such operations on their past, we make a few architectural additions. On the forward pass of a training step, a mechanism is introduced that selects microstates from the macrostate, summarizes them, then incorporates this summary into the next hidden state. The hidden state may or may not become a microstate. On the backward pass, the gradient is allowed to flow not only through the (truncated) master chain linking consecutive hidden states, but also to the microstates which are selected in the forward pass. In the forward pass, the microstate selection process can be denser or sparser, and the summarization and incorporation can be more or less sophisticated. In the backward pass, the gating of gradient flow from a hidden state to its ancestor microstates can also be denser or sparser, although it can be no denser than the forward pass was. t{−8,−7,−6,−5,−4,−3,−2,−1} are evaluated, first by broadcasting the current provisional hidden stateĥ (t) against the macrostate (which, in the presented case of k att = 1, consists of all past hidden states), concatenating, then passing the to an MLP. The attention weights are then run through the sparsifier which selects the k top = 3 attention weights, while the others are zeroed out. Black arrows show the microstates corresponding to the non-zero sparse attention weights {ã DISPLAYFORM0}, being weighted, summed, then incorporated intoĥ (t) to compute the current final hidden state h (t).For instance, it is possible for the forward pass to be dense, incorporating a summary of all microstates, but for the backward pass to be sparse, only allowing gradient flow to some of the microstate contributors to the hidden state (Dense Forward, Sparse Backward). Another possibility is for the forward pass to be sparse, making only a few, hard, microstate selections for the summary. In this case, the backward pass will necessarily also be sparse, since few microstates will have contributed to the hidden state, and therefore to the loss (Sparse Forward, Sparse Backward).Noteworthy is that not all hidden states need be eligible to become microstates. In practice, we have found that restricting the pool of eligible hidden states to only every k att'th one still works well, while reducing both memory and computation expense. Such an increase in the granularity of microstate selection can also improve performance, by preventing the model from attending exclusively to the most recent hidden states and temporally spreading microstates out from each other. The SAB algorithm is widely applicable, and is compatible with numerous RNN architectures, including vanilla, GRU and LSTM models. However, since it necessarily requires altering the hiddento-hidden transition function substantially, it's currently incompatible with the accelerated RNN kernels offered by e.g. NVIDIA on its GPU devices through cuDNN library BID5.For vanilla and GRU-inspired RNN architectures, SAB's selection and incorporation mechanisms operate over the (hidden) state. 
For the LSTM architecture, which we adopt for our experiments, they operate over the hidden state but not the cell state. The microstate selection mechanism determines which microstate subset of the macrostate will be selected for summarization on the forward pass of the RNN, and which subset of that subset will receive gradient on the backward pass during training. This makes it the core of the attention mechanism of a SAB implementation. While the selection mechanism may use hard-coded attention heuristics, there is no reason why the microstate selection mechanism could not itself be a (deep) neural network trained alongside the RNN model over which it operates. In the models we use here, the selection mechanism is chosen to be a 1-hidden-layer Linear-TanhLinear MLP that computes a scalar attention weight a i for each eligible microstate vector m (i), and a sparsifier that masks out all but the k top greatest attention weights, producing the sparse attention weightsã i. We empirically demonstrate that even this simple mechanism learns to focus on past time steps relevant to the current one, thus successfully performing credit assignment. The use of a higher complexity model here would be an interesting avenue for future research. The selected microstates must be somehow combined into a fixed-size summary for incorporation into the next hidden state. While many options exist for doing so, we choose to simply perform a summation of the microstates, weighted by their sparsified attention weightã i. Lastly, the summary must be incorporated into the hidden state. Again, multiple options exist, such as addition (as done in ResNets) or concatenation (as done in DenseNets).For our purposes we choose to simply sum the summary into the provisional hidden state output h (t) computed by the LSTM cell to produce the final hidden state h (t) that will be conditioned upon in the next timestep. We now give the equations for the specific SAB-augmented LSTM model we use in our experiments. At time t, the underlying LSTM receives a vector of hidden states h (t−1), a vector of cell states c (t−1) and an input x (t), and computes a provisional hidden state vectorĥ (t) that also serves as a provisional output. We next use an attention mechanism that is similar to BID1, but modified to produce sparse discrete attention decisions. First, the provisional hidden state vectorĥ (t) is concatenated to each microstate vector m (i). Then, an MLP maps each such concatenated vector to an attention weight a (t)i representing the salience of the microstate i at the current time t. This can be expressed as: a DISPLAYFORM0 DISPLAYFORM1 where the weights matrices w 1, w 2 and w 3 are learned parameters. Following this, we apply a piece-wise linear function that sparsifies the attention while making discrete decisions. (This is different from typical attention mechanisms that normalize attention weights using a Softmax function BID1, whose output is never sparse). Let a (t) ktop be the k top th greatest-valued attention weight at time t; then the sparsified attention weights are computed asã DISPLAYFORM2 This has the effect of zeroing all attention weights less than a DISPLAYFORM3 ktop, thus masking out all but the k top most salient microstates in M. The few selected microstates receive gradient information, while no gradient flows to the rest. 
A summary vector s (t) is then obtained using a weighted sum over the macrostate, employing the sparse attention weights: DISPLAYFORM4 Given that this sum is very sparse, the summary operation is very fast. To incorporate the summary into the final hidden state at timestep t, we simply sum the summary and the provisional hidden state: DISPLAYFORM5 Lastly, to compute the output at the time step t, we concatenate h (t) and the sparse attention weights a (t), then apply an affine output transform to compute the output. This can be equivalently expressed as: DISPLAYFORM6 where the weights matrices V 1 and V 2 and bias vector b are learned parameters. In summary, for a given time step t, a hidden state h (i) selected by the hard-attention mechanism has two paths contributing to the hidden states h (t) in the forward pass. One path is the regular sequential forward path in an RNN; the other path is through the dynamic skip connections in the attention mechanism. When we perform backpropagation through the skip connections, gradient only flows from h (t) to microstates m (i) selected by the attention mechanism (those for which DISPLAYFORM7 In the preparation of this work, it was discovered that the attention mechanism absolutely must include a non-linearity in the computation of the raw attention weights a (t) i. Our failure to do so in an early iteration of the work ed in a catastrophic cancellation in the subsequent sparsification of the weights toã (t) i. This is because in, a rectified difference between a (t) i is computed to zero all but the k top greatest attention weights. Subtraction is linear; And since our earlier attention mechanism was linear as well, it could be separated into two parts, a first half to which only the microstate m (i) contributed and a second half to which only the hidden state h (t) contributed. This second half of the contribution is catastrophically cancelled in the difference a DISPLAYFORM0 ktop, because it was computed from the same h (t) for both, and therefore equal. We now report and discuss the of an empirical study that analyses the performance of SAB using five different tasks. We first study synthetic tasks-the copying and adding problems designed to measure models' abilities to learn long-term dependenciesmeant to confirm that SAB can successfully perform credit assignment for events that have occurred many time steps in the past. We then study more realistic tasks and larger datasets. Baselines We compare the quantitative performance of our model against two LSTM baselines BID12. The first is trained with backpropagation through time (BPTT) and the second is trained using truncated backpropagation through time (TBTPP). Both methods are trained using teacher forcing BID31. We also used gradient clipping (that is, we clip the gradients to 1 to avoid exploding gradients). Hyperparameters that are task-specific are discussed in the tasks' respective subsections, other hyperparameters that are also used by SAB and that we set to the same value are discussed below. Compared to standard RNNs, our model has two additional hyperparameters:• k top, the number of most-salient microstates to select at each time step for passing gradients in the backward pass• k att, the granularity of attention. Every k att th hidden state is chosen to be a microstate. The special case k att = 1 corresponds to choosing all hidden states to be microstates as well. In addition, we also study the impact of the TBPTT truncation length, which we denote as k trunc. 
This determines how many timesteps backwards to propagate gradients through in the backward pass. This effect of this hyperparameter will also be studies for the LSTM with TBTPP baseline. For all experiments we used a learning rate of 0.001 with the Adam BID15 optimizer unless otherwise stated. For SAB, we attend to every second hidden states, i.e. k att =2, unless otherwise stated. Our main findings are:1. SAB performs almost optimally and significantly outperforms both full backpropagtion through time (BPTT), and truncated backpropagation through time (TBPTT) on the synthetic copying task.2. For the synthetic adding, two language modelling task (using PennTree Bank and Text8), and permuted sequential MNIST classification tasks, SAB reaches the performance of BPTT and outperforms TBPTT. In addition, for the adding task, SAB outperforms TBPTT using much shorter truncation lengths. The copying memory task tests the model's ability to memorize salient information for long time periods. We follow the setup of the copying memory problem from BID12. In details, the network is given a sequence of T + 20 inputs consisting of: a) 10 (randomly generated) digits (digits 1 to 8) followed by; b) T blank inputs followed by; c) a special end-ofsequence character followed by; d) 10 additional blank inputs. After the end-of-sequence character the network must output a copy of the initial 10 digits. TAB1, and 3 report both accuracy and cross-entropy (CE) of the models' predictions on unseen sequences. We note that SAB is able to learn this copy task almost perfectly for all sequence-lengths T. Further, SAB outperforms all baselines. This is particularly noticeable for longer sequences, for example, when T is 300 the best baseline achieves 35.9% accuracy versus SAB's 98.9%.To better understand the learning process of SAB, we visualized the attention weights while learning the copying task (T = 200, k trunc = 10, k top = 10). FIG5 (appendix) shows the attention weights (averaged over a single mini-batch) at three different learning stages of training, all within the first epoch. We note that the attention quickly (and correctly) focuses on the first ten timesteps which contain the input digits. Furthermore, we experimented with LSTM with self-attention trained using full BPTT. The setup is very similar to unidirectional LSTM with self-attention BID16. Due to GPU memory constraints, we are only able to run this experiment up to small sequence lengths. For T=200 and T = 300, we could see that SAB performs comparably to LSTM with full self-attention trained with full BPTT. The adding task requires the model to sum two specific entries in a sequence of T (input) entries BID12. In the spirit of the copying task, larger values of T will require the model to keep track of longer-term dependencies. The exact setup is as follows. Each example in the task consists of 2 input vectors of length T. The first, is a vector of uniformly generated values between 0 and 1. The second vector encodes binary a mask that indicates which 2 entries in the Table 1: Test accuracy and cross-entropy loss performance on the copying task with sequence lengths of T = 100. Models that use TBPTT cannot solve this task while SAB and BPTT can both achieve optimal performance. Table 3: Test accuracy and cross-entropy loss performance on copying task with sequence lengths of T = 300. On these long sequences SAB's performance can still be very close to optimal. first input to sum (it consists of T − 2 zeros and 2 ones). 
The mask is randomly generated with the constraint that masked-in entries must be from different halves of the first input vector. TAB2 report the cross-entropy (CE) of the model's predictions on unseen sequences. When T = 200, SAB's performance is similar to the best performance of both baselines. With even longer sequences (T = 400), SAB outperforms the TBPTT but is outperformed by BPTT. We evaluate our model on language modelling task using the Penn TreeBank dataset BID19. Our LSTM baselines use 1000 hidden units and a learning rate of 0.002. We used nonoverlapping sequences of 100 in the batches of 32. We trained SAB for 100 epochs. We evaluate the performance of our model using the bits-per-character (BPC) metric. As shown in Table 6, we perform slightly worse than BPTT, but better than TBPTT. Table 4: Performance on unseen sequences of the T = 200 adding task. We note that all methods have configurations that allow them to perform near optimally. Table 6: BPC evaluation on the validation set of the character-level PTB (lower is better). This dataset is derived from the text of Wikipedia and consists of a sequence of a total of 100M characters (non-alphabetical and non-space characters were removed). We follow the setup of; use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on non-overlapping sequences of length 180. Due to computational constraints, all baselines use 1000 hidden units. We trained all models using a batch size of 64. We trained SAB for a maximum of 30 epochs. We have not done any hyperparameter search for our model as it's computationally expensive. TAB3: Bit-per-character (BPC) Results on the validation and test set for Text8 (lower is better). Our last task is a sequential version of the MNIST classification dataset. The task involves predicting the label of the image after being given the image pixel by pixel (pixels are processed in a fixed random order.). All models use an LSTM with 128 hidden units. The prediction is produced by passing the final hidden state of the network into a softmax. We used a learning rate of 0.001. We trained our model for about 100 epochs, and did early stopping based on the validation set. An interesting direction for future development of the Sparse Attentive Backtracking method from the machine learning standpoint would be improving the computational efficiency when the sequences in question are very long. Since the Sparse Attentive Backtracking method uses selfattention on every step, the memory requirement grows linearly in the length of the sequence and computing the attention mechanism requires computing a scalar between the current hidden states and all previous hidden states (to determine where to attend). It might be possible to reduce the memory requirement by using a hierarchical model as done by BID4, and then recomputing the states for the lower levels of the hierarchy only when our attention mechanism looks at the corresponding higher level of the hierarchy. It might also be possible to reduce the computational cost of the attention mechanism by considering a maximum inner product search algorithm BID25, instead of naively computing the inner product with all hidden states values in the past. Improving the modeling of long-term dependencies is a central challenge in sequence modeling, and the exact gradient computation by BPTT is not biologically plausible as well as inconvenient computationally for realistic applications. 
Because of this, the most widely used algorithm for training recurrent neural networks on long sequences is truncated backpropagation through time, which is known to produce biased estimates of the gradient that focus on short-term dependencies. We have proposed Sparse Attentive Backtracking, a new biologically motivated algorithm which aims to combine the strengths of full backpropagation through time and truncated backpropagation through time. It does so by only backpropagating gradients through paths selected by its attention mechanism. This allows the RNN to learn long-term dependencies, as with full backpropagation through time, while still allowing it to only backtrack for a few steps, as with truncated backpropagation through time, thus making it possible to update weights as frequently as needed rather than having to wait for the end of very long sequences. The time complexity of the forward pass of both training and inference in SAB is O(tn^2), with t the number of timesteps and n the size of the hidden state, although our current implementation scales as O(t^2 n^2). The space complexity of the forward pass of training is unchanged at O(tn), but the space complexity of inference in SAB is now O(tn) rather than O(n). However, the time cost of the backward pass of training is very difficult to formulate. Hidden states depend on a sparse subset of past microstates, but each of those past microstates may itself depend on several other, even earlier microstates. The web of active connections is, therefore, akin to a directed acyclic graph, and it is quite possible in the worst case for a backpropagation starting at the last hidden state to touch all past microstates several times. However, if the number of microstates truly relevant to a task is low, the attention mechanism will repeatedly focus on them to the exclusion of all others, and pathological runtimes will not be encountered. Our method approximates the true gradient, but in a sense it is no different from the kind of approximation made with the truncated gradient, except that instead of truncating to the last k trunc time steps, we truncate to one skip-step in the past, which can be arbitrarily far in the past. This provides a way of combating exploding and vanishing gradients while learning long-term dependencies. To verify this, we ran our model on all the datasets (Text8, pixel-by-pixel MNIST, character-level PTB) with and without gradient clipping. We empirically found that we need gradient clipping only for the Text8 dataset; for all the other datasets we observed little or no difference with gradient clipping. We visualize how the attention weights change during training for the copying memory task in Section 4.1. The attention weights are averaged over the batch. The salient information in a copying task is in the first 10 steps. The figure shows how the attention learns to move towards and concentrate on the beginning of the sequence as training proceeds. Note that all of this happens within the first epoch of training, meaning that the model learns in a reasonable amount of time.
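To make the mechanism summarized above more concrete, the following is a minimal PyTorch sketch of SAB-style sparse credit assignment around a single LSTM cell. It is illustrative only, not the authors' reference implementation: the class name SparseAttentiveRNN and the dot-product selection rule are assumptions, the local k_trunc-step truncation of the recurrent path is omitted, and gradient sparsity is obtained simply by detaching every stored microstate that is not among the k_top attended ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAttentiveRNN(nn.Module):
    """Illustrative sketch of SAB-style sparse credit assignment.

    At every step the current hidden state attends over stored past hidden
    states ("microstates"); only the top-k attended microstates stay in the
    autograd graph, so the backward pass touches a sparse set of past steps.
    """

    def __init__(self, input_size, hidden_size, k_top=10, k_att=2):
        super().__init__()
        self.cell = nn.LSTMCell(input_size + hidden_size, hidden_size)
        self.hidden_size = hidden_size
        self.k_top = k_top          # number of microstates gradients may reach
        self.k_att = k_att          # store every k_att-th hidden state

    def forward(self, x):           # x: (T, batch, input_size)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.hidden_size)
        c = x.new_zeros(B, self.hidden_size)
        microstates, outputs = [], []
        for t in range(T):
            if microstates:
                mem = torch.stack(microstates)                      # (M, B, H)
                # attention scores against detached memories (selection only)
                scores = (mem.detach() * h.unsqueeze(0)).sum(-1)    # (M, B)
                k = min(self.k_top, mem.size(0))
                top_scores, top_idx = scores.topk(k, dim=0)
                attn = F.softmax(top_scores, dim=0)                 # (k, B)
                # gather the *non-detached* top-k states: gradients can flow
                # back only through these selected microstates
                idx = top_idx.unsqueeze(-1).expand(-1, -1, self.hidden_size)
                summary = (attn.unsqueeze(-1) * mem.gather(0, idx)).sum(0)
            else:
                summary = torch.zeros_like(h)
            h, c = self.cell(torch.cat([x[t], summary], dim=-1), (h, c))
            if t % self.k_att == 0:
                microstates.append(h)
            outputs.append(h)
        return torch.stack(outputs)                                 # (T, B, H)
```

In the full method this attention-based path would be combined with truncated backpropagation of the step-to-step recurrence, so that long-range credit flows only through the sparse attended connections.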
Towards Efficient Credit Assignment in Recurrent Networks without Backpropagation Through Time
1,735
scitldr
We propose a novel adversarial learning framework in this work. Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training. The information captured by discriminative models complements that in the structured prediction models, but few existing studies have investigated utilizing such information to improve structured prediction models at the inference stage. In this work, we propose to refine the predictions of structured prediction models by effectively integrating discriminative models into the prediction. Discriminative models are treated as energy-based models. Similar to adversarial learning, discriminative models are trained to estimate scores which measure the quality of predicted outputs, while structured prediction models are trained to predict contrastive outputs with maximal energy scores. In this way, the gradient vanishing problem is ameliorated, and thus we are able to perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models. The proposed method is able to handle a range of tasks, e.g., multi-label classification and image segmentation. Empirical results on these two tasks validate the effectiveness of our learning method. This work focuses on applying adversarial learning BID9 to solve structured prediction tasks, e.g., multi-label classification and image segmentation. Adversarial learning can be formalized as a minimax two-player game between structured prediction models and discriminative models. Discriminative models are learned to distinguish between the outputs predicted by the structured prediction models and the training data, while structured prediction models are learned to predict outputs that fool discriminative models. Though structured prediction models are trained using the gradients of discriminative models, existing methods rarely use discriminative models to improve structured prediction models at the inference stage. A straightforward way of utilizing discriminative models for inference is to follow the ascent gradient directions of discriminative models to refine the predicted outputs. However, due to the well-known gradient vanishing problem, the gradients of discriminative models are almost zero for all the predicted outputs. It is difficult to resolve the gradient vanishing problem, because it is caused by the training instability of the existing adversarial learning framework. Consequently, most existing methods do not use the information from discriminative models to refine structured prediction models. Most existing adversarial learning methods treat discriminative models as classifiers. If discriminative models separate real and predicted samples well, they tend to assign the same scores to all predicted samples. In contrast, energy-based models BID21 BID12 usually predict different energy scores for different samples. Therefore, we propose to train discriminative models as energy-based models to ameliorate the gradient vanishing problem. In our framework, discriminative models are learned to assign scores that evaluate the quality of predicted outputs. Structured prediction models are learned to predict outputs that are judged to have maximum scores by discriminative models. In this way, discriminative models are trained to approximate continuous value functions which evaluate the quality of predicted outputs. The gradients of discriminative models are not zero for predicted outputs.
Thus, we can refine structured prediction models by following the ascent gradient directions of discriminative models at the inference stage. In this paper, we refer to our method as learning discriminative models to refine structured prediction models (LDRSP). The proposed method learns discriminative models utilizing the data generated by the structured prediction models. BID12 found that the key to learning deep value networks is generating proper training data. We propose to augment the training set of discriminative models by following the data generation methods proposed in BID12. At the training stage, we simultaneously run the inference algorithm to generate extra training samples utilizing models from previous iterations. These samples are useful since they are generated along the gradient-based inference trajectory utilized at the inference stage. We also augment the training set with adversarial samples BID10. These samples are used as negative samples to train the discriminative models. To validate our method, experiments are conducted on multi-label classification, binary image segmentation, and 3-class face segmentation tasks, and experimental results indicate that our method can learn discriminative models that effectively refine structured prediction models. This work has two contributions: We propose a novel adversarial learning framework for structured prediction, in which the information captured by discriminative models can be used to improve structured prediction models at the inference stage. We propose to learn discriminative models to approximate continuous value functions that evaluate the quality of the predicted outputs, and thus ameliorate the gradient vanishing problem. Recently, adversarial learning has been widely studied for producing high-resolution and high-quality images BID8 BID18, improving training stability and avoiding problems like mode collapse BID11 BID29 BID31, image-to-image translation BID14, semantic image segmentation BID27 BID15, and neural dialogue generation. These methods learn discriminative models and generative models in an adversarial way, but discriminative models are abandoned once models are trained. Since discriminative models are learned to capture the discriminative distributions between the generated samples and the training data, we argue that they can be utilized to make the generated samples more realistic. To the best of our knowledge, we are the first to learn discriminative models that can be directly utilized to refine structured prediction models. Several existing works utilize iterative inference methods. Since Denoising Autoencoders (DAEs) can estimate the gradients of energy distribution functions, BID30 propose to learn DAEs to capture probability distributions of realistic images. In the inference process, DAEs can be utilized to iteratively refine generated images. BID32 extend similar ideas to image segmentation. Different from these methods, in which feed-forward networks are utilized to iteratively refine the generated samples, we adopt a gradient-based inference to find samples with zero task loss. Learning discriminative models as energy-based models has been studied recently BID35 BID7. These works only constrain discriminative models to predict lower energy scores for the training data than for the generated samples. However, our method learns discriminative models to approximate value functions that evaluate the quality of the generated samples.
Moreover, we propose to generate extra training samples to learn discriminative models, so that discriminative models can better capture the probability distributions of the sample space. There is a rising interest in energy-based structured prediction BID36 BID6 BID33. BID1 proposed to add constraints to the neural network parameters such that the output of the neural network is a convex function of (some of) the inputs. BID3 introduced Structured Prediction Energy Network (SPEN). SPEN relies on a max-margin surrogate objective to ensure that the neural network predicts the lowest energy value for the ground-truth label. BID4 improved SPEN by proposing an end-to-end version of SPEN, which directly back-propagates through a computation graph that unrolls gradient-based energy minimization. Inspired by the reinforcement learning, BID12 proposed a Deep Value Network (DVN) that directly learns to evaluate the quality of different output configures. Compared with these methods, our method adversarially learns energy-based models and structured prediction models rather than learns energy-based models alone. We propose a novel adversarial learning framework for structured prediction, in which discriminative models D(x, y; θ d) can be used to refine structured prediction models G(x; θ g) at the inference stage. In our method, discriminators are treated as energy-based models, which take both input objects x and possible outputs y as inputs and predict scores in the range of. We assume that at the training stage, one can get access to an oracle value function v * (y, y *), which evaluates the quality of y corresponding to x. Here y * is the ground-truth label. The discriminators are learned to mimic the behavior of the oracle value functions. Following BID12, we utilize intersection over union (IOU) and F 1 metrics as the oracle value functions for image segmentation and multi-label classification, respectively, which are defined on (y, y DISPLAYFORM0 DISPLAYFORM1 Here y ∩ y * denotes the number of dimension i where both y i and y * i are active and y ∪ y * denotes the number of dimensions where at least one of y i and y * i is active. y i and y * i denote the i-th variable of y and y * . To apply v * (y, y *) to the continuous output y, the notions of intersection and union are extended by using element-wise min and max operators, DISPLAYFORM2 DISPLAYFORM3 We propose to learn discriminators D(x, y; θ d) to estimate v * (y, y *). The learning of discriminators can be understood from a regression setting with z = (x, y) as inputs and v = v * (x, y) as the target outputs. The structured prediction models are learned to predict outputs scored highly by discriminators. Discriminators and structured prediction models are respectively learned by optimizing: DISPLAYFORM4 DISPLAYFORM5 Here DISPLAYFORM6, and L g is a surrogate loss. Equation 5 can be understood in two aspects: It's a modified version of least-square GAN BID28 loss, and discriminators are learned to predict oracle values v * g for the samples predicted by structured prediction models; It learns an energy-based model using training samples that consist of ground-truth samples (x, y *, 1) and predicted samples (x, G(x; θ g), v * g ). The second term of Equation 6 regularizes structured prediction models such that the predicted samples tend to have higher scores. Once models are trained, discriminators can refine structured prediction models by utilizing a gradient-based inference. 
The outputs predicted by structured prediction models are updated following the ascent gradient directions of discriminators that lead to the high scores: DISPLAYFORM7 DISPLAYFORM8 Here, P Y denotes an operator that projects the predicted outputs back to the feasible set of solutions. In a simple case, where Y = M, the P Y operator clips the predicted outputs. It is observed in experiments, i.e., the learned discriminators tend to give small gradients during the gradient-based inference. One reason is that the predicted outputs of structured prediction models are already close to the ground-truth labels. In order to further improve the predicted outputs by using the gradientbased inference and overcome the small-gradient issue, we use the normalized gradient method, i.e., DISPLAYFORM9 Gygli et al. FORMULA0 proposed to simultaneously generate training samples to learn their energy-based model, i.e., Deep Value Network (DVN) in the training process. We follow their method to generate extra training samples to learn discriminators. The training samples are a set of tuples (input x, output y, oracle value v DISPLAYFORM10 Here N is the size of the training set, and x (i), y (i) and v * (i) respectively denote the i-th image, i-th output and i-th oracle value in the training set. Similar to BID12, we utilize two different methods to generate training samples:• Inference samples. At each training iteration, structured prediction models are first used to predict samples. Then, we take these samples as initial solutions and run a gradient-based inference to find high-valued samples of discriminators. These samples are useful since they are generated along the inference trajectory at the inference stage. These samples can be generated during training by using models in a previous iteration BID12.• Adversarial samples. Maximize the loss: DISPLAYFORM11 2 with respect to y using a gradient-based optimizer BID10. These samples serve as negative samples to learn discriminators. Similar to the inference samples, the adversarial samples are also generated during training. To utilize these training samples to learn discriminators, we add another loss term to the Equation 5, and the new objective function is as follows: DISPLAYFORM12 We utilize the Adam optimizer BID20 with the momentum term β 1 = 0.5 to train structure prediction models and discriminators. At each training iteration, we randomly generate inference or adversarial samples and update the parameters of the structured prediction models and the discriminators according to Equation and Equation. The models are trained until convergence. Experiments are conducted on three tasks: multi-label classification, binary image segmentation, and 3-class face segmentation. We compare our LDRSP to other state-of-the-art adversarial learning methods on these tasks, and are reported in Section 4.1, 4.2 and 4.3. The code is implemented using the deep learning framework, i.e., Tensorflow BID0. We use standard benchmarks of this task, namely Bibtex and Bookmarks introduced by BID19 in this section. On these two datasets, tags need to be predicted for text inputs and multiple labels are possible for each input. A two-layer neural network BID3 ) is utilized as our structured prediction model. The same network architecture of DVN is adopted as our discriminator, which consists of one or two hidden layers with Softplus non-linearities. Following BID3, a cross-entropy loss is used as the surrogate loss. 
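Before turning to the baselines, a minimal sketch of the gradient-based refinement described above (the updates of Equations 7-8 with the normalized-gradient variant) may help. It assumes a discriminator callable that maps a batch of (x, y) pairs to one score per example and a feasible set Y = [0, 1]^d, so that the projection P_Y is element-wise clipping; the default step size and iteration count are simply the values reported later for the segmentation experiments and should be tuned per task.

```python
import torch

def refine_prediction(x, y_init, discriminator, steps=30, eta=4.0):
    """Refine G's output by normalized gradient ascent on D(x, y).

    Assumes `discriminator(x, y)` returns a score in [0, 1] per example and
    that outputs live in [0, 1], so projection is element-wise clipping.
    """
    y = y_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = discriminator(x, y).sum()            # sum over the mini-batch
        grad, = torch.autograd.grad(score, y)
        # normalized gradient: per-example L2 normalization to counter the
        # small-gradient issue near the current prediction
        flat = grad.reshape(grad.size(0), -1)
        flat = flat / (flat.norm(dim=1, keepdim=True) + 1e-8)
        with torch.no_grad():
            y += eta * flat.reshape(grad.shape)
            y.clamp_(0.0, 1.0)                       # P_Y: project back onto [0, 1]
    return y.detach()
```

The per-example gradient normalization is what counters the small-gradient issue mentioned above: even when D changes slowly around the current prediction, each step still moves y a fixed distance along the ascent direction.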
Besides the proposed LDRSP, different adversarial learning methods, e.g., GAN BID9, WGAN+GP BID11, LSGAN BID28 and EBGAN BID35 are utilized to learn the structured prediction models and the discriminators. These structured prediction models and discriminators have the same network architecture. For all the methods, the hyper-parameter exploration is performed, and we follow the gradient directions of the discriminators to refine the structured prediction models at the inference stage. The experimental are reported in TAB0. For all the adversarial learning methods, we report both the prediction of structured prediction models G and the prediction refined by the discriminators D. As it shows, the proposed LDRSP successfully learns D that can be utilized to refine the G and achieve state-of-the-art performance on both Bibtex and Bookmarks datasets. The refinement improves the performance of G by 3.6 % on Bibtex dataset and 1.1 % on Bookmarks dataset. However, for other adversarial learning methods, the refinement leads to negligible improvements or even decrements in performance. For example, the refinement of EBGAN improves the performance by 0.2 % on Bookmarks dataset, and the refinement of LSGAN decreases the performance by 0.1 % on Bibtex dataset. Following BID3, we implement a baseline model by training structured prediction models via minimizing a cross-entropy loss, i.e., the first term of Equation FORMULA5. Comparing the performance of the baseline model and the performance of our adversarially learned structured prediction models, we notice that jointly learning structured prediction models and discriminators via the proposed method greatly improves the performance of structured prediction models. The performance of state-of-the-art structured prediction methods e.g., logistic regression, a twolayer neural network learned with a cross-entropy loss, SPEN BID3, PRLR , and DVN BID12 ) is also reported in TAB0. Although our method, the DVN, and the SPEN have the same energy network architecture, our method outperforms the DVN and the SPEN on both Bibtex and Bookmarks datasets. Our method also greatly improves over feed-forward models: the logistic regression, the two-layer neural network, and the PRLP. BID24 37.2 30.7 NN baseline BID3 38.9 33.8 SPEN BID3 42.2 34.4 PRLR BID24 44.2 34.9 DVN BID12 44.7 37.1 SPM baseline 38.9 32.8 SPM (GAN) BID9 38.7 32.7 GAN BID9 38.6 32.8 SPM (LSGAN) BID28 40.0 32.4 LSGAN BID28 39.9 32.5 SPM (WGAN + GP) BID11 39.6 30.5 WGAN + GP BID11 39.7 30.6 SPM (EBGAN) BID35 40.2 32.3 EBGAN BID35 41.5 32.5 SPM (Our LDRSP) 42.8 37.2 LDRSP (Our)46.4 38.3 Compared with the multi-label classification task, the image segmentation task is more challenging due to the high-dimensional output space and the complex correlation among variables in the segmentation masks. We utilize the Weizmann horses dataset BID5 in this section. It is a commonly used dataset for binary image segmentation which consists of 328 left oriented horse images and their corresponding binary segmentation masks. Following BID12 BID23, all images and segmentation masks are resized to 32 × 32. The segmentation of horses at this low resolution is challenging and requires models to capture strong priors of the horse shape, since some thin parts of the horse like legs, tails are almost invisible in the images. We follow the experimental protocol of BID23 to split the Weizmann horses dataset and report on the same testing set. 
In the experiment, we adopt the fully convolutional network (FCN) BID26 baseline model proposed in BID12 as our structured prediction model. It consists of three 5 × 5 convolutional layers and two deconvolution layers. We find that using discriminators similar to PatchGAN discriminators BID25 improves the prediction performance. Therefore, our discriminators are designed to map (x, y) to score matrices. Here x ∈ R W ×3, y ∈ R W ×C, W is the number of pixels in an image, and, C is the number of object classes. We view image segmentation as pixel-level multi-label classification. Instead of approximating discriminators to the intersection over union (IOU) metric, we calculate the F 1 metric for each pixel and estimate an oracle value function v DISPLAYFORM0 where y i ∈ R C denotes the i-th row of y. Our discriminators are implemented by modifying the FCN baseline model: the number of object classes is set to 1 and a sigmoid function is added at the end of the discriminator to ensure v(x, y) ∈W. Similar to the multi-label classification, a cross-entropy loss is adopted as the surrogate loss. For both discriminators and FCNs, we adopt Adam optimizers and set the learning rates to 0.01. Data augmentation is utilized by using random 24 × 24 patches cropped from the 32 × 32 images and by randomly mirroring the images. We empirically find that setting the inference step size η to be 4.0 and setting the number of inference steps to be 30 achieve the best performance. At the inference stage, we divide each test image into 36 crops. The proposed method is utilized to estimate segmentation masks for these crops. The estimated segmentation masks are averaged to obtain the final segmentation mask. We notice that the proposed method usually converges within 20 inference steps. Table 2: The comparison of IOU between our LDRSP and other adversarial learning methods on the Weizmann horses dataset. FCN is adopted as our structured prediction model. Mean IOU % Global IOU % CHOPPS BID23 69.9 -DVN BID12 84.1 84.0 FCN baseline BID12 78.56 78.7 FCN (GAN) BID9 79.8 79.7 GAN BID9 79.7 79.5 FCN (LSGAN) BID28 80.3 79.9 LSGAN BID28 79.7 79.3 FCN (WGAN + GP) BID11 80.6 80.4 WGAN + GP BID11 80.5 80.4 FCN (EBGAN) BID35 80.9 80.7 EBGAN BID35 80.9 80.8 FCN (Our LDRSP) 81.3 81.3 LDRSP (Our) 85.5 85.4As commonly done in the literature, we report the mean image IOU as well as the IOU over the whole testing set on the Weizmann horses dataset in Table 2. A higher IOU score means a more accurate segmentation mask. It is clear that the proposed LDRSP outperforms other state-of-the-art methods on both metrics. It can be observed that using the discriminators of the LDRSP to refine the FCN improves the performance by 4.2 % on the Mean IOU metric and 4.1 % on the Global IOU metric. It indicates that our LDRSP learns discriminators that are able to estimate stronger horse shape priors than the FCN. As it can be seen, the refinement of discriminators learned by other adversarial learning methods, e.g., GAN BID9, WGAN+GP BID11, LSGAN BID28 decreases the performance. It's due to the fact that these methods learn discriminators as classifiers which assign almost the same scores to all the predicted samples. Thus, it's difficult to utilize these discriminators to improve the performance of FCNs. Our LDRSP outperforms the DVN BID12. It indicates that utilizing the proposed method to jointly learn energy-based models and structured prediction models advances over learning energy-based models alone. 
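The per-pixel oracle used above is the relaxed F1 score of Section 3.1 applied to each row of y. A small sketch of both relaxed oracles, with element-wise min/max standing in for intersection/union so that they remain defined for continuous outputs, is given below; it is an illustration, not the authors' code.

```python
import torch

def relaxed_iou(y, y_star, eps=1e-8):
    """Relaxed IOU for continuous y, y* in [0, 1]^d: element-wise min/max
    replace intersection/union."""
    inter = torch.minimum(y, y_star).sum(dim=-1)
    union = torch.maximum(y, y_star).sum(dim=-1)
    return inter / (union + eps)

def relaxed_f1(y, y_star, eps=1e-8):
    """Relaxed F1: 2|y ∩ y*| / (|y| + |y*|) with the same min relaxation."""
    inter = torch.minimum(y, y_star).sum(dim=-1)
    return 2.0 * inter / (y.sum(dim=-1) + y_star.sum(dim=-1) + eps)

def per_pixel_f1_map(y, y_star):
    """Oracle target for segmentation discriminators: one relaxed F1 value per
    pixel, computed over the C class channels of each row of y.
    y, y*: (batch, W, C) -> (batch, W)."""
    return relaxed_f1(y, y_star)
```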
The qualitative on the Weizmann horses dataset are shown in FIG0. It can be seen that FCN shows poor performance in segmenting thin parts like legs and generating single-connected segmentation masks. The proposed method utilizes discriminators to refine the predictions of the FCN by filling the missing part (e.g., FIG0, seventh and eighth row, far left images), generating legs to connect disconnected parts (e.g., FIG0, seventh and eighth row, first and second images from the right). We utilize the Labeled Faces on the Wild (LFW) dataset BID13 to evaluate our method on 3-class face segmentation. This dataset contains more than 13,000 images, which were first introduced for face recognition and later were annotated on a subset of 2,927 images for face segmentation. The annotations provide superpixel-level labels which consist of three classes: face, hair, and . Since our method generates pixel-level labels, we map pixel-level labels to superpixel-level labels by using the most frequent labels in a superpixel as the superpixel's label following BID34 BID12. We adopt the training, validation, and testing splits proposed in BID17 BID34 BID12. The network architecture and the data augmentation are the same as those utilized on the Weizmann horses dataset. BID12 92.44 FCN baseline BID12 95.36 FCN (GAN) BID9 95.53 GAN BID9 95.54 FCN (LSGAN) BID28 95.51 LSGAN BID28 95.52 FCN (WGAN + GP) BID11 95.59 WGAN + GP BID11 95.59 FCN (EBGAN) BID35 95.50 EBGAN BID35 95 We compare the proposed method with other methods on the LFW dataset in TAB1. It can be observed that the FCN baseline outperforms the DVN BID12 by 2.92 % on the superpixel accuracy metric, while our method outperforms the FCN baseline by 1.11 %. The performance improvement between our method and the DVN BID12 on the LFW dataset is more significant than the performance improvement on the Weizmann horses dataset. It indicates that utilizing the proposed method to jointly learn an energy-based model and a FCN greatly improves the performance of the energy-based model, when the output space is large. The qualitative of the LDRSP on the LFW dataset are shown in FIG1. As it can be observed, our method can generate high-quality hair and face segmentation masks that are close to the ground-truth labels. This paper proposes a novel learning framework, in which discriminative models are learned to refine structured prediction models. Discriminative models are trained as energy-based models to estimate scores that measure the quality of generated samples. Structured prediction models are trained to predict contrastive samples with maximum energy scores. Once models are learned, we perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models. We apply the proposed method to multi-label classification and image segmentation tasks. The experimental indicate that discriminative models learned by the proposed methods can effectively refine generative models. As the future work, we will explore different ways to generate extra training samples and apply our method to more challenging tasks.
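Since Equations 5, 6 and 9 are not reproduced in this text, the following is a rough reconstruction of one LDRSP training step under explicit assumptions: the discriminator is regressed with squared error towards 1 on ground-truth pairs and towards the oracle value on predicted and extra (inference/adversarial) samples, the structured prediction model combines a cross-entropy surrogate with a term pushing the discriminator's score towards 1, and the weighting `lam` between the two terms is invented for illustration. The exact functional forms and weights in the paper may differ.

```python
import torch
import torch.nn.functional as F

def ldrsp_training_step(x, y_star, G, D, g_opt, d_opt, oracle, extra_samples,
                        lam=1.0):
    """One LDRSP-style update (rough reconstruction, not the paper's exact
    losses). `oracle(y, y_star)` is the relaxed IOU/F1 value function,
    `extra_samples` holds inference/adversarial pairs (y_e, v_e) generated
    from earlier iterations, and `lam` is an assumed weighting.
    Assumes G outputs probabilities in [0, 1] and D returns one score per
    example."""
    # --- discriminator update: regress D towards oracle values ---
    y_pred = G(x).detach()
    v_pred = oracle(y_pred, y_star)
    d_loss = ((D(x, y_star) - 1.0) ** 2).mean() \
           + ((D(x, y_pred) - v_pred) ** 2).mean()
    for y_e, v_e in extra_samples:            # inference + adversarial samples
        d_loss = d_loss + ((D(x, y_e) - v_e) ** 2).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- structured prediction model update: surrogate loss + high D score ---
    y_pred = G(x)
    surrogate = F.binary_cross_entropy(y_pred, y_star)
    g_loss = surrogate + lam * ((D(x, y_pred) - 1.0) ** 2).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```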
We propose a novel adversarial learning framework for structured prediction, in which discriminative models can be used to refine structured prediction models at the inference stage.
1,736
scitldr
We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance. Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks. How can we characterize novelty when only normality information is given? Novelty detection is the mechanism to decide whether a data sample is an outlier with respect to the training data. This mechanism is especially useful in situations where a proportion of detection targets is inherently small. Examples are fraudulent transaction detection , intrusion detection , video surveillance (; b), medical diagnosis and equipment failure detection (; ;). Recently, deep autoencoders and their variants have shown outstanding performances in finding compact representations from complex data, and the reconstruction error has been chosen as a popular metric for detecting novelty . However, this approach has a limitation of measuring reconstruction quality only in an input space, which does not fully utilize hierarchical representations in hidden spaces identified by the deep autoencoder. In this paper, we propose RAPP, a new method of detecting novelty samples exploiting hidden activation values in addition to the input values and their autoencoder reconstruction values. While ordinary reconstruction-based methods carry out novelty detection by comparing differences between input data before the input layer and reconstructed data at the output layer, RAPP extends these comparisons to hidden spaces. We first collect a set of hidden activation values by feeding the original input to the autoencoder. Subsequently, we feed the autoencoder reconstructed input to the autoencoder to calculate another set of activation values in the hidden layers. This procedure does not need additional training of the autoencoder. In turn, we quantify the novelty of the input by aggregating these two sets of hidden activation values. To this end, we devise two metrics. The first metric measures the total amount of reconstruction errors in input and hidden spaces. The second metric normalizes the reconstruction errors before summing up. Note that RAPP falls back to the ordinary reconstruction-based method if we only aggregate input values before the input layer and the reconstructed values at the output layer. Also, we explain the motivations that facilitated the development of RAPP. We show that activation values in a hidden space obtained by feeding a reconstructed input to the autoencoder are equivalent to the corresponding reconstruction in that hidden space for the original input. We refer the latter quantity as a hidden reconstruction of the input. Note that this is a natural extension of the reconstruction to the hidden space. 
Unfortunately, we cannot directly compute the hidden reconstruction as in the computation of the ordinary reconstruction because the autoencoder does not impose any correspondence between encoding-decoding pairs of hidden layers during the training. Nevertheless, we show that it can be computed by feeding a reconstructed input to the autoencoder again. Consequently, RAPP incorporates hidden reconstruction errors as well as the ordinary reconstruction error in detecting novelty. With extensive experiments, we demonstrate using diverse datasets that our method effectively improves autoencoder-based novelty detection methods. In addition, we show by evaluating on popular benchmark datasets that RAPP outperforms competing methods recently developed. Our contributions are summarized as follows. • We propose a new novelty detection method by utilizing hidden activation values of an input and its autoencoder reconstruction, and provide aggregation functions for them to quantify novelty of the input. • We provide motivation that RAPP extends the reconstruction concept in the input space into the hidden spaces. Precisely, we show that hidden activation values of a reconstructed input are equivalent to the corresponding hidden reconstruction of the original input. • We demonstrate that RAPP improves autoencoder-based novelty detection methods in diverse datasets. Moreover, we validate that RAPP outperforms recent novelty detection methods on popular benchmark datasets. Various novelty detection methods with deep neural networks rely on the reconstruction error (; ;), because discriminative learning schemes are not suitable for highly class-imbalanced data which is common in practice. Unsupervised and semi-supervised learning approaches handle such imbalance by focusing on the characterization of normality and detecting samples out of the normality. Variational Autoencoders (VAE) were reported to outperform vanilla autoencoders for novelty detection based on reconstruction error . To carry out the novelty detection outlined in this approach, an autoencoder needs to be trained only with normal data. The autoencoder encodes the training data, which comprises of only normal data in this case, into a lower-dimensional space and decodes them to the input space. To test novelty, an input value is fed to the autoencoder to produce a reconstructed value and calculate the distance between the input and reconstructed values. This distance is the reconstruction error. A higher reconstruction error means that the input value cannot be encoded onto the lower-dimensional space that represents normal data. Therefore, the input value can be marked as a novelty if its reconstruction error exceeds a certain threshold. Instead of autoencoders, Generative Adversarial Networks (GAN) have been also suggested to model a distribution of normal data . Despite the same purpose of discovering a simpler, lower-dimensional representation, the training criterion for GAN is focusing on the quality of data generation rather than the reconstruction quality of training data. Recently, several pieces of research have combined autoencoders and adversarial learning to meet both criteria in dimension reduction and data generation (; ;). One limitation of these methods based on the ordinary reconstruction error is that they do not exploit all the information available along the projection pathway of deep autoencoders. We will explain how to leverage this information for novelty detection in the next section. 
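As a reference point for the discussion above, the ordinary reconstruction-error criterion can be written in a few lines. The sketch assumes a trained autoencoder callable A that maps a batch of row vectors to their reconstructions; the threshold is left to the practitioner.

```python
import numpy as np

def reconstruction_error_score(A, X):
    """Ordinary novelty score: squared L2 distance between each input row and
    its autoencoder reconstruction. Larger scores indicate likelier novelty."""
    X_hat = A(X)                               # reconstructions, same shape as X
    return np.sum((X - X_hat) ** 2, axis=1)

def flag_novel(A, X, threshold):
    """Mark samples whose reconstruction error exceeds a chosen threshold."""
    return reconstruction_error_score(A, X) > threshold
```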
From the viewpoint of the diversity and ratio of the normal data in novelty detection, there are two cases available. The first case is when a small fraction of classes are normal. This case has been studied in a one-class classification context, and usually evaluated by organizing training data into a collection of samples belonging to a small number of normal classes (; ; ;). The second case is when a majority of classes are assigned as normal (; ; ;). In this case, normal data is more diverse, and the training data is consist of samples of a relatively large number of normal classes: e.g., nine digits of MNIST. One setup does not dominate the other, but depending on applications, either can be more suitable than the other. Different methods may perform differently in both cases. In this paper, we evaluate RAPP and other competing methods with experiments in both setups. In this section, we describe the proposed novelty detection method RAPP based on an autoencoder. The main idea is to compare hidden activations of an input and its hidden reconstructions along the projection pathway of the autoencoder. To be precise, we project the input and its autoencoder reconstruction onto the hidden spaces to obtain pairs of activation values, and aggregate them to quantify the novelty of the input. For the aggregation, we present two metrics to measure the total amount of difference within each pair. An autoencoder A is a neural network consisting of an encoder g and a decoder f, responsible for dimension reduction and its inverse mapping to the original input space, respectively: i.e. A = f • g. For the purpose, training the autoencoder aims to minimize difference between its input x and output A(x). The space that the encoder g constitutes is called the latent space, and provides more concise representation for data than the input space. Due to this unsupervised representation learning property, the autoencoder has been widely used for novelty detection. Specifically, training an autoencoder on normal data samples, novelty of a test sample x is measured by the following reconstruction error: The test sample x is more likely to be novel as the error (x) becomes larger, because it means that x is farther from the manifold that the autoencoder describes. Although this approach has shown promising in novelty detection, the reconstruction error alone does not fully exploit information provided by a trained autoencoder especially when its architecture is deep. In other words, hierarchical information identified by the deep architecture is being ignored. This is rather unfortunate because hierarchical representation learning is one of the most successfully proven capabilities of deep neural networks. To fully leverage that capability, below we will describe the way to exploit hidden spaces to capture the difference between normal and novel samples in more detail. Let A = f • g be a trained autoencoder where g and f are an encoder and a decoder, and be the number of hidden layers of g. Namely, g = g • · · · • g 1. We define partial computation of g as follows: Let x be an input vector, andx be its reconstruction by A: i.e.,x = A(x). In addition to comparing x andx in the input space, as the ordinary approach does, we examine them in hidden spaces along a projection pathway of A. More precisely, feeding x andx into A, we obtain pairs (h i,ĥ i) of their hidden representations where Figure 1a illustrates the procedure of computing h i andĥ i. 
As a , novelty of the sample x is quantified by aggregating The overall procedure of RAPP is summarized in Algorithm 1. To clearly state the required variables to construct H, we write the algorithm with the for loop in Lines 3-5, but in practice, all of them can be computed by feed-forwarding one time each of x andx to g. Note that RAPP is indeed a generalization of the ordinary reconstruction method with defining g 0 as the identity function and s ord as follows. The quantity that RAPP computes, the hidden activation of the reconstruction input, is equivalent to the hidden reconstruction of the input. Sincef = f, computingĥ 2 =ĥ 2 does not require explicitly evaluatingf i but only g i and f =f. Algorithm 1: RAPP to compute a novelty score. Input: Sample x, trained autoencoder A = f • g, the number of layers, and aggregation s. Output: Novelty score S. In this paper, we provide two metrics s SAP and s N AP which more extensively utilize H than s ord. Those are especially suited when no prior knowledge exists for the selection of layers to derive a novelty metric, which commonly happens when modeling with deep neural networks. Note that, however, more elaborate metrics can be designed if we have knowledge on or can characterize the spaces. This is the most straightforward metric that one can define on H. For a data sample x, SAP is defined by summing the square of Euclidean distances for all pairs in H: Although SAP is intuitive, it does not consider properties of hidden spaces; distance distributions of pairs in H may be different depending on the individual hidden spaces. For instance, the magnitude of distances can depend on layers, or there may exist correlated neurons even across layers which are unintentionally emphasized in SAP. To capture clearer patterns, we propose to normalize the distances via two steps: orthogonalization and scaling.; given a training set X, let D be a matrix whose i-th row corresponds to d(x i) for x i ∈ X, andD be the column-wise centered matrix of D. For the normalization, we computeD = U ΣV, SVD ofD, to obtain its singular values Σ and right singular vectors V. For a given data sample x, we define s N AP as follows: where µ X is the column-wise mean of D, and d(x) is expressed as a column vector. Note that s N AP is equal to the Mahalanobis distance with the covariance matrix V ΣΣV. Although SVD computation time is quadratic in the number of columns of the target matrix, we observe that its impact is relatively small in practical setups. See Appendix A for more details. One natural question in using the ordinary reconstruction method is as follows: why do we investigate only the input space? Or, why do we not use information in hidden spaces? While the reconstruction error in the input space is extensively employed, any similar concept does not exist in hidden spaces. One reason is that the corresponding encoding and decoding layers are not guaranteed to express the same space: e.g. permuted dimensions. This is because the autoencoder objective does not have any term involving activations from intermediate hidden layers. As a , f:i+1 (g(x)) cannot be considered a reconstruction of g:i (x), except for i = 0 with which they become the ordinary reconstruction of and input to an autoencoder, respectively. Nevertheless, in this section, we will show that there is an indirect way to compute the hidden reconstruction. Precisely, we will show thatĥ i (x) = g:i (A(x)) is indeed equivalent to a reconstruction of g:i (x). 
The overall mechanism is depicted in Figure 1b. Let A = f • g be a trained autoencoder, and M 0 = {A(x): x ∈ R n } be the low dimensional manifold that A describes : i.e., ∀x ∈ M 0, x = A(x). Defining M i = {g :i (x): x ∈ M 0 }, which is the low dimensional image of M 0 defined by g:i, g and f restricted on M 0 and M, respectively, are inverse functions of each other. Quantifying Hidden Reconstruction We first assume that there exists a decoderf The second condition makesf:i+1 a proper decoder corresponding to g i+1:, and thus,f enables to define the i-th hidden reconstructionĥ i (x) as follows: Finally, we conclude thatĥ i (x) is equal toĥ i (x) for x ∈ M 0 as follows. where we do not needf i for computingĥ i (x), but only g i and f. Note that for x ∈ M 0 already on the manifold, its i-th hidden reconstructionĥ i (x) becomes equal to its corresponding hidden input. For x / ∈ M 0, its hidden reconstructionĥ i (x) will differ from the input h i (x). Existence off Since x = A(x) for x ∈ M 0, g i and f i are one-to-one functions from M i−1 and M i, respectively. Let us definef i = g for M i; then it also holdsf = g −1 for M. This implies x = (f • g)(x) for x ∈ M 0, and consequently,f = f on M. This definition off i satisfies the two conditions above, and as discussed, we are able to compute hidden reconstructions given an input x, through computing the i-th hidden activation of the reconstructed input: i.e.ĥ i (x) = (Existence off with Neural Networks Given g i, if the symmetric architecture forf i is used, we may not be able to learnf i = g −1 i . Neural networks are, however, highly flexible frameworks in which we can deal with models of arbitrary function forms by adjusting network architecture. This property enables us to design a layer capable of representingf i . For instance, even iff i is too complicated to be represented with a single fully connected layer, we can still approximatef i by stacking multiple layers. Hence, given g i,f i can be represented by neural networks. In this section, we evaluate RAPP in comparison to existing methods. To this end, we tested the methods on several benchmarks and diverse datasets collected from Kaggle and the UCI repository which are suitable for evaluating novelty detection methods. The datasets from Kaggle and the UCI repository are chosen from problem sets of anomaly detection and multi-class classification, summarized in Table 1 . We note that MI-F and MI-V share the same feature matrix, but are considered to be different datasets because their labels normal and abnormal are assigned by different columns: i.e. machine completed and pass visual inspection, respectively. We use these datasets to compare RAPP with standard autoencoder-based methods described in Section 5.2. To compare RAPP with novelty detection methods in recent literatures, we also use popular benchmark datasets for evaluating deep learning techniques: MNIST and F-MNIST . For theses datasets, we do not take pre-split training and test sets, but instead merge them for post-processing. Novelty detection detects novel patterns by focusing on deviations from model-learned normal patterns. Thus, training sets contain only normal samples and test sets contain both normal and anomaly samples in our evaluation setups. Precisely, if a dataset contains an anomaly label, we assign all samples with that label to the test set for detection. If a dataset does not have any anomaly labels, we consider the following two setups. 
• Multimodal Normality: A single class is chosen to be the novelty class and the remaining classes are assigned as the normal class. This setup is repeated to produce sub-datasets with all possible novelty assignments. For instance, MNIST in a set of datasets with 10 different novelty classes. • Unimodal Normality: In contrast to the multimodal normality setup, we take one class for normality, and the others for novelty. For instance, MNIST in a set of datasets with 10 different normal classes. We applied these two setups to STL, OTTO, SNSR, MNIST, and F-MNIST datasets. We compare RAPP and the other methods using Area Under Receiver Operating Characteristic (AUROC). Note that we do not employ thresholding-based metrics such as F 1 score because access to abnormal samples is only allowed in testing time. Hence, we focus on the separability of models for novelty with AUROC. For the datasets in Table 1, we compare the effectiveness of the reconstruction error, SAP and NAP for three models: Autoencoder (AE), Variational Autoencoder (VAE), Adversarial Autoencoder (AAE) . For the benchmark datasets, recent approaches including OCNN , GPND , DSVDD and GT are available. To obtain the performances of the existing approaches, we downloaded their codes and applied against our problem setups. Given novelty classes, we create the test sets by randomly selecting samples while maintaining novelty ratios to 35% for the multimodal and 50% for the unimodal normality setups, respectively. Note that the expectation value of AUROC is invariant to the novelty ratio. We use symmetric architecture with fully-connected layers for the three base models, AE, VAE, and AAE. Each encoder and decoder has 10 layers with different bottleneck size. For the Kaggle and UCI datasets, we carry out PCA for each dataset first. The minimum number of principal components that explain at least 90% of the variance is selected as the bottleneck size of the autoencoders. We set bottleneck size to 20 for benchmark datasets. Leaky-ReLU (a) activation and batch normalization layers are appended to all layers except the last layer. We train AE, VAE and AAE with Adam optimizer , and select the model with the lowest validation loss as the best model. For training stability of VAE, 10 Monte Carlo samples were averaged in the reparamterization trick to obtain reconstruction from the decoder. In the calculation of SAP and NAP, we excluded reconstructions in the input space for MNIST and F-MNIST. Each AUROC score is obtained by averaging AUROC scores from five trials to reduce the random errors in training neural networks. More are provided in Appendix: standard deviations in Appendix B, comparison to baselines other than autoencoder variants C, and the effect of varying hidden layers involved in RAPP computation in Appendix D. Table 2 summarizes the of our performance evaluation; the best score for each model is in bold, and the best score for each dataset with an underline. Since STL, OTTO, SNSR, MNIST, and F-MNIST do not have anomaly labels, their scores are averaged over all possible anomaly class assignments. For instance, the AUROC value for OTTO in the unimodal normality setup is the average of 9 AUROC values with different anomaly class assignments. In Table 2, RAPP shows the highest AUROC scores for most of the cases. If we examine the performance for each dataset, RAPP achieves the best for 13 cases out of 15 (see the underlines). Table 3 summarizes the comparison of RAPP to recent novelty detection methods. 
As in Table 2, AUROC values are calculated by averaging from 10 cases with different anomaly class assignments for both datasets. Except for the unimodal F-MNIST setup, NAP outperforms all competing methods regardless of base model choice. Notably, NAP combined with VAE always shows the best performance, which is even higher than that of GT relying on image-specific data transformations for all cases. In this paper, we propose a novelty detection method which utilizes hidden reconstructions along a projection pathway of deep autoencoders. To this end, we extend the concept of reconstruction in the input space to hidden spaces found by an autoencoder and present a tractable way to compute the hidden reconstructions, which requires neither modifying nor retraining the autoencoder. Our experimental show that the proposed method outperforms other competing methods in terms of AUROC for diverse datasets including popular benchmarks. A SVD COMPUTATION TIME We compare running times of training an autoencoder and computing SVD for NAP. We choose two packages for the SVD computation: Pytorch SVD and fbpca provided in https://fbpca. readthedocs.io/en/latest/. Since the time complexity of SVD is linear in the number of data samples 1, we mainly focus on the performance of SVD with varying the number of columns of the input matrix that SVD is applied. To obtain variable sizes of the columns, we vary the depth and bottleneck size of autoencoders. The is shown below. Notably, Pytorch SVD utilizing GPU is at least 47x faster than training neural networks. Even, fbpca running only on CPU achieves at least 2.4x speedup. The detailed setups to obtain the matrices for the experiment are given in the 1 20 100 20 40 20 90 2 18 100 18 40 18 90 3 16 100 16 40 16 90 4 14 100 14 40 14 90 5 12 100 12 40 12 90 6 10 100 10 40 10 90 7 8 100 8 40 8 90 8 6 100 6 40 6 90 9 4 100 4 40 4 90 10 2 100 2 40 2 90 11 2 80 2 30 2 70 12 2 60 2 20 2 50 13 2 40 2 10 2 30 14 2 20 2 10 B STANDARD DEVIATIONS OF EXPERIMENTAL We provide the standard deviations of the in We investigate the performance of NAP while increasing the number of hidden layers involved in the NAP computation. Specifically, we consider two ways for the increment: 1) adding hidden layers one by one from the input layer (forward addition), and 2) adding hidden layers one by one from the bottleneck layer (backward addition). Experimental on two datasets are shown below. For most cases, more hidden layers tend to in higher performance. The values are obtained from one trial, not averaged over 5 trials as done in Section 5.
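To tie the preceding sections together, here is a compact sketch of the RAPP scores (Algorithm 1 with the SAP and NAP aggregations) for an encoder given as a list of layer callables g_1, ..., g_L and a decoder callable. It is a simplified illustration rather than the authors' implementation; it collects pairs only for the hidden layers, whereas the input-space pair (x, x̂) can be included by treating the identity as g_0, which the experiments above do for all datasets except MNIST and F-MNIST.

```python
import numpy as np

def hidden_activations(layers, X):
    """Return [g_{:1}(X), ..., g_{:L}(X)] for an encoder given as a list of
    callables (each maps a batch to the next hidden space)."""
    acts, H = [], X
    for g in layers:
        H = g(H)
        acts.append(H)
    return acts

def rapp_pairs(layers, decoder, X):
    """Pairs (h_i, h_hat_i): hidden activations of x and of A(x) = f(g(x))."""
    H = hidden_activations(layers, X)
    X_hat = decoder(H[-1])
    H_hat = hidden_activations(layers, X_hat)
    return H, H_hat

def sap_score(layers, decoder, X):
    """SAP: sum of squared Euclidean distances over all hidden pairs."""
    H, H_hat = rapp_pairs(layers, decoder, X)
    return sum(np.sum((h - hh) ** 2, axis=1) for h, hh in zip(H, H_hat))

def nap_score(layers, decoder, X_train, X_test):
    """NAP: normalize concatenated per-layer differences with the SVD of the
    column-centered training difference matrix, then take the norm."""
    def diffs(X):
        H, H_hat = rapp_pairs(layers, decoder, X)
        return np.concatenate([h - hh for h, hh in zip(H, H_hat)], axis=1)
    D_train = diffs(X_train)
    mu = D_train.mean(axis=0, keepdims=True)
    _, S, Vt = np.linalg.svd(D_train - mu, full_matrices=False)
    D_test = (diffs(X_test) - mu) @ Vt.T / (S + 1e-8)   # whitened coordinates
    return np.linalg.norm(D_test, axis=1)
```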
A new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder.
1,737
scitldr
Learning preferences of users over plan traces can be a challenging task given a large number of features and limited queries that we can ask a single user. Additionally, the preference function itself can be quite convoluted and non-linear. Our approach uses feature-directed active learning to gather the necessary information about plan trace preferences. This data is used to train a simple feedforward neural network to learn preferences over the sequential data. We evaluate the impact of active learning on the number of traces that are needed to train a model that is accurate and interpretable. This evaluation is done by comparing the aforementioned feedforward network to a more complex neural network model that uses LSTMs and is trained with a larger dataset without active learning. When we have a human-in-the-loop during planning, learning that person's preferences over plan traces becomes an important problem. These preferences can be used to choose a plan from amongst a set of plans that are comparable by the planner's cost metrics. Such a plan would naturally be more desired by the human. The user may not like to constantly dictate their preferences, and may not always be in the loop during execution. Thus, it is important for the user's preference function to be learned well, and for the user to be able to verify them. For verification, there ought to be a way to interpret how the model's decisions were made, and verify how faithful the learned model is to the user's preferences. A user's preferences function may be quite complex with dependencies over different subsets of features. The utility of some features maybe non-linear as well. Such a preference function may require a fair amount of information to approximate. We cannot expect a single user to give feedback over a large set of traces to get the relevant information. So Active learning, with a sufficiently expressive user interface for feedback, is essential to minimize queries and redundant information. In this work, our objective was to model the user's preferences over plan traces. There do exist techniques that efficiently represent and reason about preference relationships. CP-nets BID1 and Generalized additive independence BID2 ) models are typically used to represent preferences over sets of variables without consideration to the order in which they appear. While these models can be adapted to handle sequential data, they are not intended for it. LTL rules, however, can capture trajectory preferences very well and are used in PDDL 3.0 BID3, and LPP BID0. However, it can be very hard for a user to express their preferences in this form. We discuss existing approaches in more detail and the differences with respect to our work under the related work section. In our approach to learning preferences, we want to efficiently identify the relevant features and the degree to which they affect the preference score of a plan. We thus employ a feature-directed active learning approach that specifically picks plan traces that are most informative about the feature's effects on preference. After active learning, we encode a plan trace in terms of the relevant features it contains. We gather a set of training data from active learning, along with the user's preference score to help train a simple Neural Network (NN) that we call the FeatureNN model. We use a Neural Network as they can approximate complex functions to a good degree. 
Our approach is in one way, related to Generalized Additive Independence in that we try to learn a utility function over pertinent features, but we do not explicitly define or restrict the form of any utility functions. Rather a simple one hidden-layer feed-forward neural network learns the functions, dependencies, and relative weights over the relevant features. The FeatureNN then predicts a preference score for each plan reflecting the user's preferences. We also compare the performance of the FeatureNN to another SequenceNN model that processes sequential data using an LSTM BID6 module. The SequenceNN is not trained with data from active learning, but with a larger dataset of traces with ratings. This is to evaluate how efficient our active learning approach is with respect to the number of traces. Specifically, we compare the number of traces required by SequenceNN and FeatureNN for the same accuracy and interpretability. Neural networks, unlike linear functions, are not as easy to interpret. Even simple NN with a single hidden layer can be a challenge. We help the user interpret the decisions of the neural network by showing how the preference score is affected by removing different features of a plan trace. This is similar to using Saliency Maps BID7 in images to explain what parts of the image contributed to the classification. In this way, we can explain to the user what plan features contributed to the preference value and by how much. The difference in preference score should correspond to the user's expectations as per their preference model. The more similar the effect of changes are to the user's preferences, the more interpretable the NN model is to the user as it approximates well their own preference function. Such a method of explaining a decision(score) is also related to explaining using counterfactuals BID5. Here the counterfactual is the plan trace without a specific feature. Additionally, when the specific features used to compute preferences comes from the user's feedback (during active learning), this interpretability is obviously improved. We present our work by first defining the problem before delving into the methodology of our approach. In the Methodology section, we discuss the domain used, the user preference model, and the feature-directed active learning process. We also discuss the two neural network models used to learn the preference model, viz. the FeatureNN and the SequenceNN models. Then we present our experimental in which we compare the two models with respect to their accuracy in predicting the preference score, as well as interpretability. Lastly, we discuss the and possible extensions to the work. Given a Domain D, with a set of features F, and a planner P, the problem is to learn the preference function F p that captures the user's preference model U p and scores traces accordingly. The types of preferences we learn in this work are a function of the feature set F of the domain and not hidden variables or action costs. The user U is available to rate a plan trace on its preference, and annotate what features contributed positively or negatively to the rating. Features can be categorical or cardinal(count), and involve sequences. Plans are rated between with higher values indicating a greater preference. 
If there are no features in the plan that contributed either positively or negatively to the preference, then the preference score is 0.5. An equivalent problem formulation assumes that instead of a domain D and planner P, we are given as input a large enough set of plan traces B (a backlog of traces) over a relevant set of initial and goal states. We assume that this set of plans covers the space of possible preferences that the user might have. For our experiments, we chose to use a gridworld with features that any human can relate to. We chose gridworld as it is easy to quickly generate many diverse plans that cover the range of features. Given the domain and a task, we go through r rounds of active learning. Each round comprises r_t traces. Both r and r_t are hyperparameters. For our experiments, we set the number of rounds at r = 3. After acquiring the data, we train the NN model and test it on a hidden set of traces. We now go over the different parts of our methodology in detail. The objective in our gridworld domain, which we call Journey-World, is to travel from home to the campsite shown in the grid in FIG1. Each step of a plan corresponds to a cell of the grid. While some cells are empty, there are a lot of cells that have features. These features can be eateries (like a coffee shop or restaurant), landmarks (a natural history museum), or activities (visiting the library, watching a movie). The user can move through any of these states before reaching the campsite. A majority of the cells also contain landscapes such as mountains, a lake, the sea, or industries. The user is not allowed to move through landscape cells. Moving through cells adjacent to landscape cells corresponds to seeing the landscape along the journey. For example, if a step in the plan goes through a cell which is adjacent to a lake, this corresponds to the plan going through a state where the user passed by a lake. All non-landscape features (like coffee, donut) are binary features in a plan trace, i.e., the user has either visited one or not. On the other hand, the landscape features are cardinal, i.e., we count the number of such landscape features in the plan trace. We assume that the count of cardinal features can make a difference in the preference score. In total, there are 13 features in Journey-World. We designed Journey-World with simple and commonly understood features to make it easier for subsequent human studies. We assume people will have preferences over these features. For our current experiments, we chose to use a completely defined user preference model to represent the user. This made it easier for us to test and debug our methodology. In future extensions of this work, we will include evaluations with human trials. The user's preference model is defined as follows: DISPLAYFORM0 where n(x) = min(x, 2), C ∈ {0, 1}, D ∈ {0, 1}, and L, I ∈ ℕ. C is a binary variable that is 1 when the plan trace has a coffee. D is also a binary variable and represents a donut. CD represents a binary variable set to true when the plan trace has both a coffee and a donut. When CD is true, C and D are false, and this dependency affects the preference score computation as shown in the preceding equation. L and I represent the number of lake and industry regions, respectively. These are cardinal features, and the preference of the plan increases based on their count, up to 2, and then stops increasing.
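To make the synthetic user concrete, a minimal sketch is given below. Since the preference equation itself is not reproduced in this excerpt, the numeric weights and the clamping to [0, 1] are illustrative assumptions; only the structure, i.e., the CD dependency, the saturating count n(x) = min(x, 2), and the neutral score of 0.5, follows the description above.

```python
def n(x):
    return min(x, 2)

def synthetic_user_score(plan):
    """plan: dict with binary 'coffee'/'donut' flags and 'lake'/'industry' counts."""
    cd = 1 if (plan["coffee"] and plan["donut"]) else 0        # coffee-and-donut dependency
    c, d = (0, 0) if cd else (plan["coffee"], plan["donut"])   # CD overrides C and D
    score = 0.5                                                 # neutral baseline score
    score += 0.2 * c + 0.1 * d + 0.4 * cd                       # assumed weights
    score += 0.15 * n(plan["lake"]) + 0.15 * n(plan["industry"])  # saturating cardinal features
    return max(0.0, min(1.0, score))                            # assumed clamp to the rating range

# Example: a trace with both coffee and a donut that passes three lakes.
print(synthetic_user_score({"coffee": 1, "donut": 1, "lake": 3, "industry": 0}))  # 1.0 after clamping
```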
The preference function itself, while simple to understand, is non-linear and hidden underneath a large hypothesis space of functions that could be learned in the domain, over a larger set of features (13 in total for the Journey-World domain). In our experiments, we programmed a separate module to rate and annotate plan traces based on an input preference function like the one described previously. This synthetic human is what rates and annotates in the active learning process that we will describe shortly. Using a synthetic human helped speed up the testing and debugging process, and gives us a baseline noiseless scenario to test against. The current user interface (FIG1 (a)) for Journey-World displays the entire grid. Icons are used to show the features present in cells. The plans for each round of active learning are then shown one at a time. The plan steps are visualized as a line going from the home to the campsite. The user has to input a rating to indicate their preference for the plan based on the features that are visited. They can also annotate features of the plan that they like (green) or dislike (red), as shown in FIG1 (b). The user can click on the Next Plan button to then move on to the next plan. The interface automatically switches to the next round of active learning when the current round's plans have all been rated. In our active learning process, we go through multiple (r) rounds of feedback. Each successive round utilizes the knowledge from previous rounds to select the most informative queries. In the first round, the user is shown the most diverse set of plan trajectories that were generated for the domain. We choose diverse plans because in the first round we do not have any knowledge of what features might affect the user's preference, and hence we want to cover the feature space as much as possible in r_t = 30 traces. In order to get the required diverse plan set, we first generate a large number of plan traces (10,000 plans) over a user-specified set of initial and goal states that we refer to as the backlog of plans, B. In our current experiments we only have one initial and one goal state. We are easily able to generate such a large backlog of plans because it is a type of gridworld domain. We did not want the computational cost of diverse plan generation to hamper the work. This is a computational cost that needs to be considered when working with other domains. The plans generated cover the entire feature space of the Journey-World domain. We then select the 30 most diverse plans within the set of backlog plans for the first round. We will now discuss how this is determined. The diversity score between any two plans p_a and p_b is denoted by d(p_a, p_b). The diversity is based on the sum of feature count differences for the features f ∈ F that are present in p_a and p_b. For a particular feature f, we compute the feature count difference f_Δ. Rather than use the raw difference in count per feature, we use a geometric series sum, as computed in Equation 1. The first count in the difference contributes 1, the second count contributes 0.5, the third contributes 0.25, and so forth. So the count difference for a single feature contributes at most 2 to the diversity computation. This prevents any single feature from dominating the diversity computation. The diversity between two traces is computed as the average f_EΔ (Equation 2) over all the features in the domain. Finally, we calculate the backlog-diversity d_B for a plan p using Equation 3.
The backlog-diversity is the average pairwise diversity over every other plan in the backlog. Using this diversity score, we select the top r_t plans (r_t = 30 in our experiments) for the user feedback. Concretely, f_EΔ = Σ_{i=1}^{f_Δ} (1/2)^{i-1} (Equation 1), d(p_a, p_b) = (1/|F|) Σ_{f∈F} f_EΔ (Equation 2), and d_B(p) = (1/(|B|-1)) Σ_{p'∈B, p'≠p} d(p, p') (Equation 3). After the first round of diverse plans, we then make use of the ratings and annotations provided by the user in the first round to generate the most informative plan traces for the subsequent rounds. Given our acquired knowledge of relevant features from the previous round, our objective now is to figure out the effects of and dependencies between these features. We also want to select traces for the next round that are more likely to be rated either significantly higher or lower. This region of data is typically harder to get, as we expect most data to be closer to the average. In order to estimate which plan traces would be either most preferred or least preferred, we use a fast, weak predictor that predicts the rating of any arbitrary plan p given prior knowledge. We need the predictor to be fast as we have to give traces or queries for the next round in a short amount of time. The weak predictor estimates a value for each feature based on the prior annotated data. It can then estimate the score of an unrated plan as just the sum over the features present in it. The value of each feature is scored using a quick and simple method. First, for each scored plan trace p with rating r_p, the feature f is given a score f_p_score for that plan by Equation 4. Then the feature's score, f_score, is computed as the average f_p_score over all plans that the feature appears in. Then, to predict the score for an unrated plan, the weak predictor assigns a score predict(p), which is the sum of f_score for all features present in the plan. DISPLAYFORM1 In addition to wanting plans that are likely to be rated much higher or much lower, we also want the next round's traces to have two more properties. We still want to include some diversity in the plan traces with respect to the overall backlog of traces, to uncover features that we might have missed in the first round. Additionally, we want to maintain some similarity in traces between the rounds. We think that the similarity between plans reduces the cognitive load on the user, as they need not parse wholly different traces. Given a plan p, we denote its similarity to the already scored traces as S(p). Finally, we assign a combined weighted score p_c to all the plans in the backlog, given by Equation 5. The top r_t = 30 plans are then picked for the next round, and in this way the active learning proceeds for r rounds. For our experiments, r is 3 rounds. DISPLAYFORM2 Preference Learning using Neural Networks. For learning the preference function we used two models, the SequenceNN model and the FeatureNN model. The SequenceNN model uses an LSTM module. We considered an LSTM-based model as LSTMs are well suited to learning patterns over sequential data. The input plan trace was encoded such that each step was an encoding over the features of the cell visited at that step. There are 13 features in total, and so each step is a 13-dimensional vector. We do not provide or restrict the input to only the features that the user annotated during active learning for the SequenceNN model. We wanted to test how easily the model could still figure out the relevant features and learn the preference function well. The training data for the SequenceNN model was a set of rated plan traces.
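Before turning to the two network models in detail, a compact sketch of the trace-selection machinery just described is given below, assuming plans are encoded as dictionaries of feature counts. The per-plan feature score of Equation 4 and the weights in the combined score of Equation 5 are not reproduced in this excerpt, so the simple rating average and the w_* weights used here are placeholder assumptions.

```python
import numpy as np

def feature_diversity(count_a, count_b):
    """Geometric-series contribution of one feature's count difference (Equation 1)."""
    delta = abs(count_a - count_b)
    return sum(0.5 ** i for i in range(delta))          # 1 + 0.5 + 0.25 + ..., at most 2

def trace_diversity(p_a, p_b, features):
    """Average per-feature diversity between two traces (Equation 2)."""
    return np.mean([feature_diversity(p_a[f], p_b[f]) for f in features])

def backlog_diversity(p, backlog, features):
    """Average pairwise diversity of a plan against every other plan in the backlog (Equation 3)."""
    return np.mean([trace_diversity(p, q, features) for q in backlog if q is not p])

def feature_scores(rated_plans, relevant_features):
    """Weak predictor's per-feature values: here, the average rating of plans containing the
    feature (a stand-in for Equation 4, which is not reproduced in this excerpt)."""
    return {f: np.mean([r for plan, r in rated_plans if plan.get(f, 0) > 0] or [0.5])
            for f in relevant_features}

def weak_predict(p, f_score):
    """Score of an unrated plan: sum of f_score over the features present in it."""
    return sum(score for f, score in f_score.items() if p.get(f, 0) > 0)

def combined_score(p, backlog, features, f_score, similarity, w_pred=1.0, w_div=0.5, w_sim=0.5):
    """Selection score p_c; Equation 5's weights are unspecified here, so these are assumptions."""
    extremity = abs(weak_predict(p, f_score) - 0.5)      # favor plans likely rated very high or low
    return (w_pred * extremity
            + w_div * backlog_diversity(p, backlog, features)
            + w_sim * similarity(p))
```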
For the SequenceNN model, we varied the number of training traces given from as small as 30 to 12,000, in increasingly larger step sizes. A plan trace would be an N × 13 array, where N is the plan length. We trained the model for 10 epochs with a batch size of 8 and a learning rate of 0.01, using stochastic gradient descent. The SequenceNN module in our model has 16 memory cells. After processing the plan trace through the LSTM module, we concatenate the output vector and memory nodes of the LSTM module and pass them through a single fully connected hidden layer, followed by the output layer, which outputs the preference score between 0 and 1. The model summary is in FIG2. The idea is that the LSTM module output and memory, at the end of processing the sequence, will have the necessary information related to the sequence for predicting the score. For the FeatureNN model, the input was an encoding of the plan trace that comprised only the features the user annotated as relevant during active learning. The entire plan trace was summarized into one encoded vector. For example, in the user preference model in our experiments, only 5 features matter to the user. We determine what these features are through active learning, and then define our FeatureNN model accordingly to take a 5-dimensional vector as input. For example, if a plan trace had a step with coffee and steps that passed by 3 lakes, then the values at the corresponding indices are set to 1 and 3, respectively. Note that since coffee is a binary feature, even if two coffee steps were in the plan, its value in the encoding is only either 1 or 0. As for the model description of FeatureNN, it is a simple fully connected neural net with one hidden layer of 4 dimensions and one output layer. The model summary is in FIG3. Note that 4 dimensions or nodes for the hidden layer is not a magic number, and would need to be larger if there were more features. We reduced the number of dimensions for the hidden layer until the results were measurably worse. To train the FeatureNN model we vary the number of traces per round r_t from 5 to 50 traces for r = 3 rounds. Since the dataset size is very small (the smallest is 15 traces), we create 200 duplicates of the data points uniformly and train for 10 epochs. We also shuffle the data and train with a batch size of 8 and a learning rate of 0.01, using stochastic gradient descent. A user can interpret a neural network's behavior by analyzing what features are salient to its decision, and by how much. This can be analyzed by adding or removing features and seeing the resultant effect on the predicted score. When done over a set of different traces, the user can intuit what mattered and how much. With this in mind, we compute a measure of interpretability we call the Attribution Error (AE). The AE for a feature f of a plan p is computed as follows: DISPLAYFORM0 where U_p is the preference function of the user (the true model of preferences) and F_p is the learned preference function. AE is simply the difference in the effect of the feature on the true and learned preference scores. The overall AE for each test plan p, AE(p), is the average of AE(p, f) over all f present in p. We compute the AE score for the test set as the average over only the top 10% of AE(p) errors. We do this because neural networks can sometimes have enough capacity to memorize many cases and increase accuracy. So the network can predict the correct preference score of both the original trace and the modified trace (with a dropped feature) from its memory of very similar traces.
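A minimal PyTorch sketch of the FeatureNN just described is shown below. The layer sizes and optimizer settings follow the reported hyperparameters; the ReLU activation, the sigmoid output head (to keep predictions in the 0-1 rating range), and the squared-error loss are assumptions not stated above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class FeatureNN(nn.Module):
    def __init__(self, n_relevant_features=5, hidden_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_relevant_features, hidden_dim),  # single 4-unit hidden layer
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),                                # assumed, so outputs lie in [0, 1]
        )

    def forward(self, encoded_plans):                    # (batch, n_relevant_features)
        return self.net(encoded_plans).squeeze(-1)

def train(model, plans, ratings, epochs=10, lr=0.01, batch_size=8):
    """Train on the (already duplicated) active-learning data with SGD, as reported above."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                               # squared-error loss is an assumption
    loader = DataLoader(TensorDataset(plans, ratings), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```

The SequenceNN analogue would replace the fixed-length feature encoder with a 16-cell LSTM over the N × 13 step encodings, with the same output head.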
In such a memorization regime, it would then seem as if the network's preference function predicts the same way as the ground-truth preference function, but it may be using unrelated features. Therefore, it is in the failure cases that we get a true measure of its generalization and of how faithful it is to the true model of preferences. That is why we use the average over the top 10% of AE(p) errors. These failure cases could correspond to cases where a rare or unseen pattern of features is input, and thus not memorized. When varying the number of training input traces given to the SequenceNN model, we observed that the accuracy improved (error decreased) as expected (Figure 4(a)). Surprisingly, even with 30 traces, it was able to predict with an error of 2.5% over the test set of 1000 unseen traces. We attribute this to the fact that there are enough simple correlations with other features that can predict the score well for the preference function that we tested with. This is evidenced by the fact that the interpretability, as measured by the Attribution Error, is very poor for 30 traces (Figure 5(a)); the attribution error was greater than 0.3, and the value range of AE is [0, 1]. Additionally, we give the most diverse N traces for each training set size to the SequenceNN model. Diverse traces are more likely to contain relevant information. The interpretability of the LSTM model was not impressive. The attribution error did decrease over the range of training set sizes, but only to as low as 0.09, as shown in Figure 5(a). Given that the preference scores are between 0 and 1, this corresponds to a 9 percent error after 7500 rated traces. Needless to say, it is unreasonable to expect a single human to rate 7500 traces. We also tried varying the size of the SequenceNN model from 16 to 64 dimensions. This improved accuracy by a minuscule amount (on the order of 1e-4), and interpretability did not improve. The performance of the FeatureNN model was significantly better, both in accuracy (lower error) and in interpretability, than the SequenceNN model, as seen in Figure 5. This should come as no surprise, since we restrict the input space based on user feedback (knowledge) on relevant features. This also restricts the hypothesis space of functions that the simple feed-forward network could search over. We think this makes it more likely that the NN will find a good and faithful approximation of the user's preference function. What is interesting to note is that the interpretability, as measured by the AE, drops to as little as 2.5% with as few as 60 traces (20 traces per round over 3 rounds) for the FeatureNN model, as shown in Figure 5b. It drops below 2% with 150 traces. The FeatureNN model with 90 traces is as accurate as the SequenceNN with 7500 traces in our problem, with 8% less Attribution Error (more interpretable). While we expected FeatureNN to be better, we did not expect such a large difference in efficiency. Even with as few as 13 features and a relatively uncomplicated preference function, a sufficiently powerful SequenceNN model did not find the underlying preference function. Instead, it found correlations that predicted the preference score to a very high level of accuracy. This, unfortunately, makes the model suffer in interpretability. As the number of features increases, the hypothesis space of an NN will increase significantly. This makes it much more likely for any NN to find spurious correlations, and to suffer in interpretability. So active learning and using a simpler NN become very important for learning preferences in plan traces.
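Since the AE equation itself is not reproduced in this excerpt, the sketch below spells out the reading assumed throughout: the effect of a feature is the change in score when that feature is dropped from the trace, and AE compares this effect under the true and the learned preference functions. The drop_feature helper is a caller-supplied, illustrative function.

```python
def attribution_error(plan, user_score, model_score, drop_feature):
    """AE(p): average over features present in p of |true effect - learned effect|,
    where the effect of a feature is score(p) - score(p with that feature removed)."""
    errors = []
    for f in [f for f, count in plan.items() if count > 0]:
        reduced = drop_feature(plan, f)
        true_effect = user_score(plan) - user_score(reduced)
        learned_effect = model_score(plan) - model_score(reduced)
        errors.append(abs(true_effect - learned_effect))
    return sum(errors) / len(errors) if errors else 0.0

def test_set_ae(plans, user_score, model_score, drop_feature, top_fraction=0.10):
    """Test-set AE: the mean over only the worst 10% of per-plan errors, as described above."""
    per_plan = sorted((attribution_error(p, user_score, model_score, drop_feature) for p in plans),
                      reverse=True)
    k = max(1, int(top_fraction * len(per_plan)))
    return sum(per_plan[:k]) / k
```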
As for prior feature knowledge, we assumed knowledge about what features were categorical (binary in our experiments) and what features were cardinal. Rather than assume this knowledge, we can get this from the user as well, and reduce the assumptions about the domain features. Alternatively, we could have just encoded all features as cardinal features, and let the neural network determine what features were categorical. While this is certainly possible, we think it better to get this knowledge from the user and encode the plan trace based on this knowledge. This makes the job of the neural network easier, and less likely to learn spurious correlations. In our current encoding of features in FeatureNN model and our experiments, we have not included a preference dependency that considers the number of steps between features. For example, I would like to have a donut within 3 plan steps after having a coffee. This omission was not intentional. One can easily encode such a sequential feature as a variable as well. The number of steps between the two (state) features becomes a cardinal variable to represents this sequential feature. Two well known paradigms for learning, representing and reasoning over preferences are CP-nets and Generalized additive independence (and their variants). Both of them were intended for preferences over outcomes. Each outcome can be comprised of many parts(decisions). One can think of each decision as choosing a value for a variable. The user would have preferences over the possible outcomes, or there maybe a utility (value) associated to each outcome. In CPnets BID1, the decisions or variables are represented in a graph, and there exists dependencies over variables. The preferences of a variable's values are affected by the value of the parent variables. The CP in CP-nets stands for Ceteris Paribus or "all else being equal". Here, the all else refers to the parents of the node, and when they are equal, then a particular set of preference orderings for the child variable's values hold. The knowledge of the dependence graph is either known apriori, or can be queried from the user. Once the hierarchy of dependence is known, the user is then queried about preferences at each node. For CPnets to be used in plan trace preferences, we would have to ask the user what the dependencies are. Then we would have to ask the user for their relative preferences over features, given fixed parent feature values. Note that the variables for plan preferences may also have to incorporate information about order. So there are significantly more variables (features) to consider in sequential data versus unordered data. We think querying for such knowledge is very demanding, and not natural for preferences over plan traces. We believe it is more natural for the user to specify the relevant conditional dependencies over features while annotating a plan trace. Additionally, we think it easier to give a preference value for the plan trace rather than relative preference orderings over the features in the domain. The features could include sequential dependencies or positiondependent features. We think it would be hard for the user to be able to describe sequential features and the relative ordering over them. Lastly, CP-nets do not compute utility values and some outcomes can be incomparable for a particular network of dependencies. In our problem, we would like a total order over the plans, to select the most preferred plan. 
So it helps to have a utility/preference value for every plan trace. On the other hand, GAI BID2 ) models do provide a single utility value for a set of features. As stated in their work, they provide an additive decomposition of a utility function (into sub-utility functions) in situations where single attributes are not additively independent, but (possibly overlapping) subsets of attributes are. Since the subsets of attributes for the different sub-utility functions can overlap, one must query either with only global queries or a combination of local queries (over the subsets of features) with global queries to calibrate (as was done in the GAI work) (Braziunas and Boutilier 2006)(p. 3). To learn a GAI with active learning from a single user, there are one of two methods. We could make assumptions about what subsets of variables are part of each sub-utility function, and what those functions are, or the user would need to know and give us this information. We think this is a very difficult task for the user. In our approach, we only do full trace queries and ask for annotations and preference ratings. We think it is more natural to ask the user for their overall rating of a plan, rather than how much each subset of features affected the rating. The neural network then handles the job of learning the preference function over the user-specified features (and any dependencies).The other formalism for specifying preferences are LTL rules (Huth and Ryan 2004) (p.175), which allow the user to specify sequential patterns. Expecting the user to be able to specify LTL rules might be unreasonable. The user would also have to give utilities or preference orderings over the specified LTL rules. One can interpret our interface as extracting a subset of simple LTL rules (through annotations) which are present in a plan trace. The user gives a rating to the trace, as well as what features (LTL rules) were good or bad. Extending the LTL analogy, our encoding of a plan trace can be seen as a vector of the relevant LTL rules. The index corresponding to an LTL rule is set to 1 if the rule is satisfied in the plan trace. However, recall that we also allow cardinal features (counts) in our encoding, and not just binary variables. Our interface and learning framework does not handle the entire gamut of possible LTL rules. We are working on extending the types of sequential preferences supported, while keeping the interface intuitive and expressive. In our approach, we use feature-directed Active Learning complemented with an intuitive and expressive user interface to learn the user's preference function efficiently. The traces obtained during active learning are rated and annotated by the user. These traces are encoded as a vector over the features that the user indicated as relevant to their preferences. The feature vectors are used to train a simple feedforward Neural Network to learn the preference function. We show that the SimpleNN neural network is more accurate and interpretable with fewer, more informative plan traces as compared to the LSTM based SequenceNN model. The latter was trained with a larger dataset of rated plan traces without active learning. Our current experiments use a user preference function over only a few variables. It is important to see how efficiently our framework learns a more complex preference function. Moreover, the current preference function is completely deterministic as it provides consistent annotation and rating to the plan trace. 
A human, however, might not behave in a consistent manner. We will test with a noisy or probabilistic preference model in future work. The user interface itself can be extended to include more complex annotations. For example, the user could also provide annotations for some features to be added to or dropped from the plan. This is especially useful for cardinal features, as the modified feature count represents what is ideal to the user. For example, if the user's preference doesn't increase after visiting more than 2 lakes, then this can be communicated by removing the extra lake features from a plan trace. We have mentioned categorical and cardinal features, but our framework is also intended to support real-valued features. We would need to adapt our active learning process to elicit feedback as to what the minimum, optimum, and maximum values of such features are. These would be the minimum essential points to sample for approximating the underlying utility function. Lastly, we would like to simplify the function by which we choose plan traces in successive rounds of active learning. We think that the similarity with traces from previous rounds is unnecessary, and might not appreciably reduce the cognitive load on the user. We think that just diversity and selecting traces that are much more preferred (closer to 1.0) or much less preferred (closer to 0.0) would be sufficient.
Learning preferences over plan traces using active learning.
1,738
scitldr
Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer. Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives. But people can communicate objectives to each other simply by describing or demonstrating them. How can we build learning algorithms that will allow us to tell machines what we want them to do? In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments. We propose language-conditioned reward learning (LC-RL), which grounds language commands as a reward function represented by a deep neural network. We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance. Figure 1: A task where an agent (green triangle) must execute the command "go to the fruit bowl." This is a simple example where the reward function is easier to specify than the policy. While reinforcement learning provides a powerful and flexible framework for describing and solving control tasks, it requires the practitioner to specify objectives in terms of reward functions. Engineering reward functions is often done by experienced practitioners and researchers, and even then can pose a significant challenge, such as when working with complex image-based observations. While researchers have investigated alternative means of specifying objectives, such as learning from demonstration BID1, or through binary preferences BID5, language is often a more natural and desirable way for humans to communicate goals. A common approach to building natural language interfaces for reinforcement learning agents is to build language-conditioned policies that directly map observations and language commands to a sequence of actions that perform the desired task. However, this requires the policy to solve two challenging problems together: understanding how to plan and solve tasks in the physical world, and understanding the language command itself. The trained policy must simultaneously interpret a command and plan through possibly complicated environment dynamics. The performance of the system then hinges entirely on its ability to generalize to new environments -if either the language interpretation or the physical control fail to generalize, the entire system will fail. We can recognize instead that the role of language in such a system is to communicate the goal, and rather than mapping language directly to policies, we propose to learn how to convert language-defined goals into reward functions. In this manner, the agent can learn how to plan and perform the task on its own via reinforcement learning, directly interacting with the environment, without relying on zero-shot transfer of policies. A simple example is shown in Figure 1, where an agent is tasked with navigating through a house. If an agent is commanded "go to the fruit bowl", a valid reward function could simply be a fruit bowl detector from first-person views of the agent. 
However, if we were to learn a mapping from language to actions, given the same goal description, the model would need to generate a different plan for each house. In this work, we investigate the feasibility of grounding free-form natural language commands as reward functions using inverse reinforcement learning (IRL). Learning language-conditioned rewards poses unique computational problems. IRL methods generally require solving a reinforcement learning problem as an inner-loop BID26, or rely on potentially unstable adversarial optimization procedures BID8 BID10. This is compounded by the fact that we wish to train our model across multiple tasks, meaning the IRL problem itself is an inner-loop. In order to isolate the language-learning problem from the difficulties in solving reinforcement learning and adversarial learning problems, we base our method on an exact MaxEnt IRL BID26 procedure, which requires full knowledge of environment dynamics to train a language-conditioned reward function represented by a deep neural network. While using exact IRL procedures may seem limiting, in many cases (such as indoor robotic navigation) full environment dynamics are available, and this formulation allows us to remove the difficulty of using RL from the training procedure. The crucial insight is that we can use dynamic programming methods during training to learn a reward function that maps from observations, but we do not need knowledge of dynamics to use the reward function, meaning during test time we can evaluate using a reinforcement learning agent without knowledge of the underlying environment dynamics. We evaluate our method on a dataset of realistic indoor house navigation and pick-and-place tasks using the SUNCG dataset, with natural language commands. We demonstrate that our approach generalizes not only to novel tasks, but also to entirely new scenes, while directly learning a language-conditioned policy leads to poor performance and fails to generalize. A popular class of approaches to language grounding in reinforcement learning is to directly train a policy that consumes language as an input. Several works adopt a behavioral cloning approach, where the model is trained using supervised learning with language-action sequences pairs BID0 BID15 BID23. A second approach is to forego demonstrations but instead reward an agent whenever the desired task is completed BID21 BID17 BID11 BID3 ). This approach requires reward functions (the task completion detector) to be hand-designed for the training tasks considered. Another related approach is semantic parsing, which has also been used to convert language into an executable form that corresponds to actions within an environment BID9 BID16 BID24. In a related task to instruction following, BID6 consider an embodied question-answering task where an agent must produce an answer to a question, where the relevant information lies within the environment. They adopt a hybrid approach, where they pretrain with supervised learning but also give the agent reward for completing intermediate tasks. Overall, our experiments show that policy-based approaches have worse generalization performance to new environments, because the policy must rely on zero-shot generalization at test time as we show in Section 6.3. 
While in this paper we argue for the performance benefits of a reward-based approach, a reason one may want to adopt a policy-based approach over a reward-based one is if one cannot run RL to train a new policy in a new environment, such as for time or safety reasons. A second approach to the language grounding problem is to learn a mapping from language to reward functions. There are several other works that apply IRL or IRL-like procedures to the problem of language grounding. Perhaps most closely related to our work is BID14, which also aims to learn a language-conditioned reward function via IRL. However, this method requires an extensively hand-designed, symbolic reward function class, whereas we use generic, differentiable function approximators that can handle arbitrary observations, including raw images. BID2; BID25 also learn language-conditioned reward functions, but do not perform IRL, meaning that the objective does not correspond to matching the expert's trajectory distribution. BID25 train a task-completion classifier, but do not evaluate their reward on control problems. The strategy they use is similar to directly regressing onto a ground-truth reward function, which we include a comparison to in Section 6 as an oracle baseline. BID2 adopt an adversarial approach similar to GAIL BID12, and use the learned discriminator as the reward function. While this produces a reward function, it does not provide any guarantees that the resulting reward function can be reoptimized in new environments to yield behavior similar to the expert. We believe our work is the first to apply language-conditioned inverse reinforcement learning to environments with image observations and deep neural networks, and we show that our rewards generalize to novel tasks and environments. We build off of the MaxEnt IRL model BID27, which considers an entropy-regularized Markov decision process (MDP), defined by the tuple (S, A, T, r, γ, ρ_0). S and A are the state and action spaces, respectively, and γ ∈ (0, 1) is the discount factor. T(s′|s, a) represents the transition distribution, or dynamics. We additionally consider partially observed environments, where each state is associated with an observation within an observation space, o ∈ O. The goal of "forward" reinforcement learning is to find the optimal policy π*. Let r(τ) = Σ_{t=0}^{T} γ^t r(s_t, a_t) denote the returns of a trajectory, where τ denotes a sequence of states and actions, so that the entropy-regularized objective is π* = arg max_π E_{τ∼π}[Σ_{t=0}^{T} γ^t (r(s_t, a_t) + H(π(·|s_t)))]. Inverse reinforcement learning (IRL) seeks to infer the reward function r(s, a) given a set of expert demonstrations D = {τ_1, ..., τ_N}. In IRL, we assume the demonstrations are drawn from an optimal policy π*(a|s). We can interpret the IRL problem as solving the maximum likelihood problem: max_θ E_{τ∼D}[log p_θ(τ)] (Eqn. 1). In the MaxEnt IRL framework, optimal trajectories are observed with probabilities proportional to the exponentiated returns, meaning p(τ) ∝ exp{r(τ)} BID27. Thus, learning a reward function r_θ(τ) is equivalent to fitting an energy-based model p_θ(τ) ∝ exp{r_θ(τ)} to the maximum likelihood objective in Eqn. 1. The gradient used to update the reward function is BID26: ∂L/∂θ = E_{ρ_D(s,a)}[∂r_θ(s,a)/∂θ] − E_{ρ*_θ(s,a)}[∂r_θ(s,a)/∂θ] (Eqn. 2), where ρ_D(s, a) represents the state-action marginal of the demonstrations, and ρ*_θ(s, a) represents the state-action marginal of the optimal policy under reward r_θ(s, a). A unique challenge of the language-conditioned IRL problem, compared to standard IRL, is that the goal is to learn a reward function that generalizes across multiple tasks.
While standard IRL methods are typically trained and evaluated on the same task, we want our language-conditioned reward function to produce correct behavior when presented with new tasks. Several previous works consider a multi-task scenario, such as in a Bayesian or meta-learning setting BID13 BID7 BID4. We adopt a similar approach adapted for the language-IRL problem, and formalize the notion of a task, denoted by ξ, as an MDP, where individual tasks may not share the same state spaces, dynamics, or reward functions. Each task is associated with a context c_ξ which is a unique identifier (i.e., an indicator vector) for that task. Thus, we wish to optimize the following multi-task objective, where τ_ξ denotes expert demonstrations for that task: max_θ E_ξ[E_{τ_ξ}[log p_θ(τ_ξ | c_ξ)]] (Eqn. 3). Each iteration of Algorithm 1 then proceeds as follows: sample a task ξ, demonstrations d_ξ, and language L_ξ; compute the optimal q*(s, a) using Q-iteration and ρ*(s, a) using the forward algorithm; and update the reward r_θ with the gradient (ρ_D(s, a) − ρ*(s, a)) ∂r_θ/∂θ. In order to optimize this objective, we first require that all tasks share the same observation space and action space, and the reward to be a function of the observation, rather than of the state. For example, in our experiments, all observations are in the form of 32x24 images taken from simulated houses, but the state space for each house is allowed to differ (i.e., the houses have different layouts). This means the same reward can be used across all MDPs even though the state spaces differ. Second, we share the reward function across all tasks, but substitute a language command L_ξ as a proxy for the context c_ξ, resulting in a model p_θ(τ_ξ, L_ξ) that takes as input language, states, and actions. For computational efficiency we run stochastic gradient descent on the objective in Eqn. 3 by sampling over the set of environments on each iteration. We learn language-conditioned reward functions using maximum causal entropy IRL, adapted for a multi-task setting and rewards represented by language-conditioned convolutional neural networks. While during training we use dynamic programming methods that require dynamics knowledge, we do not need knowledge of dynamics to evaluate the reward function. Thus, at test time we can use standard model-free RL algorithms to learn the task from the inferred reward function in new environments. Our algorithm is briefly summarized in Algorithm 1. In order to take gradient steps on the objective of Eqn. 3, we update our reward function in terms of the Maximum Entropy IRL gradient BID26 according to Eqn. 2. The stochastic gradient update (for a single task ξ) adapted to our case is: ∂L_ξ/∂θ = Σ_{s,a} (ρ_{D_ξ}(s, a) − ρ*_θ(s, a)) ∂r_θ(o(s), a, L_ξ)/∂θ (Eqn. 4), where o(s) denotes the observation for state s. Note that during training we need access to the ground-truth states s. While the update depends on the underlying state, the reward itself is only a function of the observation, the action, and the language. This enables us to evaluate the reward without knowing the underlying state space and dynamics of the environment. While requiring dynamics knowledge during training may seem limiting, in practice many environments we may wish to train a robot in can easily be mapped. This training strategy is analogous to training a robot in only known environments such as a laboratory, but the resulting reward can be used in unknown environments.
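A minimal sketch of one such stochastic step is given below, under the tabular training-time assumptions described above (known dynamics, fixed horizon). The reward_net interface, which returns a differentiable [states × actions] table of rewards given a task's observations and language command, and the finite-horizon handling are illustrative assumptions rather than the exact implementation.

```python
import torch

def soft_q_iteration(r, T, gamma=0.99, horizon=30):
    """Exact soft (MaxEnt) value iteration. r: [S, A] rewards, T: [S, A, S] dynamics."""
    S, A = r.shape
    v = torch.zeros(S)
    for _ in range(horizon):
        q = r + gamma * T.reshape(S * A, S).matmul(v).reshape(S, A)
        v = torch.logsumexp(q, dim=1)                  # soft backup
    return torch.softmax(q, dim=1)                     # pi(a|s) proportional to exp(Q)

def occupancy(policy, T, rho0, horizon=30):
    """Forward algorithm: expected state-action visitation frequencies rho*(s, a)."""
    rho_sa = torch.zeros_like(policy)
    p_s = rho0.clone()
    for _ in range(horizon):
        rho_sa += p_s.unsqueeze(1) * policy
        p_s = torch.einsum('s,sa,sat->t', p_s, policy, T)
    return rho_sa / horizon

def irl_step(reward_net, optimizer, task):
    """One iteration of the training loop for a single sampled task."""
    r = reward_net(task.observations, task.language)   # differentiable [S, A] reward table
    with torch.no_grad():
        policy = soft_q_iteration(r, task.T)
        rho_star = occupancy(policy, task.T, task.rho0)
    # MaxEnt IRL gradient (Eqns. 2 and 4): expert visitations minus learned-policy visitations.
    # task.demo_rho is assumed to be normalized the same way as rho_star.
    loss = -((task.demo_rho - rho_star) * r).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```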
In order to compute ρ*_θ(s, a), one normally has to first compute the optimal policy with respect to the reward r_θ(o, a) using reinforcement learning, and then compute the occupancy measure using the forward algorithm for Markov chains to compute the state visitation distributions at each time-step. Because this embeds a difficult RL optimization problem within nested inner loops, this quickly becomes computationally intractable. Thus, we train in tabular environments with known dynamics, where we can compute optimal policies exactly using Q-iteration. However, we emphasize that this is only a training-time restriction, and knowledge of dynamics is not required to evaluate the rewards. Our network architecture is shown in FIG0. The network has two main modalities of input: a variable-length language input represented by a sequence of one-hot vectors (one vector for each tokenized word), and a panoramic image observation. The language embedding is formed by processing the language input sequence through an LSTM network, and the final time-step of the topmost layer is used as a fixed-dimensional embedding e_language of the input command. The agent receives image observations in the form of four 32x24 image observations, one for each cardinal direction view (N, S, E, W). The convolutional neural network (CNN) consists of a sequence of convolutional and max pool layers, with the final operation being a channel-wise global pooling operation that produces an image embedding of the same length as the language embedding. Each image is passed through an identical CNN with shared weights, and the outputs are summed together to form the image embedding. That is, e_image = CNN(img_N) + CNN(img_S) + CNN(img_E) + CNN(img_W). Finally, these two embeddings are element-wise multiplied and passed through a fully-connected network (FC) to produce a reward output. Letting ⊙ denote elementwise multiplication, we have r = FC(e_image ⊙ e_language). We found that the max global-pooling architecture in the CNN was able to select out objects from a scene and allow the language embedding to modulate which features to attend to. We selected our architecture via a hyper-parameter search, and found that the choice of using an element-wise multiplication versus a concatenation for combining embeddings had no appreciable performance difference, but a global pooling architecture performed significantly better than using a fully connected layer at the end of the CNN. We evaluate our method within a collection of simulated indoor house environments, built on top of the SUNCG BID22 dataset. The SUNCG dataset provides a large repository of complex and realistic 3D environments which we find very suitable for our goals. An example task from our environment is shown in FIG1. An example of a successful execution of a task is shown in FIG3. We consider two typical kinds of tasks that an indoor robot may wish to perform: • Navigation (NAV): In the navigation task, the agent is given a location which corresponds to a room or object, and the agent must navigate through the house to reach the target location. For example, in FIG1, the target could be "cup" or "laptop" or "living room". We only use the semantic labels as input to our model. • Pick-and-place (PICK): In the pick-and-place task, the agent must move an object from one location to another. For example, in Fig. 1 the task could be to move the cup from the sink to the kitchen table.
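A PyTorch sketch of this reward network is shown below. The design choices reported above (a shared CNN across the four views, channel-wise global max pooling to an embedding the same size as the language embedding, elementwise product of the two embeddings, and a small FC head) are followed; the padding, ReLU nonlinearities, and exact pooling layout are assumptions.

```python
import torch
import torch.nn as nn

class LanguageConditionedReward(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, n_channels=61):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.cnn = nn.Sequential(                       # shared across the four views
            nn.Conv2d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, embed_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool2d(1),                    # channel-wise global max pooling
        )
        self.head = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, command_tokens, views):
        # command_tokens: (batch, T) word indices; views: (batch, 4, C, H, W) panorama.
        out, _ = self.lstm(self.word_embed(command_tokens))
        e_lang = out[:, -1, :]                          # final time-step of the top layer
        e_img = sum(self.cnn(views[:, i]).flatten(1) for i in range(4))  # summed view embeddings
        return self.head(e_lang * e_img).squeeze(-1)    # elementwise product, then FC head
```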
Each environment corresponds to one 3D scene, which is discretized into a grid to form a tabular environment where the grid coordinates plus agent orientation (N, S, E, W) correspond to the state of the agent. The agent receives observations with two components: one is a free-form language command, and one is a first-person panoramic image of the environment. Because the agent can move objects without directly looking at them, the panoramic view gives the agent a full view of its surroundings. The panoramic image is formed from 4 semantic image observations, one for each orientation of the agent. Each semantic image observation is 32x32 pixels and contains 61 channels, one per semantic image class. Each agent is equipped with 4 actions: step forward one grid tile, turn left or right, or interact with an object. We generate language commands based on a preset grammar, and using names of objects and locations associated with the task. These are of the form "go to X" for NAV tasks, or "move X to Y" for PICK tasks, where X and Y stand for names of locations and objects within the environment. We explicitly do not use step-by-step instruction language such as "turn left, walk down the hallway, go through the door", as these commands remove the planning aspect of the problem and tell the agent directly which actions to take in order to solve the problem. The interact action only has meaning within the PICK task. Executing this action will either pick up an object if the agent is within a 1 meter of an object, or drop an object if the agent is currently holding an object. To limit the size of the state space, within a single task, there is only one object an agent may interact with and two locations the object can be in. This setup only increases the size of the state-space by a factor of 3. However, different objects may be placed in different locations across environments, meaning the model still must learn to detect the object in the correct location rather than memorizing the specific object and location associated with a single task. In order to evaluate how well different methods generalize, we split our dataset of tasks into three segments: a training set, and two test sets -"task" and "house". The "task" test set contains tasks within the same houses as training, but requires the agent to interact with novel combinations of objects and locations. The "house" test set contains tasks on entirely new houses that were not in the training set. The purpose of this split is to investigate varying degrees of generalization: the "task" test set requires the model to execute novel language commands, but using landmarks and objects which were seen during training. The "house" test set adds another layer of difficulty, requiring the model to detect familiar objects situated in entirely new scenes. In total, our dataset contains 1413 tasks (716 PICK, 697 NAV). Across all tasks, there are 14 objects, and 76 different house layouts. There are 1004 tasks (71%) in the training set, 236 (17%) in the "task" test set, and 173 (12%) in the "house" test set. We evaluate two methods for reward learning. LC-RL refers to the language-conditioned IRL method outlined in Section 5, which takes as input demonstration and language pairs and learns a shared reward function across all tasks. In particular, we use 10 demonstrations per task, sampled from the computed optimal policy. 
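For concreteness, a minimal sketch of the state and observation interface described above is given below. The class and function names (including the renderer) are illustrative; only the state factorization, the four actions, and the 61-channel panoramic semantic observation follow the text, and the 32x32 per-view image size is taken from the description above.

```python
import numpy as np
from dataclasses import dataclass

ACTIONS = ("step_forward", "turn_left", "turn_right", "interact")
ORIENTATIONS = ("N", "E", "S", "W")

@dataclass(frozen=True)
class State:
    x: int                 # grid coordinates from discretizing the 3D scene
    y: int
    orientation: str       # one of ORIENTATIONS
    object_slot: int       # 0 = at first location, 1 = held, 2 = at second location (x3 state blow-up)

def panoramic_observation(render_view, state):
    """Stack one 61-channel, 32x32 semantic image per cardinal direction.
    render_view is a caller-supplied renderer: (x, y, direction, object_slot) -> (61, 32, 32)."""
    obs = np.stack([render_view(state.x, state.y, d, state.object_slot) for d in ORIENTATIONS])
    assert obs.shape == (4, 61, 32, 32)
    return obs
```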
To provide an upper bound on performance, we can also regress directly onto the ground-truth rewards, a method we label as "Reward Regression". While this is not possible in a practical scenario, this evaluation serves to show oracle performance on our task. Note that our method does not require access to ground-truth rewards, and only uses demonstrations. Success rates for each reward-learning method are evaluated using two policy learning procedures. Q-iteration (QI) computes the optimal policy exactly using dynamic programming, which we report in TAB1. We also experiment with reoptimizing the learned reward using DQN BID18, a sample-based RL method that does not require ground-truth knowledge of the environment dynamics. We use the position, orientation, and whether an object is held as the observation (this is identical to the state representation). This experiment represents the testing use-case where we can evaluate the reward at test time in novel, unmapped environments, despite the fact that at training time we require dynamics knowledge. However, because the probability that the random policy receives rewards on our task is tiny, we found that epsilon-greedy exploration was not enough. Thus, we also report results using a reward shaping term with a state-based potential equal to the optimal value function BID19. We note that this shaping term does require dynamics knowledge to compute, but we include it to highlight the difficulty of the RL problem even if reward learning is done properly. We also compare against two baselines derived from GAIL BID12, using the learned discriminator as the "reward" function. We first compare to AGILE BID2, which modifies GAIL to use a goal-based discriminator and false-negative filtering, using DQN as a policy optimizer and ρ = 0.25. We found it difficult to learn rewards using a reinforcement-learning-based policy optimizer, and the model was only able to solve the simpler NAV environments. This experiment emphasizes the gap between using a sampling-based solver and an exact solver during the training of reward-based methods. To create a fairer comparison, we also compare against GAIL using a dynamic programming solver (labeled GAIL-Exact), and we see that the performance is comparable to LC-RL on training environments, but significantly worse on test environments. These results are in line with our intuitions: GAIL and IRL are equivalent in training scenarios BID12, but the discriminator of GAIL does not correspond to the true reward function, and thus performs worse when evaluated in novel environments. In order to compare against a policy-learning approach, we compare against an optimal behavioral cloning baseline. We train the optimal cloning baseline by computing the exact optimal policy using Q-iteration, and performing supervised learning to regress directly onto the optimal action probabilities. To make a fair comparison, we keep the policy architecture identical to the reward architecture, except that we add two additional inputs: an orientation indicator and an indicator of whether the object (during PICK tasks) is held by the agent or not. Each indicator is transformed by an embedding lookup, and all embeddings are element-wise multiplied along with the language and image embeddings in the original architecture. Our main experimental results on reward learning are reported in TAB1, and experiments on reoptimizing the learned reward function are reported in TAB2. Qualitative results with diagrams of learned reward functions can be found in Appendix B.
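The shaping term mentioned above follows the standard potential-based scheme, sketched below with the potential set to the optimal value function as described; the function names are illustrative.

```python
def shaped_reward(reward, v_opt, gamma=0.99):
    """Potential-based shaping: r'(s, a, s') = r(s, a) + gamma * V*(s') - V*(s).
    This preserves the optimal policy while giving DQN a denser learning signal."""
    def r_shaped(s, a, s_next):
        return reward(s, a) + gamma * v_opt(s_next) - v_opt(s)
    return r_shaped
```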
Additional supplementary material can be viewed at https://sites.google.com/view/language-irl, and experiment hyperparameters are detailed in Appendix A. We found that both LC-RL and Reward Regression were able to learn reward functions which generalize to both novel tasks and novel house layouts, and both achieve a significant performance improvement over the policy-based approach. As expected, we found that Reward Regression has superior performance when compared to LC-RL, due to the fact that it uses oracle ground-truth supervision. We include examples of learned reward functions for both methods in Appendix B. We found that a common error made by the learned rewards, aside from simply misidentifying objects and locations, was rewarding the agent for reaching the goal position without placing the object down on PICK tasks. This is reflected in the results, as the performance on PICK tasks is much lower than that on NAV tasks. Additionally, there is some ambiguity in the language commands, as the same environment may contain multiple copies of a single object or location, and we do not consider the case when agents can ask for additional clarification (for example, there are 2 beds in FIG1). We observed especially poor performance from the cloning baseline on both training and testing environments, even though it was trained by directly regressing onto the optimal policy. We suspect that it is significantly more difficult for the cloning baseline to learn across multiple environments. Our language is high-level and only consists of descriptions of the goal task (such as "move the cup to the bathroom") rather than the step-by-step instructions used in other work such as BID15 that allow the policy to follow a sequence of instructions. This makes the task much more difficult for a policy-learning agent, as it needs to learn a mapping from language to house layouts instead of blindly following the actions specified in the language command. Regarding re-optimization of the learned rewards, we found that DQN with epsilon-greedy exploration alone achieved poor performance compared to the exact solver and comparable performance to the cloning baseline (however, note that our cloning baseline was regressing onto the exact optimal actions). Adding a shaping term based on the value function improves results, but computing this shaping term requires ground-truth knowledge of the environment dynamics. We also note that it appears that rewards learned through regression are easier to re-optimize than rewards learned through IRL. One explanation for this is that IRL rewards appear more "noisy" (for example, see the reward plots in Appendix B), because small variations in the reward may not affect the trajectories taken by the optimal policy if a large reward occurs at the goal position. However, while RL is training it may never see the large reward and is thus heavily influenced by small, spurious variations in the reward. Nevertheless, with proper exploration methods, we believe that language-conditioned reward learning provides a performant and conceptually simple method for grounding language as concrete tasks an agent can perform within an interactive environment. In this paper, we introduced LC-RL, an algorithm for scalable training of language-conditioned reward functions represented by neural networks. Our method restricts training to tractable domains with known dynamics, but learns a reward function which can be used with standard RL methods in environments with unknown dynamics.
We demonstrate that the reward-learning approach to instruction following outperforms the policy-learning approach when evaluated in test environments, because reward learning enables an agent to learn and interact within the test environment rather than relying on zero-shot policy transfer. For our environment, we give the agent a time limit of 30 time-steps to complete a task. For the purpose of reward regression and generating demonstrations, the environment gives a reward of 10 when the agent successfully completes the task. We sample demonstrations from the optimal policy using this ground-truth reward. Our MDP solvers use a discount of γ = 0.99. For our model, we used 10 demonstrations per environment to train IRL, and optimized with Adam using a learning rate of 5 × 10^-4. For our convolutional neural network, we used a 5x5 convolution with 16 filters, followed by a 3x3 convolution with 32 filters. The size of each embedding was 32. The final fully connected layers had sizes of 32 and 1 (for the final output). We did not find significant performance differences when increasing the number of filters or embedding sizes. We selected our architecture through a hyper-parameter sweep, with train and test accuracies presented below (averaged over 3 seeds each). The main architectures we swept through were whether to produce the image embedding via a global pooling layer (labeled Pooling) versus a single fully connected layer (labeled FC) versus FiLM BID20, and whether to combine the language and image embeddings using a point-wise multiplication or a pooling operation (labeled Mult) versus concatenation (labeled Concat). Below is an example of a learned reward (and the computed value function) from our IRL model. The task presented here is to bring the fruit bowl (green arrow) to the bathroom (red arrow). In the top row, we plot the reward function, and in the bottom row we plot the resulting value function. The left column shows the rewards/values before the object (fruit bowl) is acquired, and the right column shows the rewards/values after. Note that before the object is acquired, the value directs the agent to the fruit bowl, and once the object is found, the value directs the agent to the bathroom. In these figures, a blue shaded square means a high value and red means a low one. The green-outlined tiles correspond to all locations within a 1-meter radius of the fruit bowl, i.e., all tiles an agent can pick up the bowl from. The red-outlined tile likewise represents all tiles within a 1-meter radius of the drop-off location in the bathroom. The blue-outlined square represents the starting location of the agent. Next, we show rewards learned by 3 different methods: inverse reinforcement learning (IRL), GAIL, and reward regression. Again, low rewards are denoted by red squares and high rewards are denoted by blue squares. For each task, we also include a birds-eye view of the task, where the object is highlighted in green and the agent is denoted by a green triangle. In general, we find that rewards learned by IRL and GAIL tend to be noisy and contain small artifacts. This is not unexpected, as both of these methods are sample-based and observe demonstrations instead of ground-truth rewards as supervision. We believe that such artifacts are detrimental when using RL to reoptimize the learned reward, as without adequate exploration RL cannot find the large reward at the true goal state, and instead ends up finding local minima.
Figure 6: Learned rewards and a corresponding birds-eye view rollout for the task "go to fruit bowl". Figure 7: Learned rewards and a corresponding birds-eye view rollout for the task "move pan to living room". The running time of dynamic programming algorithms for solving MDPs (such as Q-iteration) scales with the size of the state space, and in our environments we found reward evaluation becoming a major bottleneck in runtime. One major optimization we make to our algorithm is to cache computation on repeated observations. Computation-wise, the main bottleneck in Algorithm 1 is evaluating and back-propagating the reward function at all states and actions, rather than Q-iteration itself (even though the latter carries a cubic run-time dependency on the size of the state space). However, in many environments evaluating the reward at every state is extremely wasteful. For example, in the house depicted in FIG1, information about where the cup is located must be included in the state. However, if our observations are images of what the robot sees, whether the cup is in the kitchen or in the bathroom has no impact on the images inside the living room. This means that we should only need to evaluate our reward on each living-room image once, rather than once per location of the cup. A major factor in speeding up our computation was to cache such repeated computation. In practice we found this to be significant, resulting in 10-100x speedups on reward computation depending on the structure of the environment.
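A minimal sketch of this caching optimization, assuming the learned reward depends only on the rendered observation and that observations are available as a single array; the function and variable names are hypothetical.

```python
import numpy as np

def cached_reward_evaluation(reward_fn, observations):
    """Evaluate a learned reward over all states while calling the (expensive)
    reward network only once per unique observation. `observations` is an
    (n_states, ...) array; states that render to the same image (e.g. living-room
    views that do not depend on where the cup is) share a single evaluation."""
    flat = observations.reshape(len(observations), -1)
    unique_obs, first_idx, inverse = np.unique(
        flat, axis=0, return_index=True, return_inverse=True)
    # One forward pass over the unique observations only.
    unique_rewards = reward_fn(observations[first_idx])
    # Scatter the cached values back to every state.
    return np.asarray(unique_rewards)[inverse]

# Toy usage: 6 states but only 3 distinct observations -> 3 reward evaluations.
obs = np.array([[0., 0.], [1., 0.], [0., 0.], [1., 1.], [1., 0.], [0., 0.]])
rewards = cached_reward_evaluation(lambda batch: batch.sum(axis=1), obs)
print(rewards)   # [0. 1. 0. 2. 1. 0.]
```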
We ground language commands in a high-dimensional visual environment by learning language-conditioned rewards using inverse reinforcement learning.
1,739
scitldr
Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons. For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned. A network with d inputs per neuron is found to be equivalent to an additive model of order d, whereas with a degree distribution the network combines additive terms of different orders. We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable. Thus, even simple brain architectures can be powerful function approximators. Finally, we hope that this work helps popularize kernel theories of networks among computational neuroscientists. Kernel function spaces are popular among machine learning researchers as a potentially tractable framework for understanding artificial neural networks trained via gradient descent [e.g. 1, 2, 3, 4, 5, 6]. Artificial neural networks are an area of intense interest due to their often surprising empirical performance on a number of challenging problems and our still incomplete theoretical understanding. Yet computational neuroscientists have not widely applied these new theoretical tools to describe the ability of biological networks to perform function approximation. The idea of using fixed random weights in a neural network is primordial, and was a part of Rosenblatt's perceptron model of the retina. Random features have then resurfaced under many guises: random centers in radial basis function networks, functional link networks, Gaussian processes (GPs), and so-called extreme learning machines; see for a review. Random feature networks, where the neurons are initialized with random weights and only the readout layer is trained, were proposed by Rahimi and Recht in order to improve the performance of kernel methods and can perform well for many problems. In parallel to these developments in machine learning, computational neuroscientists have also studied the properties of random networks with a goal towards understanding neurons in real brains. To a first approximation, many neuronal circuits seem to be randomly organized. However, the recent theory of random features appears to be mostly unknown to the greater computational neuroscience community. Here, we study random feature networks with sparse connectivity: the hidden neurons each receive input from a random, sparse subset of input neurons. This is inspired by the observation that the connectivity in a variety of predominantly feedforward brain networks is approximately random and sparse. These brain areas include the cerebellar cortex, invertebrate mushroom body, and dentate gyrus of the hippocampus. All of these areas perform pattern separation and associative learning. The cerebellum is important for motor control, while the mushroom body and dentate gyrus are general learning and memory areas for invertebrates and vertebrates, respectively, and may have evolved from a similar structure in the ancient bilaterian ancestor.
Figure caption: The function shown is the sparse random feature approximation to an additive sum of sines, learned from poorly distributed samples (red crosses). Additivity offers structure which may be leveraged for fast and efficient learning.
Recent work has argued that the sparsity observed in these areas may be optimized to balance the dimensionality of representation with wiring cost. Sparse connectivity has been used to compress artificial networks and speed up computation, whereas convolutions are a kind of structured sparsity. We show that sparse random features approximate additive kernels with arbitrary orders of interaction. The in-degree of the hidden neurons d sets the order of interaction. When the degrees of the neurons are drawn from a distribution, the ing kernel contains a weighted mixture of interactions. These sparse features offer advantages of generalization in high-dimensions, stability under perturbations of their input, and computational and biological efficiency. Now we will introduce the mathematical setting and review how random features give rise to kernels. The simplest artificial neural network contains a single hidden layer, of size m, receiving input from a layer of size l (Figure 1). The activity in the hidden layer is given by, for i ∈ [m], Here each φ i is a feature in the hidden layer, h is the nonlinearity, W = (w 1, w 2, . . ., w m) ∈ R l×m are the input to mixed weights, and b ∈ R m are their biases. We can write this in vector notation as Random features networks draw their input-hidden layer weights at random. Let the weights w i and biases b i in the feature expansion be sampled i.i.d. from a distribution µ on R l+1. Under mild assumptions, the inner product of the feature vectors for two inputs converges to its expectation We identify the limit with a reproducing kernel k(x, x) induced by the random features, since the limiting function is an inner product and thus always positive semidefinite. The kernel defines an associated reproducing kernel Hilbert space (RKHS) of functions. For a finite network of width m, the inner product 1 m φ(x) φ(x) is a randomized approximation to the kernel k(x, x). We now turn to our main : the general form of the random feature kernels with sparse, independent weights. For simplicity, we start with a regular model and then generalize the to networks with varying in-degree. Two kernels that can be computed in closed form are highlighted. Fix an in-degree d, where 1 ≤ d ≤ l, and let µ|d be a distribution on R d which induce, together with some nonlinearity h, the kernel denote this set of neighbors. Second, sample w ji ∼ µ|d if j ∈ N i and otherwise set w ji = 0. We find that the ing kernel Here x N denotes the length d vector of x restricted to the neighborhood N, with the other l − d entries in x ignored. More generally, the in-degrees may be chosen independently according to a degree distribution, so that d becomes a random variable. Let D(d) be the probability mass function of the hidden node in-degrees. Conditional on node i having degree d i, the in-neighborhood N i is chosen uniformly at random among the l di possible sets. Then the induced kernel becomes For example, if every layer-two node chooses its inputs independently with probability p, the is the probability mass function of the binomial distribution Bin(l, p). The regular model is a special case of with Extending the proof techniques in yields: C ) many features (the proof is contained in Appendix C). Two simple examples With Gaussian weights and regular d = 1, we find that (see Appendix B) 4 Advantages of sparse connectivity The regular degree kernel is a sum of kernels that only depend on combinations of d inputs, making it an additive kernel of order d. 
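The construction just described lends itself to a short numerical sketch: each hidden unit reads a random d-subset of the inputs, and the empirical Gram matrix (1/m) Φ Φᵀ approximates the order-d additive kernel. The ReLU nonlinearity and the 1/√d weight scale below are illustrative choices, not the paper's prescription.

```python
import numpy as np

def sparse_random_features(X, m=2000, d=2, rng=None):
    """Sparse random-feature map: each of the m hidden units reads a random
    d-subset of the l inputs, with Gaussian weights and a ReLU nonlinearity."""
    rng = np.random.default_rng(rng)
    n, l = X.shape
    Phi = np.empty((n, m))
    for i in range(m):
        nbhd = rng.choice(l, size=d, replace=False)      # in-neighborhood N_i
        w = rng.normal(scale=1.0 / np.sqrt(d), size=d)   # nonzero weights on N_i only
        b = rng.normal()
        Phi[:, i] = np.maximum(X[:, nbhd] @ w - b, 0.0)
    return Phi

# The empirical kernel (1/m) * Phi @ Phi.T approximates the order-d additive kernel.
X = np.random.default_rng(0).normal(size=(5, 10))
Phi = sparse_random_features(X, m=5000, d=2, rng=1)
K_hat = Phi @ Phi.T / Phi.shape[1]
```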
The general expression for the degree distribution kernel illustrates that sparsity leads to a mixture of additive kernels of different orders. These have been referred to as additive GPs, but these kind of models have a long history as generalized additive models [e.g. 28, 32]. For the regular degree model with d = 1, the sum in is over neighborhoods of size one, simply the individual indices of the input space. Thus, for any two input neighborhoods N 1 and N 2, we have |N 1 ∩ N 2 | = ∅, and the RKHS corresponding to k reg 1 (x, x) is the direct sum of the subspaces H = H 1 ⊕... ⊕ H l. Thus regular d = 1 defines a first-order additive model, where, all pairwise terms. These interactions are defined by the structure of the terms k d (x N, x N). Finally, the degree distribution D(d) determines how much weight to place on different degrees of interaction. Generalization from fewer examples in high dimensions Stone proved that first-order additive models do not suffer from the curse of dimensionality, as the excess risk does not depend on the dimension l. Kandasamy and Yu extended this to dth-order additive models and found a bound on the excess risk of O(l 2d n −2s for kernels with polynomial or exponential eigenvalue decay rates (n is the number of samples and the constants s and C parametrize rates). Without additivity, these weaken to O(n −2s 2s+l) and O(C l /n), much worse when l d. Similarity to dropout Dropout regularization in deep networks has been analyzed in a kernel/GP framework, leading to with D = Bin(l, p) for a particular base kernel. Dropout may thus improve generalization by enforcing approximate additivity, for the reasons above. Equations and are similar: They differ only by the presence of an 0 -"norm" versus an 1 -norm and the presence of the sign function. Both norms are stable to outlying coordinates in an input x. This property also holds for different nonlinearities and 1 < d l, since every feature φ i (x) only depends on d inputs, and therefore only a minority of the m features will be affected by the few outliers. 1 Sufficiently sparse features will then be less affected by sparse noise than a fully-connected network, offering denoising advantages [e.g . 20]. A regressor f (x) = α φ(x) built from these features is stable so long as α p is small, since |f (x) − f (x)| ≤ α p φ(x) − φ(x) q for any Hölder conjugates 1/p + 1/q = 1. Thus if x = x + e where e contains a small number of nonzero entries, then f (x) ≈ f (x) since φ(x) ≈ φ(x). Stability also may guarantee the robustness of these networks to sparse adversarial attacks, although exactly the conditions under which these approximations hold (p = ∞, q = 1 is an interesting case) we leave for future work. Computational Sparse random features give potentially huge improvements in scaling. Direct implementations of additive models incur a large cost for d > 1, since requires a sum over time to compute the Gram matrix of n examples and O(nl d) operations to evaluate f (x). In our case, since the random features method is primal, we need to perform O(nmd) computations to evaluate the feature matrix and the cost of evaluating f (x) remains O(md). 3 Sparse matrix-vector multiplication makes evaluation faster than the O(ml) time it takes when connectivity is dense. For ridge regression, we have the usual advantages that computing an estimator takes O(nm 2 + nmd) time and O(nm + md) memory, rather than O(n 3) time and O(n 2) memory for a naïve kernel ridge method. 
Biological In a small animal such as a flying insect, space is extremely limited. Sparsity offers a huge advantage in terms of wiring cost. Additive approximation also means that such animals can learn much more quickly, as seen in the mushroom body. While the previous computational points do not apply as well to biology, since real neurons operate in parallel, fewer operations translate into lower metabolic cost for the animal. Inspired by their ubiquity in biology, we have studied sparse random networks of neurons using the theory of random features, finding the advantages of additivity, stability, and scalability. This theory shows that sparse networks such as those found in the mushroom body, cerebellum, and hippocampus can be powerful function approximators. Kernel theories of neural circuits may be more broadly applicable in the field of computational neuroscience. Expanding the theory of dimensionality in neuroscience Learning is easier in additive function spaces because they are low-dimensional, a possible explanation for few-shot learning in biological systems. Our theory is complementary to existing theories of dimensionality in neural systems, which defined dimensionality using a skewness measure of covariance eigenvalues. Kernel theory extends this concept, measuring dimensionality similarly in the space of nonlinear functions spanned by the kernel. Limitations We model biological neurons as simple scalar functions, completely ignoring time and neuromodulatory context. It seems possible that a kernel theory could be developed for timeand context-dependent features. Our networks suppose i.i.d. weights, but weights that follow Dale's law should also be considered. We have not studied the sparsity of activity, postulated to be relevant in cerebellum. It remains to be demonstrated how the theory can make concrete, testable predictions, e.g. whether this theory may explain identity versus concentration encoding of odors or the discrimination/generalization tradeoff under experimental conditions. Appendices: Additive function approximation in the brain As said in the main text, Kandasamy and Yu created a theory of the generalization properties of higher-order additive models. They supplemented this with an empirical study of a number of datasets using their Shrunk Additive Least Squares Approximation (SALSA) implementation of the additive kernel ridge regression (KRR). Their data and code were obtained from https: //github.com/kirthevasank/salsa. We compared the performance of SALSA to the sparse random feature approximation of the same kernel. We employ random sparse Fourier features with Gaussian weights N (0, σ 2 I) with σ = 0.05 · √ dn 1/5 in order to match the Gaussian radial basis function used by Kandasamy and Yu. We use m = 300l features for every problem, with regular degree d selected equal to the one chosen by SALSA. The regressor on the features is cross-validated ridge regression (RidgeCV from scikit-learn) with ridge penalty selected from 5 logarithmically spaced points between 10 −4 · n and 10 2 · n. In Figure 2, we compare the performance of sparse random features to SALSA. Generally, the training and testing errors of the sparse model are slightly higher than for the kernel method, except for the forestfires dataset. We studied the speed of learning for a test function as well. 
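A hedged sketch of the regression setup just described (m = 300·l sparse random Fourier features, ridge penalty cross-validated over 5 log-spaced values between 10^-4·n and 10^2·n, and σ = 0.05·√d·n^{1/5}). The synthetic data and the exact bandwidth convention (whether σ scales the weights or their inverse) are assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def sparse_fourier_features(X, m, d, sigma, rng):
    """Sparse random Fourier features: cos(w . x_N - b) on random d-subsets N."""
    n, l = X.shape
    Phi = np.empty((n, m))
    for i in range(m):
        nbhd = rng.choice(l, size=d, replace=False)
        w = rng.normal(scale=1.0 / sigma, size=d)   # bandwidth convention is an assumption
        b = rng.uniform(0, 2 * np.pi)
        Phi[:, i] = np.cos(X[:, nbhd] @ w - b)
    return Phi

# Illustrative reproduction of the cross-validated ridge setup on synthetic data.
rng = np.random.default_rng(0)
n, l, d = 500, 8, 3
X, y = rng.normal(size=(n, l)), rng.normal(size=n)
sigma = 0.05 * np.sqrt(d) * n ** 0.2
Phi = sparse_fourier_features(X, m=300 * l, d=d, sigma=sigma, rng=rng)
model = RidgeCV(alphas=np.logspace(-4, 2, 5) * n).fit(Phi, y)
```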
The function to be learned f(x) was a sparse polynomial plus a linear term. The linear term took a ∼ N(0, I), and the polynomial p was chosen to have 3 terms of degree 3. In Figure 3, we show the test error as well as the selected ridge penalty for different values of d and n. With a small amount of data (n < 250), the model with d = 1 has the lowest test error, since this "simplest" model is less likely to overfit. On the other hand, in the intermediate data regime (250 < n < 400), the model with d = 3 does best. For large amounts of data (n > 400), all of the models with interactions d ≥ 3 do roughly the same. Note that with the RBF kernel the RKHS can still capture the degree 3 polynomial model. However, we see that the more complex models have a higher ridge penalty selected. The penalty is able to adaptively control this complexity given enough data. Here we show that sparse random features are stable under spike-and-slab input noise. In this example, the truth follows a linear model, where we have random input points x_i ∼ N(0, I) and linear observations y_i = x_i^T β for i = 1, ..., n and β ∼ N(0, I). However, we only have access to sparsely corrupted inputs w_i = x_i + e_i, where e_i = 0 with probability 1 − p and e_i = x̃ − x_i with probability p, for x̃ ∼ N(0, σ²I). That is, the corrupted inputs are replaced with pure noise. We use p = 0.03 ≪ 1 and σ = 6 ≫ 1 so that the noise is sparse but large when it occurs. In Table 1 we show the performance of various methods on this regression problem given the corrupted data (W, y). Note that if the practitioner has access to the uncorrupted data X, linear regression succeeds with a perfect score of 1. Using kernel ridge regression with the sparse-feature kernel of the form k(x, x′) = 1 − …, the kernel method scores better than plain linear regression (Table 1). The best performance is attained by trimming the outliers and then performing linear regression. However, this is meant to illustrate our point that sparse random features and their corresponding kernels may be useful when dealing with noisy inputs in a learning problem. In Figure 4 we show another way of measuring this stability property. We compute the eigenvalues of the kernel matrix on a fixed dataset of size n = 800 points both with noise and without noise. Plotted are the ratios of the noisy to noiseless eigenvalues, in decibels, which we call the amplification; this is a measure of how corrupted the kernel matrix is by the noise. The main trend we see is that, for fixed p = 0.03, changing the amplitude of the noise σ does not lead to significant amplification, especially of the early eigenvalues which are of largest magnitude. On the other hand, making the outliers denser does lead to more amplification of all the eigenvalues. The eigenspace spanned by the largest eigenvalues is the most "important" for any learning problem.
Table 1: Scores (R² coefficient) of various regression models on linear data with corrupted inputs. In the presence of these errors, linear regression fails to achieve as good a test score as the kernel method, which is almost as good as trimming before performing regression and better than the robust Huber estimator.
Figure 4: Kernel eigenvalue amplification while (left) varying p with σ = 6 fixed, and (right) varying σ with p = 0.03 fixed. Plotted is the ratio of eigenvalues of the kernel matrix corrupted by noise to those without any corruption, ordered from largest to smallest in magnitude. We see that the sparse feature kernel shows little noise amplification when the noise is sparse (right), even for large amplitude. On the other hand, less sparse noise does get amplified (left).
We will now describe a number of common random features and the kernels they generate with fully-connected weights. Later on, we will see how these change as sparsity is introduced in the input-hidden connections. Translation invariant kernels The classical random features sample Gaussian weights w ∼ N (0, σ −2 I), uniform biases b ∼ U [−a, a], and employ the Fourier nonlinearity h(·) = cos(·). This leads to the Gaussian radial basis function kernel In fact, every translation-invariant kernel arises from Fourier nonlinearities for some distributions of weights and biases (Bôchner's theorem). Moment generating function kernels The exponential function is more similar to the kinds of monotone firing rate curves found in biological neurons. In this case, we have k(x, x) = E exp(w (x + x) + 2b). We can often evaluate this expectation using moment generating functions. For example, if w and b are independent, which is a common assumption, then where E (exp(w (x + x)) is the moment generating function for the marginal distribution of w, and E exp(2b) is just a constant that scales the kernel. For multivariate Gaussian weights w ∼ N (m, Σ) this becomes This equation becomes more interpretable if m = 0 and Σ = σ −2 I and the input data are normalized: This highlights that dot product kernels k(x, x) = v(x x), where v: R → R, are radial basis functions on the sphere S l−1 = {x ∈ R l : x 2 = 1}. The eigenbasis of these kernels are the spherical harmonics. Arc-cosine kernels This class of kernels is also induced by monotone "neuronal" nonlinearities and leads to different radial basis functions on the sphere. Consider standard normal weights w ∼ N (0, I) and nonlinearities which are threshold polynomial functions +, where Θ(·) is the Heaviside step function. The kernel in this case is given by for a known function J p (θ) where θ = arccos. Note that arc-cosine kernels are also dot product kernels. Also, if the weights are drawn as w ∼ N (0, σ −2 I), the terms x are replaced by x/σ, but this does not affect θ. With p = 0, corresponding to the step function nonlinearity, we have J 0 (θ) = π − θ, and the ing kernel does not depend on x or x: Sign nonlinearity We also consider a shifted version of the step function nonlinearity, the sign function sgn(z), equal to +1 when z > 0, −1 when z < 0, and zero when and w ∼ P, where P is any spherically symmetric distribution, such as a Gaussian. Then, where e = (x − x)/ x − x 2. The factor E(|w e|) in front of the norm is just a function of the radial part of the distribution P, which we should set inversely proportional to The sparsest networks possible have d = 1, leading to first-order additive kernels. Here we look at two simple nonlinearities where we can perform the sum and obtain an explicit formula for the additive kernel. In both cases, the kernels are simply related to a robust distance metric. This suggests that such kernels may be useful in cases where there are outlier coordinates in the input data. Step function nonlinearity We again consider the step function nonlinearity h(·) = Θ(·), which in the case of fully-connected Gaussian weights leads to the degree p = 0 arc-cosine kernel k(x, For a scalar a, normalization leads to a/ a = sgn(a). Therefore, θ = arccos (sgn(x i) sgn(x i)) = 0 if sgn(x i) = sgn(x i) and π otherwise. Performing the sum in, we find that the kernel becomes This kernel is equal to one minus the normalized Hamming distance of vectors sgn(x) and sgn(x). 
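The claim in the last sentence can be checked numerically: with d = 1, a step nonlinearity, and the usual √2 arc-cosine feature normalization (an assumption on the scaling here), the empirical kernel matches one minus the normalized Hamming distance between sign vectors.

```python
import numpy as np

def sparse_step_features(X, m, rng):
    """d = 1 sparse features with a step nonlinearity: phi_i(x) = sqrt(2) * Theta(w * x_j),
    where each hidden unit reads a single random coordinate j and w ~ N(0, 1)."""
    n, l = X.shape
    Phi = np.empty((n, m))
    for i in range(m):
        j = rng.integers(l)
        w = rng.normal()
        Phi[:, i] = np.sqrt(2.0) * (w * X[:, j] > 0)
    return Phi

rng = np.random.default_rng(0)
x, x2 = rng.normal(size=20), rng.normal(size=20)
Phi = sparse_step_features(np.stack([x, x2]), m=200_000, rng=rng)
k_hat = float(Phi[0] @ Phi[1] / Phi.shape[1])
hamming = np.mean(np.sign(x) != np.sign(x2))
print(k_hat, 1.0 - hamming)   # the two values should roughly agree
```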
The fully-connected kernel, on the other hand, uses the full angle between the vectors x and x. The sparsity can be seen as inducing a "quantization," via the sign function, on these vectors. Finally, if the data are in the binary hypercube, with x and x ∈ {−1, +1} l, then the kernel is exactly one minus the normalized Hamming distance. Sign nonlinearity We now consider a slightly different nonlinearity, the sign function. It will turn out that the kernel is quite different than for the step function. This has h(·) = sgn(·) = 2Θ(·) − 1. Choosing P (w) = 1 2 δ(w + 1) + 1 2 δ(w − 1) and a 2 = −a 1 = a recovers the "random stump" of Rahimi and Recht. Despite the fact that sign is just a shifted version of the step function, the kernels are quite different: the sign nonlinearity does not exhibit the quantization effect and depends on the 1 -norm rather than the 0 -"norm". We now show a basic uniform convergence for any random features, not necessarily sparse, that use Lipschitz continuous nonlinearities. Recall the definition of a Lipschitz function: holds for all x, y ∈ X. Here, · is a norm on X (the 2 -norm unless otherwise specified). Assuming that h is Lipschitz and some regularity assumptions on the distribution µ, the random feature expansion approximates the kernel uniformly over X. As far as we are aware, this has not been stated previously, although it appears to be known (see Bach ) and is very similar to Claim 1 in Rahimi and Recht which holds only for random Fourier features (see also Sutherland and Schneider and Sriperumbudur and Szabo for improved in this case). The rates we obtain for Lipschitz nonlinearities are not essentially different than those obtained in the Fourier features case. As for the examples we have given, the only ones which are not Lipschitz are the step function (order 0 arc-cosine kernel) and sign nonlinearities. Since these functions are discontinuous, their convergence to the kernel occurs in a weaker than uniform sense. However, our does apply to the rectified linear nonlinearity (order 1 arc-cosine kernel), which is non-differentiable at zero but 1-Lipschitz and widely applied in artificial neural networks. The proof of the following Theorem appears at the end of this section. Assume that x ∈ X ⊂ R l and that X is compact, ∆ = diam(X), and the null vector 0 ∈ X. Let the weights and biases (w, b) follow the distribution µ on R l+1 with finite second moments. Let h(·) be a nonlinearity which is L-Lipschitz continuous and define the random feature φ: R l → R by φ(x) = h(w x − b). We assume that the following hold for all x ∈ X: |φ(x)| ≤ κ almost surely, E |φ(x)| 2 < ∞, and ≤ with probability at least Sample complexity Theorem 1 guarantees uniform approximation up to error using m = O features. This is precisely the same dependence on l and as for random Fourier features. Note that also found that m should scale linearly with l to minimize error in a particular classification task. A limitation of Theorem 1 is that it only shows approximation of the limiting kernel rather than direct approximation of functions in the RKHS. A more detailed analysis of the convergence to RKHS is contained in the work of Bach, whereas Rudi and Rosasco directly analyze the generalization ability of these approximations. Sun et al. show even faster rates which also apply to SVMs, assuming that the features are compatible ("optimized") for the learning problem. 
Also, the techniques of Sutherland and Schneider and Sriperumbudur and Szabo could be used to improve our constants and prove convergence in other L p norms. In the sparse case, we must extend our probability space to capture the randomness of the degrees, the neighborhoods conditional on the degree, and the weight vectors conditional on the degree and neighborhood. The degrees are distributed independently according to d i ∼ D, with some abuse of notation since we also use D(d) to represent the probability mass function. We shall always think of the neighborhoods N ∼ ν|d as chosen uniformly among all d element subsets, where ν|d represents this conditional distribution. Finally, given a neighborhood of some degree, the nonzero weights and bias are drawn from a distribution (w, b) ∼ µ|d on R d+1. For simpler notation, we do not show any dependence on the neighborhood here, since we will always take the actual weight values to not depend on the particular neighborhood N. However, strictly speaking, the weights do depend on N because that determines their support. Finally, we use E to denote expectation over all variables (degree, neighborhood, and weights), whereas we use E µ|d for the expectation under µ|d for a given degree. Corollary 2 (Kernel approximation with sparse features). Assume that x ∈ X ⊂ R l and that X is compact, ∆ = diam(X), and the null vector 0 ∈ X. Let the degrees d follow the degree distribution D on [l]. For every d ∈ [l], let µ|d denote the conditional distributions for (w, b) on R d+1 and assume that these have finite second moments. Let h(·) be a nonlinearity which is L-Lipschitz continuous, and define the random feature φ: R l → R by φ(x) = h(w x − b), where w follows the degree distribution model. We assume that the following hold for all x N ∈ X N with |N | = d, and for all 1 ≤ d ≤ l: |φ(x N)| 2 ≤ κ almost surely under µ|d, E |φ(x N)| 2 |d < ∞, and, with probability at least The kernels k are given by equations and. Proof. It suffices to show that conditions on the conditional distributions µ|d, d ∈ [l], imply conditions in Theorem 1. Conditions and clearly hold, since the distribution D has finite support. By construction,, which concludes the proof. Differences of sparsity The only difference we find with sparse random features is in the terms E w 2 and E w, since sparsity adds variance to the weights. This suggests that scaling the weights so that E µ|d w 2 is constant for all d is a good idea. For example, setting With this choice, the number of sparse features needed to achieve an error is the same as in the dense case, up to a small constant factor. This is perhaps remarkable since there could be as many as 2 l terms in the expression of k dist D (x, x). However, the random feature expansion does not need to approximate all of these terms well, just their average. Proof of Theorem 1. We follow the approach of Claim 1 in, a similar for random Fourier features but which crucially uses the fact that the trigonometric functions are differentiable and bounded. For simplicity of notation, let ξ = (x, x) and define the direct sum norm on X + = X ⊕ X as ξ + = x + x. Under this norm X + is a Banach space but not a Hilbert space, however this will not matter. For i = 1,..., m, let f i (ξ) = φ i (x)φ i (x), and note that these g i are i.i.d., centered random variables. By assumptions and, f i and g i are absolutely integrable and k(x, x) = E φ i (x)φ i (x). Denote their mean bȳ Our goal is to show that |ḡ(ξ)| ≤ for all ξ ∈ X + with sufficiently high probability. 
The space X + is compact and 2l-dimensional, and it has diameter at most twice the diameter of X under the sum norm. Thus we can cover X + with an -net using at most T = (4∆/R) 2l balls of radius R. Call the centers of these balls ξ i for i = 1,..., T, and letL denote the Lipschitz constant ofḡ with respect to the sum norm. Then we can show that |ḡ(ξ)| ≤ for all ξ ∈ X + if we show that we have that f i has Lipschitz constant κL w i. This implies that g i has Lipschitz constant ≤ κL(w i + E w). LetL denote the Lipschitz constant ofḡ. Note that EL ≤ 2κLE w. Also, Markov Now we would like to show that |ḡ(ξ i)| ≤ /2 for all i = 1,..., T anchors in the -net. A straightforward application of Hoeffding's inequality and a union bound shows that since |f i (ξ)| ≤ κ 2. Combining equations and in a probability of failure Pr sup Set R = (a/b) 1 2l+2, so that the probability has the form, 2a for all l ∈ N, assuming ∆κL E w 2 + 3(E w) 2 >. Considering the complementary event concludes the proof.
We advocate for random features as a theory of biological neural networks, focusing on sparsely connected networks.
1,740
scitldr
We propose a new application of embedding techniques to problem retrieval in adaptive tutoring. The objective is to retrieve problems similar in mathematical concepts. There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts. Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships. Second, it is difficult for humans to determine a similarity score consistent across a large enough training set. We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of an abstraction and an embedding step. Prob2Vec achieves 96.88\% accuracy on a problem similarity test, in contrast to 75\% from directly applying state-of-the-art sentence embedding methods. It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, an ability humans need time and effort to acquire. In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right. It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate. We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set. The traditional teaching methods that are widely used at universities for science, technology, engineering, and mathematic (STEM) courses do not take different abilities of learners into account. Instead, they provide learners with a fixed set of textbooks and homework problems. This ignorance of learners' prior knowledge, pace of learning, various preferences, and learning goals in current education system can cause tremendous pain and discouragement for those who do not keep pace with this inefficient system BID6; BID5; BID7; BID18; BID43. Hence, e-learning methods are given considerable attention in an effort to personalize the learning process by providing learners with optimal and adaptive curriculum sequences. Over the years, many web-based tools have emerged to adaptively recommend problems to learners based on courseware difficulty. These tools tune the difficulty level of the recommended problems for learners and push them to learn by gradually increasing the difficulty level of recommended problems on a specific concept. The downside of such methods is that they do not take the concept continuity and mixture of concepts into account, but focus on the difficulty level of single concepts. Note that a learner who knows every individual concept does not necessarily have the ability to bring all of them together for solving a realistic problem on a mixture of concepts. As a , the recommender system needs to know similarity/dissimilarity of problems with mixture of concepts to respond to learners' performance more effectively as described in the next paragraph, which is something that is missing in the literature and needs more attention. Since it is difficult for humans to determine a similarity score consistent across a large enough training set, it is not feasible to simply apply supervised methods to learn a similarity score for problems. In order to take difficulty, continuity, and mixture of concepts into account for similarity score used in a personalized problem recommender system in an adaptive practice, we propose to use a proper numerical representation of problems on mixture of concepts equipped with a similarity measure. 
By virtue of vector representations for a set of problems on both single and mixture of concepts (problem embedding) that capture similarity of problems, learners' performance on a problem can be projected onto other problems. As we see in this paper, creating a proper problem representation that captures mathematical similarity of problems is a challenging task, where baseline text representation methods and their refined versions fail to work. Although the state-of-the-art methods for phrase/sentence/paragraph representation are doing a great job for general purposes, their shortcoming in our application is that they take lexical and semantic similarity of words into account, which is totally invalid when dealing with text related to math or any other special topic. The words or even subject-related keywords of problems are not completely informative and cannot contribute to embedding of math problems on their own; as a , the similarity of two problems is not highly correlated with the wording of the problems. Hence, baseline methods perform poorly on the problem similarity detection test in problem recommender application. We find that instead of words or even subject-related keywords, conceptual ideas behind the problems determine their identity. The conceptual particles (concepts) of problems are mostly not directly mentioned in problem wording, but there can be footprints of them in problems. Since problem wording does not capture the similarity of problems, we propose an alternative hierarchical approach called Prob2Vec consisting of an abstraction and an embedding step. The abstraction step projects a problem to a set of concepts. The idea is that there exists a concept space with a reasonable dimension N, with N ranging from tens to a hundred, that can represent a much larger variety of problems of order O(2 N). Each variety can be sparsely inhabited, with some concept combination having only one problem. This is because making problems is itself a creative process: The more innovative a problem is, the less likely it has exactly the same concept combination as another problem. The explicit representation of problems using concepts also enables state-dependent similarity computation, which we will explore in future work. The embedding step constructs a vector representation of the problems based on concept cooccurrence. Like sentence embedding, not only does it capture the common concepts between problems, but also the continuity among concepts. The proposed Prob2Vec algorithm achieves 96.88% accuracy on a problem similarity test, where human experts are asked to label the relative similarity among each triplet of problems. In contrast, the best of the existing methods, which directly applies sentence embedding, achieves 75% accuracy. It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, as the problems in some triplets are highly similar to each other, and only humans with extensive training in the subject are able to identify their relative order of similarity. The problem embedding obtained from Prob2Vec is being used in the recommender system of an e-learning tool for an undergraduate probability course for four semesters with successful on hundreds of students, specially benefiting minorities who tend to be more isolated in the current education system. In addition, the sub-problem of concept labeling in the abstraction step is interesting in its own right. 
It is a multi-label problem suffering from dimensionality explosion, as there can be as many as 2 N problem types. This in two challenges: First, there are very few problems for some types, hence a direct classification on 2 N classes suffers from a severe lack of data. Second, per-concept classification suffers from imbalance of training samples and needs a very small per-concept false positive in order to achieve a reasonable per-problem false positive. We propose pre-training of the neural network with negative samples (negative pre-training) that beats a similar idea to oneshot learning BID15, where the neural network is pre-trained on classification of other concepts to have a warm start on classification of the concept of interest (transfer learning). Embedding applications: the success of simple and low-cost word embedding technique using the well-known neural network (NN) based word embedding method, Word2Vec by BID27 a), compared to expensive natural language processing methods, has motivated researchers to use embedding methods for many other areas. As examples, Doc2Vec , Paper2Vec BID41; BID16 ), Gene2Vec Cox, Graph2Vec Narayanan et al. (2017, Like2Vec, Follower2Vec and many more share the same techniques originally proposed for word embedding with modifications based on their domains, e.g. see BID16 BID23 . In this work, we propose a new application for problem embedding for personalized problem recommendation. Word/Phrase/Sentence/Paragraph embedding: the prior works on word embedding include learning a distributed representation for words by BID2, multi-task training of a convolutional neural network using weight-sharing by BID10, the continuous Skip-gram model by BID27, and the low rank representation of word-word co-occurrence matrix by BID12 ; BID33 . Previous works on phrase/sentence/paragraph embedding include vector composition that is operationalized in terms of additive and multiplicative functions by BID28 BID29 ; BID3, uniform averaging for short phrases by BID27, supervised and unsupervised recursive autoencoder defined on syntactic trees by BID38, training an encoderdecoder model to reconstruct surrounding sentences of an encoded passage by BID19, modeling sentences by long short-term memory (LSTM) neural network by BID40 and convolutional neural networks by BID4, and the weighted average of the words in addition to a modification using PCA/SVD by BID0. The simple-tough-to-beat method proposed by BID0 beats all the previous methods and is the baseline in text embedding. The rest of the paper is outlined as follows. The description of the data set for which we do problem embedding in addition to our proposed Prob2Vec method for problem embedding are presented in section 2. Negative pre-training for NN-based concept extraction is proposed in section 3. Section 4 describes the setup for similarity detection test for problem embedding, evaluates the performance of our proposed Prob2Vec method versus baselines, and presents the on the negative pre-training method. Section 5 concludes the paper with a discussion on opportunities for future work. Consider a set of M problems P = {P 1, P 2, · · ·, P M} for an undergraduate probability course, where each problem can be on a single or mixture of concepts among the set of all N concepts C = {C 1, C 2, · · ·, C N}. 
Note that these concepts are different from keywords of problem wordings and are not originally included in problems, but labeling problems with concepts is a contribution of this work that is proposed for achieving a proper problem representation. Instead, problems are made of words from the set DISPLAYFORM0 |Wi|. In the following subsection, we propose the Prob2Vec method for problem embedding that uses an automated rule-based concept extractor, which relieves reliance on human labeling and annotation for problem concepts. As shown in section 4, using the set of words DISPLAYFORM0 or even a subset of keywords to represent problems, text embedding baselines fail to achieve high accuracy in similarity detection task for triplets of problems. In the keyword-based approach, all redundant words of problems are ignored and the subject-related and informative words such as binomial, random variable, etc., are kept. However, since the conceptual ideas behind problems are not necessarily mapped into normal and mathematical words used in problems, even the keyword-based approach fails to work well in the similarity detection task that is explained in section 4. Alternatively, we propose a hierarchical method consisting of abstraction and embedding steps that generates a precise embedding for problems that is completely automated. The block diagram of the proposed Prob2Vec method is depicted in figure 1.(i) Abstraction step: similarity among mathematical problems is not captured by the wording of problems; instead, it is determined by the abstraction of the problems. Learners who have difficulty solving mathematical problems mostly lack the ability to do abstraction and relate problems with appropriate concepts. Instead, they try to remember procedure-based rules to fit problems in them and use their memory to solve them, which does not apply to solving hard problems on mixture of concepts. We observe the same pattern in problem representation; i.e. problem statements do not necessarily determine their identity, instead abstraction of problems by mapping them into representative concepts moves problem embedding from lexical similarity to conceptual similarity. The concepts of a problem are not mostly mentioned directly in its text, but there can be footmarks of concepts in problems. A professor and two experienced teaching assistants are asked to formulate rule-based mappings from footmarks to concepts for automation of concept extraction. As an example, the rule for labeling a problem with concept "nchoosek" is \\\\choose|\\\\binom|\\\\frac\{\s*\w+\!\s*\}\{\s*\w+\!\s*\\\\times\s*\w+\!\s*\}. By applying the rule-based concept extractor to problems, we have a new representation for problems in concept space instead of word space; i.e. (ii) Embedding step: a method similar to Skip-gram in Word2Vec is used for concept embedding. The high-level insight of Skip-gram is that a neural network with a single hidden layer is trained, where its output relates to how likely it is to have each concept co-occurred in a problem with the input concept. As an example, if concept "law-of-total-probability" of a problem is input of the neural network, we expect the neural network to state it more likely to have concept "conditional-probability" in the problem than for unrelated concepts like "Poisson-process". However, the neural network is not used for this task, but the goal is to use weights of the hidden layer for embedding. 
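Looking back at the abstraction step (i), the rule-based extractor can be sketched in a few lines of Python. The "nchoosek" pattern is the rule quoted above rewritten as a raw string; the "conditional-probability" rule and the sample problem are hypothetical stand-ins.

```python
import re

# Illustrative rule-based concept extractor: each concept is detected by a regex
# "footprint" in the problem's LaTeX source.
CONCEPT_RULES = {
    "nchoosek": r"\\choose|\\binom|\\frac\{\s*\w+\!\s*\}\{\s*\w+\!\s*\\times\s*\w+\!\s*\}",
    "conditional-probability": r"\\mid|P\s*\(\s*\w+\s*\|\s*\w+\s*\)|given that",  # hypothetical
}

def extract_concepts(problem_text):
    """Map a problem statement to its set of concepts via regex footprints."""
    return {c for c, rule in CONCEPT_RULES.items()
            if re.search(rule, problem_text, flags=re.IGNORECASE)}

problem = r"How many committees of 4 can be formed, i.e. $\binom{10}{4}$, given that ..."
print(extract_concepts(problem))   # -> {'nchoosek', 'conditional-probability'}
```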
Recall the set of all concepts as {C 1, C 2, · · ·, C N}, where a problem is typically labeled with a few of them. Consider onehot coding from concepts to real-valued vectors of size N that are used for training of the neural network, where the element corresponding to a concept is one and all other elements are zero. We consider 10 neurons in hidden layer with no activation functions (so the embedded concept vectors have 10 features) and N neurons in the output that form a softmax regression classifier. In order to clarify on input-output pair of neural network, assume a problem that has a set of concepts {C 1, C 2, C 5}. The neural network is trained on all pairs DISPLAYFORM1 DISPLAYFORM2, and (C 5, C 2), where the one-hot code of the first element of a pair is the input and the one-hot code of the second element of a pair is the output of the neural network in the training phase. Hence, the neural network is trained over DISPLAYFORM3 number of training data. This way, the neural network learns the statistic from the number of times that a pair is fed into it (the neural network is probably fed with more training pairs of ("law-of-total-probability", "conditional-probability") than the pair ("law-of-total-probability", "Poisson-process")). Note that during training phase, input and output are one-hot vectors representing the input and output concepts, but after training when using the neural network, given a one-hot input vector, the output is a probability distribution on the set of all concepts. Finally, since input concepts are coded as one-hot codes, rows of hidden layer weight matrix, which is of size N by 10, are concept vectors (concept embedding) which we are really after. Denoting embedding of concept c ∈ C by E(c), problem embedding denoted by E i for problem P i is obtained as follows: DISPLAYFORM4 where f c is frequency of concept c ∈ C in our data set of M = 635 problems. Concept embedding is scaled by its corresponding frequency to penalize concepts with high frequency that are not as informative as low frequency ones. For example, concept "pmf" is less informative than concept "ML-parameter-estimation". Given problem embedding, similarity between two problems is defined as cosine of the angle between corresponding vector representations. Remark. We choose rule-based concept extractor for the abstraction step over any supervised/unsupervised classification methods for concept extraction because of two main reasons. First, there is a limited number of problems for most concepts as few as a couple of problems which makes any supervised classification method inapplicable due to lack of training data. Second, there are N concepts, so potentially there can be 2 N − 1 categories of problems which makes classification challenging for any supervised or unsupervised methods. Consider the maximum number of concepts in a problem to be k, then there are on the order of O(N k) categories of problems. Even if we consider possessing acceptable number of problems for each of the O(N k) categories, false positive needs to be on the order of O(1 N k) so that the number of false positives for each category be in the order of O. Given that there are N = 96 concepts, utilizing a supervised or unsupervised approach to achieve such a low false positive is not feasible. 
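A compact sketch of the embedding step, using gensim's Skip-gram implementation as a stand-in for the single-hidden-layer network described above and weighting each concept vector by its inverse frequency (one natural reading of the frequency scaling in the text). The toy concept lists are hypothetical.

```python
from collections import Counter
import numpy as np
from gensim.models import Word2Vec

# Hypothetical labeled problems: each problem is represented only by its concept set.
problem_concepts = [
    ["law-of-total-probability", "conditional-probability"],
    ["conditional-probability", "Bayes-formula"],
    ["Poisson-process", "exponential-distribution"],
    ["law-of-total-probability", "Bayes-formula", "conditional-probability"],
]

# Skip-gram over concept co-occurrence within a problem; a large window makes every
# concept of a problem a context of every other, matching the pair construction above.
model = Word2Vec(problem_concepts, vector_size=10, sg=1, window=10,
                 min_count=1, epochs=200, seed=0)

freq = Counter(c for concepts in problem_concepts for c in concepts)

def prob2vec(concepts):
    """Problem embedding: frequency-penalized sum of its concept vectors."""
    return sum(model.wv[c] / freq[c] for c in concepts)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(prob2vec(problem_concepts[0]), prob2vec(problem_concepts[1])))
```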
Exploiting the rule-based classifier for problem labeling though, we succeed to achieve as low as average 0.98% false positive and average 9.17% false negative for all concepts, where problem concepts annotated by experts are considered to be ground truth. Although 100.00% accuracy is observed in similarity detection test when utilizing problem concepts annotated by experts, those concepts are not necessarily the global optimum ground truth or the only one. Thinking of increasing accuracy in similarity detection task as an optimization problem, there is not necessarily a unique global optimum for problem concepts that lead to good performance. Hence, not having a very low false negative for rule-based does not necessarily mean that such labels are not close to a local/global optimum problem concepts. In fact, rule-based extracted concepts achieve a high 96.88% accuracy on a similarity test as is mentioned in section 4. For the purpose of problem embedding, Prob2Vec discussed in section 2.1 with a rule-based concept extractor has an acceptable performance. Here, a NN-based concept extractor is proposed that can be a complement to the rule-based version, but we mainly study it to propose our novel negative pre-training method for reducing false negative and positive ratios for concept extraction with an imbalanced training data set. Negative pre-training outperforms a similar method to one-shot learning (transfer learning) as data level classification algorithms to tackle imbalanced training data sets. The setup for concept extractor using neural networks without any snooping of human knowledge is presented first, then we propose some tricks for reducing false negative and positive ratios. The neural network used for concept extraction has an embedding layer with linear perceptrons followed by two layers of perceptrons with sigmoid activation function, and an output layer with a single perceptron with sigmoid classifier. For common words in Glove and our data set, embedding layer is initialized by Glove BID33, but for words in our data set that are not in Glove, the weights are initialized according to a uniform distribution over [−1, 1]. The embedding size is considered to be 300 and each of the other two layers have 60 perceptrons, followed by output which has a single perceptron, indicating if a concept is in the input problem or not. Note that for each concept, a separate neural network is trained. The issue with training a single neural network for all concepts is the imbalanced number of positive and negative samples for each concept. A concept is only present in a few of M = 635 problems, so having a single neural network, too many negative samples for each concept are fed into it, dramatically increasing false negatives. In the following, some tricks used for training of the above naive NN-based concept extractor are presented that reduce FN and FP ratios by at least 43.67% and up to 76.51% compared to using down sampling, which is a standard approach for training on imbalanced training data sets. The main challenge in obtaining low FN and FP is that as few as 12 problems are labeled by a concept, which makes training of neural network challenging by having 12 positive and 623 negative samples.(a) Negative pre-training: few of M = 635 problems are labeled with a specific concept, e.g. 12 problems have concept "hypothesis-MAP". Hence, few positive samples and many negative samples are provided in our data set for training of a NN-based concept extractor for a specific concept. 
A neural network cannot obviously be trained on an imbalanced set where all negative samples are mixed with few positive ones or FN increases dramatically. Instead, we propose two phases of training for concept C i. Consider P i = {P j : C i ∈ C j} and N i = {P j : DISPLAYFORM0 In the first phase, neural network is pre-trained on a pure set of negative samples, N i \ N i, where the trained neural network is used as a warm start for the second phase. In the second phase of training, neural network is trained on a balanced mixture of positive and negative samples, P i ∪ N i . Utilizing negative pre-training, we take advantage of negative samples in training, and not only does FN not increase, but we get an overall lower FN and FP compared to down sampling. Due to curse of dimensionality, neural network learns a good portion of the structure of negative samples in the first phase of negative pre-training that provides us with a warm start for the second phase.(b) One-shot learning (transfer learning), BID15: in the first phase of training, the neural network is first trained on classification of bags of problems with equal number of negative and positive samples of concepts that are not of interest, P j ∪ N j j =i. Then, the trained neural network is used as a warm start in the second training phase for classification of the concept of interest on a balanced set, P i ∪ N i.(c) Word selection: due to limited number of positive training samples, a neural network cannot tune any number of parameters to find important features of problems. Moreover, as a rule of thumb, the fewer parameters the neural network has, the less it is vulnerable to over-fitting and the faster it can converge to an acceptable classifier. To this end, an expert TA is asked to select informative words out of total 2242 words that are originally used for input of neural network, where this process took less than an hour. The redundant words in problems are omitted and only those among 208 selected words related to probability are kept in each problem, which reduces size of embedding matrix from 2242 × 300 to 208 × 300 and inputs more informative features to neural network. In section 4, it is shown that FP and FN ratios are reduced under this trick by at least 25.33% and up to 61.34%, which is an indication that selected words are more representative than the original ones. These selected words have been used in problem embedding for modified versions of baselines in section 4 as evidence that even keyword-based versions of embedding baselines do not capture similarity of problems. For evaluation of different problem embedding methods, a ground truth on similarity of problems is needed. To this end, four TAs are asked to select random triplets of problems, say (A, B, C) ∈ P 3 with A = B = C, and order them so that problem A is more similar to B than C; i.e. if the similarity between two problems is denoted by sim(., .), we have sim(A, B) > sim(A, C). Finally, a head TA brings into a consensus and chooses 64 triplets of problems. Note that the set of all M = 635 problems are divided into 26 modules, where each module is on a specific topic, e.g. hypothesis testing, central limit theorem, and so on. The three problems of a triplet are determined to be in the same module, so they are already on the same topic that makes similarity detection task challenging. 
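A sketch of negative pre-training for a single concept classifier, using scikit-learn's MLPClassifier in place of the Glove-initialized network described above (the embedding layer is omitted and inputs are assumed to be pre-computed feature vectors). The hidden sizes of 60 and the logistic activations follow the text, while the epoch counts are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def negative_pretraining(X_pos, X_neg, rng=None, epochs=(20, 200)):
    """Two-phase training for one concept: (1) warm start on a pure-negative pool,
    (2) continue on a balanced mixture of positives and held-out negatives."""
    rng = np.random.default_rng(rng)
    clf = MLPClassifier(hidden_layer_sizes=(60, 60), activation="logistic",
                        learning_rate_init=1e-3)

    # Reserve a small negative subset N_i' for the balanced second phase.
    idx = rng.permutation(len(X_neg))
    n_bal = len(X_pos)
    X_neg_bal, X_neg_pool = X_neg[idx[:n_bal]], X_neg[idx[n_bal:]]

    # Phase 1: pre-train on pure negatives (label 0 only) as a warm start.
    y_pool = np.zeros(len(X_neg_pool), dtype=int)
    clf.partial_fit(X_neg_pool, y_pool, classes=[0, 1])
    for _ in range(epochs[0] - 1):
        clf.partial_fit(X_neg_pool, y_pool)

    # Phase 2: balanced mixture of positives and held-out negatives.
    X2 = np.vstack([X_pos, X_neg_bal])
    y2 = np.concatenate([np.ones(len(X_pos), dtype=int),
                         np.zeros(len(X_neg_bal), dtype=int)])
    for _ in range(epochs[1]):
        order = rng.permutation(len(X2))
        clf.partial_fit(X2[order], y2[order])
    return clf
```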
As evidence, the similarity gap histogram for the 64 triplets of problems, sim(A, B) − sim(A, C), according to expert annotation for problem concepts and Skip-gram-based problem embedding, is shown in FIG2. It should be noted that the mentioned problem embedding is empirically proven to have the highest accuracy of 100.00% in our similarity detection test. The expert annotation for problem concepts are done by an experienced TA unaware of problem embedding project, so no bias is brought to concept labeling process. The similarity gap histogram depicts it well that similarity detection for the 64 triplets of problems is challenging due to skewedness of triplets in first bins. Prob2Vec is compared with different baseline text embedding methods in terms of the accuracy in determining the more similar problem of the second and third to the first one of a triplet. The experimental are reported in table 1. The baselines are mainly categorized into three groups as 1-Glove-based problem embedding that is derived by taking the uniform (or weighted with word frequencies) average of Glove word embedding, where the average can be taken over all words of a problem or some representative words of the problem. 2- BID0 suggest to remove the first singular vector from Glove-based problem embedding, where that singular vector corresponds to syntactic information and common words. 3-SVD-based problem embedding that has the same hierarchical approach as Prob2Vec, but concept embedding in the second step is done based on SVD decomposition of the concept co-occurrence matrix BID21. The details on baseline methods can be found in appendix A. The number of errors that the method with the best performance in each of the above categories makes in different bins of the similarity gap are shown in figure 2. For example, there are 20 triplets with similarity gap in the range [0.01, 0.21] and the best Glove-based method makes six errors out of these 20 triplets in the similarity detection test. According to table 1, the best Glove-based method is taking uniform average of embedding of selected words. Interesting patterns on concept continuity and similarity are observed from Prob2Vec concept embedding where two of them are shown in table 2. As other examples, it is observed that the most similar concept to function-RV is CDF, where function-RV refers to finding the distribution of a function of a random variable. As a TA of probability course for three years, most students do not have any clues where to start on problems for function-RV, and we always tell them to start with finding CDF of the function of random variable. It is worthy to see that NN-based concept embedding can capture such relation between concepts in seconds with small number of training samples while a human that is trained over the whole semester at university is mostly clueless where to start. We further observe the ML-parameter-E concept to be most related to the concept differentiation, where ML-parameter-E refers to maximum likelihood (ML) parameter estimation. Again, students do not get this relation for a while and they need a lot of training to get the association of MLparameter-E with differentiation of likelihood of observation to find ML parameter estimation. As another example, Bayes-formula is most similar to law-of-total-probability and the list goes on. 
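The accuracy figures above come down to a simple triplet test: for each expert-ordered triplet (A, B, C), check whether the embedding agrees that A is closer to B than to C under cosine similarity. A minimal sketch, with the embedding dictionary and triplet list as hypothetical placeholders for the 64 annotated triplets:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def triplet_accuracy(embeddings, triplets):
    """Fraction of triplets (A, B, C), ordered so that sim(A, B) > sim(A, C) by the
    experts, for which the embedding agrees under cosine similarity.
    `embeddings` maps problem ids to vectors; `triplets` is a list of id triples."""
    hits = sum(
        cosine(embeddings[a], embeddings[b]) > cosine(embeddings[a], embeddings[c])
        for a, b, c in triplets
    )
    return hits / len(triplets)
```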
Table 3: False negative and positive ratios of NN-based concept extraction for 1: "sample-space", 2: "Covariance", 3: "joint-Gaussian", 4: "hypothesis-MAP", 5: "hypothesis-ML".Comparison of negative pre-training, one-shot learning, word selection, down sampling, and combination of these methods applied to training process of the NN-based concept extractor is presented in table 3 for five concepts. In order to find the empirical false negative and positive ratios for each combination of methods in table 3, training and cross validation are done for 100 rounds on different random training and test samples, and false negative and positive ratios are averaged in the 100 rounds. As an example, the set N i ⊂ N i and test set are randomly selected in each of the 100 rounds for negative pre-training method, then the false negative and positive ratios of the trained neural network on the 100 instances are averaged. Employing the combination of word selection and negative pre-training reduces false negative and positive ratios by at least 43.67% and up to 76.51% compared to the naive down sampling method. For some concepts, the combination of word selection, one-shot learning, and negative pre-training in a slightly lower false negative and positive ratios than the combination of word selection and negative pre-training. However, investigating the whole table, one finds out that word selection and negative pre-training are the causes for reducing false negative and positive ratios. It is of interest that NN-based approach can reduce FN for the concept "event" to 5.11% with FP of 6.06%, where rule-based has FN of 35.71% with FP of 5.31%. A hierarchical embedding method called Prob2Vec for subject specific text is proposed in this paper. Prob2Vec is empirically proved to outperform baselines by more than 20% in a properly validated similarity detection test on triplets of problems. The Prob2Vec embedding vectors for problems are being used in the recommender system of an e-learning tool for an undergraduate probability course for four semesters. We also propose negative pre-training for training with imbalanced data sets to decrease false negatives and positives. As future work, we plan on using graphical models along with problem embedding vectors to more precisely evaluate the strengths and weaknesses of students on single and mixture of concepts to do problem recommendation in a more effective way.one of popular methods, E w: w ∈ W, where we use Glove, problem embedding for P i that is denoted by E i is computed as follows: DISPLAYFORM0 where u is the first principle component of E i: 1 ≤ i ≤ M and a is a hyper-parameter which is claimed to in best performance when a = 10 −3 to a = 10 −4. We tried different values for a inside this interval and out of it and found a = 10 −5 and a = 10 −3 to best work for our data set when using all words and a = 2 × 10 −2 to best work for when using selected words. (iii) 3-SVD: using the same hierarchical approach as Prob2Vec, concept embedding in the second step can be done with an SVD-based method instead of the Skip-gram method as follows. Recall that the concept dictionary is denoted by {C 1, C 2, · · ·, C N}, where each problem is labeled with a subset of these concepts. Let N c (C i, C j) for i = j denote number of cooccurrences of concepts C i and C j in problems of data set; i.e. there are N c (C i, C j) number of problems that are labeled with both C i and C j. 
The co-occurrence matrix is formed as follows: The SVD decomposition of the P P M I matrix is as P P M I = U SV, where U, S, V ∈ R N ×N, and S is a diagonal matrix. Denote embedding size of concepts by d ≤ N, and let U d be the first d columns of matrix U, S d be a diagonal matrix with the first d diagonal elements of diagonal matrix S, and V d be the first d rows of matrix V. The followings are different variants of SVD-based concept embedding BID21: DISPLAYFORM1 • eig: embedding of N concepts are given by N rows of matrix U d that are of embedding length d. • sub: N rows of U d S d are embedding of N concepts.• shifted: the P P M I matrix is defined in a slightly different way in this variant as follows: Note that the P P M I matrix is not necessarily symmetric in this case. By deriving U d and S d matrices as before, embedding of N concepts are given by N rows of U d S d.
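A minimal numpy sketch of this SVD-based concept embedding baseline; since the exact PPMI definition is elided above, the standard positive pointwise mutual information is assumed, and only the eig and sub variants are shown (the shifted variant only changes how the PPMI matrix is built).

```python
import numpy as np

def svd_concept_embeddings(cooc, d, variant="eig"):
    """cooc: symmetric N x N concept co-occurrence count matrix; d: embedding size."""
    total = cooc.sum()
    row = cooc.sum(axis=1, keepdims=True)
    col = cooc.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(cooc * total / (row * col))       # assumed PMI normalisation
    ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)
    u, s, _ = np.linalg.svd(ppmi)                      # PPMI = U S V
    if variant == "eig":                               # rows of U_d
        return u[:, :d]
    if variant == "sub":                               # rows of U_d S_d
        return u[:, :d] * s[:d]
    raise ValueError("variant must be 'eig' or 'sub'")
```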
We propose the Prob2Vec method for problem embedding, used in a personalized e-learning tool, along with a data-level classification method, called negative pre-training, for cases where the training data set is imbalanced.
1,741
scitldr
We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increased depths. The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN. Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high performance deep convolutional neural networks without residual connections. Moreover, through investigating the behavior and performance of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which differs from the FractalNet that is also a deep convolutional neural network without residual connections. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training. Deep convolutional neural networks (CNNs) have significantly improved the performance of image classification BID3 BID25. However, training a CNN also becomes increasingly difficult with the network deepening. One of important research efforts to overcome this difficulty is to develop new neural network architectures BID6 BID14.Recently, residual network BID3 and its variants BID8 have used residual connections among layers to train very deep CNN. The residual connections promote the feature reuse, help the gradient flow, and reduce the need for massive parameters. The ResNet BID3 and DenseNet BID8 achieved state-of-the-art accuracy on benchmark datasets. Alternatively, FractalNet BID14 expanded the convolutional layers in a fractal form to generate deep CNNs. Without residual connections BID3 and manual deep supervision BID15, FractalNet achieved high performance on image classification based on network structural design only. Many studies tried to understand reasons behind the representation view of deep CNNs. BID27 showed that residual network can be seen as an ensemble of relatively shallow effective paths. However, BID2 argued that ensembles of shallow networks cannot explain the experimental of lesioning, layer dropout, and layer reshuffling on ResNet. They proposed that residual connections have led to unrolled iterative estimation in ResNet. Meanwhile, BID14 speculated that the high performance of FractalNet was due to the unrolled iterative estimation of features of the longest path using features of shorter paths. Although unrolled iterative estimation model can explain many experimental , it is unclear how it helps improve the classification performance of ResNet and FractalNet. On the other hand, the ensemble model can explain the performance improvement easily. In this work, we propose CrescendoNet, a new deep convolutional neural network with ensemble behavior. Same as other deep CNNs, CrescendoNet is created by stacking simple building blocks, called Crescendo blocks FIG0 ). Each Crescendo block comprises a set of independent feed-forward paths with increased number of convolution and batch-norm layers (a). We only use the identical size, 3 × 3, for all convolutional filters in the entire network. 
Despite its simplicity, CrescendoNet shows competitive performance on benchmark CIFAR10, CI-FAR100, and SVHN datasets. Similar to FractalNet, CrescendoNet does not include residual connections. The high performance of CrescendoNet also comes completely from its network structural design. Unlike the FractalNet, in which the numbers of convolutional layers and associated parameters are increased exponentially, the numbers of convolutional layers and parameters in Crescendo blocks are increased linearly. CrescendoNet shows clear ensemble behavior (Section 3.4). In CrescendoNet, although the longer paths have better performances than those of shorter paths, the combination of different length paths have even better performance. A set of paths generally outperform its subsets. This is different from FractalNet, in which the longest path alone achieves the similar performance as the entire network does, far better than other paths do. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new pathwise training procedure, in which paths in each building block are trained independently and sequentially. The path-wise procedure can reduce the memory needed for training. Especially, we can reduce the amortized memory used for training CrescendoNet to about one fourth. We summarize our contribution as follows:• We propose the Crescendo block with linearly increased convolutional and batch-norm layers. The CrescendoNet generated by stacking Crescendo blocks further demonstrates that the high performance of deep CNNs can be achieved without explicit residual learning.• Through our analysis and experiments, we discovered an emergent behavior which is significantly different from which of FractalNet. The entire CrescendoNet outperforms any subset of it can provide an insight of improving the model performance by increasing the number of paths by a pattern.• We introduce a path-wise training approach for CrescendoNet, which can lower the memory requirements without significant loss of accuracy given sufficient data. Crescendo Block The Crescendo block is built by two layers, the convolution layer with the activation function and the following batch normalization layer BID10. The convolutional layers have the identical size, 3 × 3. The Conv-Activation-BatchNorm unit f 1, defined in the Eq.1 is the base branch of the Crescendo block. We use ReLU as the activation function to avoid the problem of vanishing gradients BID17. DISPLAYFORM0 The variable z denotes the input feature maps. We use two hyper-parameters, the scale S and the interval I to define the structure of the Crescendo block H S. The interval I specifies the depth difference between every two adjacent branches and the scale S sets the number of branches per block. The structure of the n th branch is defined by the following equation: DISPLAYFORM1 where the superscript nI is the number of recursion time of the function f 1. The structure of Crescendo block H S can be obtained below: DISPLAYFORM2 where ⊕ denotes an element-wise averaging operation. Note that the feature maps from each path are averaged element-wise, leaving the width of the channel unchanged. A Crescendo block with S = 4 and I = 1 is shown in FIG0.The structure of Crescendo block is designed for exploiting more feature expressiveness. The different depths of parallel paths lead to different receptive fields and therefore generate features in different abstract levels. 
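A minimal PyTorch sketch of one Crescendo block as defined above (the paper's reference implementation is in TensorFlow); channel counts are kept constant across branches for simplicity, whereas the experiments below also use widening configurations. With scale = 4 and interval = 1 this gives branches of depth 1 through 4, joined by element-wise averaging as in FIG0.

```python
import torch
import torch.nn as nn

class ConvUnit(nn.Module):
    """The basic f1 unit: 3x3 convolution, ReLU activation, then batch normalization."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, z):
        return self.bn(torch.relu(self.conv(z)))

class CrescendoBlock(nn.Module):
    """Crescendo block H_S: S independent branches, the n-th branch stacking n * I
    ConvUnits, joined by element-wise averaging (channel width is left unchanged)."""
    def __init__(self, channels, scale=4, interval=1):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(*[ConvUnit(channels) for _ in range(n * interval)])
            for n in range(1, scale + 1)
        )

    def forward(self, z):
        return torch.stack([branch(z) for branch in self.branches]).mean(dim=0)
```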
In addition, such an incremental and parallel form explicitly supports the ensemble effects, which shows excellent characteristics for efficient training and anytime classification. We will explain and demonstrate this in the following sections. CrescendoNet Architecture The main body of CrescendoNet is composed of stacked Crescendo blocks with max-pooling layers between adjacent blocks (FIG0). Following the main body, like most deep CNNs, we use two fully connected layers and a soft-max layer as the classifier. In all experiments, the two fully connected layers have 384 hidden units and 192 hidden units respectively. The overall structure of CrescendoNet is simple and we only need to tune the Crescendo block to modify the entire network. To reduce the memory consumption during training CrescendoNet, we propose a path-wise training procedure, leveraging the independent multi-path structure of our model. We denote stacked Conv-BatchNorm layers in one Crescendo block as one path. We train each path individually, from the shortest to the longest repetitively. When we are training one path, we freeze the parameters of other paths. In other words, these frozen layers only provide learned features to support the training. FIG1 illustrates the procedure of path-wise training within a CrescendoNet block containing four paths. There are two advantages of path-wise training. First, path-wise training procedure significantly reduces the memory requirements for convolutional layers, which constitutes the major memory cost for training CNNs. For example, the higher bound of the memory required for computation and storage of gradients using momentum stochastic gradient descent algorithms can be reduced to about 40% for a Crescendo block with 4 paths where interval = 1. Second, path-wise training works well with various optimizers and regularizations. Even dropout and drop-path can be applied to the model during the training. Dropout and drop-connect BID28, which randomly set a selected subset of activations or weights to zero respectively, are effective regularization techniques for deep neural networks. Their variant, drop-path BID14, shows further performance improvement by dropping paths when training FractalNet. We use both dropout and drop-path for regularizing the Crescendo block. We drop the branches in each block with a predefined probability. For example, given drop-path rate, p = 0.3, the expectation of the number of dropped branches is 1.2 for a Crescendo block with four branches. For the fully connected layers, we use L2 norm of their weights as an additional term to the loss. We evaluate our models with three benchmark datasets: CIFAR10, CIFAR100, and Street View House Numbers (SVHN) BID19. CIFAR10 and CIFAR100 each have 50,000 training images and 10,000 test images, belonging to 10 and 100 classes respectively. All the images are in RGB format with the size of 32 × 32-pixel. SVHN are color images, with the same size of 32 × 32-pixel, containing 604,388 and 26,032 images for training and testing respectively. Note that these digits are cropped from a series of numbers. Thus, there may be more than one digit in an image, but only the one in the center is used as the label. For data augmentation, we use a widely adopted scheme BID16 BID14 BID6 b; BID24 BID21 BID3. We first pad images with 4 zero pixels on each side, then crop padded images to 32 × 32-pixel randomly and horizontally flipping with a 50% probability. 
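The augmentation scheme just described corresponds to a standard pipeline along the following lines (a hedged torchvision sketch; the per-channel normalization statistics shown are commonly used CIFAR-10 values and are an assumption, not taken from the paper).

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Pad(4),                       # pad 4 zero pixels on each side
    transforms.RandomCrop(32),               # random 32x32 crop of the padded image
    transforms.RandomHorizontalFlip(p=0.5),  # flip horizontally with 50% probability
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),   # assumed CIFAR-10 statistics
                         std=(0.2470, 0.2435, 0.2616)),
])
```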
We preprocess each image in all three datasets by subtracting off the mean and dividing the variance of the pixels. We use Mini-batch gradient descent to train all our models. We implement our models using TensorFlow distributed computation framework BID0 and run them on NVidia P100 GPU. We also optimize our models by adaptive momentum estimation (Adam) optimization BID11 and Nesterov Momentum optimization BID18 respectively. For Adam optimization, we set the learning rate hyper-parameter to 0.001 and let Adam adaptively tune the learning rate during the training. We choose the momentum decay hyper-parameter β 1 = 0.9 and β 2 = 0.999. And we set the smoothing term = 10 −8. This configuration is the default setting for the AdamOptimizer class in TensorFlow. For Nesterov Momentum optimization, we set the hyper-parameter momentum = 0.9. We decay the learning rate from 0.1 to 0.01 after 512 epochs for CIFAR and from 0.05 to 0.005, then to 0.0005, after 42 epochs and 63 epochs respectively for SVHN. We use truncated normal distribution for parameter initialization. The standard deviation of hyper-parameters is 0.05 for convolutional weights and 0.04 for fully connected layer weights. For all datasets, we use the batch size of 128 on each training replica. For the whole net training, we run 700 epochs on CIFAR and 70 epochs on SVHN. For the path-wise training, we run 1400 epochs on CIFAR and 100 epochs on SVHN.Using a CrescendoNet model with three blocks each contains four branches as illustrated in FIG0, we investigate the following preliminary aspects: the model performance under different block widths, the ensemble effect, and the path-wise training performance. We study the Crescendo block with three different width configurations: equal width globally, equal width within the block, and increasing width. All the three configurations have the same fully connected layers. For the first one, we set the number of feature maps to 128 for all the convolutional layers. For the second, the numbers of feature maps are for convolutional layers in each block. For the last, we gradually increase the feature maps for each branch in three blocks to correspondingly. For example, the number of feature maps for the second and fourth branches in the second block is and. The exact number of maps for each layer is defined by the following equation: DISPLAYFORM0 where n maps denotes the number of feature maps for a layer, n inmaps and n outmaps are number of input and output maps respectively, n layers is the number of layers in the block, and i layer is the index of the layer in the branch, starting from 1.To inspect the ensemble behavior of CrescendoNet, we compare the performance of models with and without drop-path technique and subnets composed by different combinations of branches in each block. For the simplicity, we denote the branch combination as a set P containing the index of the branch. For example, P = {1, 3} means the blocks in the subnet only contains the first and third branches. The same notation is used in TAB0 and FIG2. Table 1 gives a comparison among CrescendoNet and other representative models on CIFAR and SVHN benchmark datasets. For five datasets, CrescendoNet with only 15 layers outperforms almost all networks without residual connections, plus original ResNet and ResNet with Stochastic Depth. 
For CIFAR10 and CIFAR100 without data augmentation, CrescendoNet also performs better than all the given models except DenseNet with bottleneck layers and compression (DenseNet-BC) with 250 layers. However, CrescendoNet's error rate 1.76% matches the 1.74% error rate of given DenseNet-BC, on SVHN dataset which has plentiful data for each class. Comparing with FractalNet, another outstanding model without residual connection, CrescendoNet has a simpler structure, fewer parameters, but higher accuracies. The lower rows in Table 1 compare the performance of our model given different configuration. In three different widths, the performance simultaneously grows with the number of feature maps. In other words, there is no over-fitting when we increase the capacity of CrescendoNet in an appropriate scope. Thus, CrescendoNet demonstrates a potential to further improve its performance by scaling up. In addition, the drop-path technique shows its benefits to our models on all the datasets, just as it does to FractalNet. Another interesting from Table 1 is the performance comparison between Adam and Nesterov Momentum optimization methods. Comparing with Nesterov Momentum method, Adam performs similarly on CIFAR10 and SVHN, but worse on CIFAR100. Note that there are roughly 60000, 5000, and 500 training images for each class in SVHN, CIFAR10, and CIFAR100 respectively. This implies that Adam may be a better option for training CrescendoNet when the training data is abundant, due to the convenience of its adaptive learning rate scheduling. The last row of Table 1 gives the from path-wise training. Training the model with less memory requirement can be achieved at the cost of some performance degradation. However, Pathwise trained CrescendoNet still outperform many of networks without residual connections on given datasets.. The show the ensemble behavior of our model. Specifically, the more paths contained in the network, the better the Table 1: Whole net classification error (%) on CIFAR10/CIFAR100/SVHN. We highlight the top three accuracies in each column with the bold font. The three numbers in the parentheses denote the number of output feature maps of each block. The plus sign (+) denotes the data augmentation. The sign (-W) means that the feature maps of layers in each branch increase as explained in the model configuration section. The compared models include: Network in Network BID24, ALL-CNN , Deeply Supervised Net BID15, Highway Network BID24, FractalNet BID14, ResNet BID3, ResNet with Stochastic Depth BID7, Wide ResNet BID30, and DenseNet BID6 performance. And the whole net outperforms any single path network with a large margin. For example, the whole net and the net based on the longest path show the inference error rate of 6.90% and 10.69% respectively, for CIFAR10 without data augmentation. This implicit ensemble behavior differentiates CrescendoNet from FractalNet, which shows a student-teacher effect. Specifically, the longest path in FractalNet can achieve a similar or even lower error rate compared to the whole net. To investigate the dynamic behavior of subnets, we test the error rate changes of subnets during the training. We use Adam to train the CrescendoNet with the structure shown in FIG0 on CIFAR10 for 450 epochs. FIG2 illustrates the behavior of different path combinations during the training. It shows that the inference accuracy of the whole net grows simultaneously with all the subnets, which demonstrates the ensemble effect. 
Second, for any single path network, the performance grows with the depth. This behavior of the anytime classifier is also shown by FractalNet. In other words, we could use the short path network to give a rough but quick inference, then use more paths to gradually increase the accuracy. This may be useful for time-critical applications, like integrated recognition system for autonomous driving tasks. Conventional deep CNNs, such as AlexNet VGG-19 (Simonyan & BID20, directly stacked the convolutional layers. However, the vanishing gradient problem makes it difficult to train and tune very deep CNN of conventional structures. Recently, stacking small convolutional blocks has become an important method to build deep CNNs. Introducing new building blocks becomes the key to improve the performance of deep CNN. BID16 first introduced the NetworkInNetwork module which is a micro neural network using a multiple layer perceptron (MLP) for local modeling. Then, they piled the micro neural networks into a deep macro neural network. BID25 introduced a new building block called Inception, based on which they built GoogLeNet. Each Inception block has four branches of shallow CNNs, building by convolutional kernels with size 1 × 1, 3 × 3, 5 × 5, and max-pooling with kernel size 3 × 3. Such a multiple-branch scheme is used to extract diversified features while reducing the need for tuning the convolutional sizes. The main body of GoogLeNet has 9 Inception blocks stacked each other. Stacking multiplebranch blocks can create an exponential combination of feed-forward paths. Such a structure com-bined with the dropout technique can show an implicit ensemble effect BID27 BID22. GoogLeNet was further improved with new blocks to more powerful models, such as Xception BID1 and Inception-v4 BID26. To improve the scalability of GoogLeNet, BID26 used convolution factorization and label-smoothing regularization in Inception-v4. In addition, BID1 explicitly defined a depth-wise separable convolution module replacing Inception module. Recently, BID14 introduced FractalNet built by stacked Fractal blocks, which are the combination of identical convolutional layers in a fractal expansion fashion. FractalNet showed that it is possible to train very deep neural network through the network architecture design. FractalNet implicitly also achieved deep supervision and student-teacher learning by the fractal architecture. However, the fractal expansion form increases the number of convolution layers and associated parameters exponentially. For example, the original FractalNet model with 21 layers has 38.6 million parameters, while a ResNet of depth 1001 with similar accuracy has only 10.2 million parameters BID6. Thus, the exponential expansion reduced the scalability of FractalNet. Another successful idea in network architecture design is the use of skip-connections BID3 b; BID6 BID30 BID29. ResNet BID3 used the identity mapping to short connect stacked convolutional layers, which allows the data to pass from a layer to its subsequent layers. With the identity mapping, it is possible to train a 1000-layer convolutional neural network. BID6 recently proposed DenseNet with extremely residual connections. They connected each layer in the Dense block to every subsequent layer. DenseNet achieved the best performance on benchmark datasets so far. On the other hand, Highway networks BID23 used skip-connections to adaptively infuse the input and output of traditional stacked neural network layers. 
Highway networks have helped to achieve high performance in language modeling and translation. CNN has shown excellent performance on image recognition tasks. However, it is still challenging to tune, modify, and design an CNN. We propose CrescendoNet, which has a simple convolutional neural network architecture without residual connections BID3. Crescendo block uses convolutional layers with same size 3 × 3 and joins feature maps from each branch by the averaging operation. The number of convolutional layers grows linearly in CrescendoNet while exponentially in FractalNet BID14. This leads to a significant reduction of computational complexity. Even with much fewer layers and a simpler structure, CrescendoNet matches the performance of the original and most of the variants of ResNet on CIFAR10 and CIFAR100 classification tasks. Like FractalNet BID14, we use dropout and drop-path as regularization mechanisms, which can train CrescendoNet to be an anytime classifier, namely, CrescendoNet can perform inference with any combination of the branches according to the latency requirements. Our experiments also demonstrated that CrescendoNet synergized well with Adam optimization, especially when the training data is sufficient. In other words, we can avoid scheduling the learning rate which is usually performed empirically for training existing CNN architectures. CrescendoNet shows a different behavior from FractalNet in experiments on CIFAR10/100 and SVHN. In FractalNet BID14, the longest path alone achieves the similar performance as the entire network, far better than other paths, which shows the student-teacher effect. The whole FractalNet except the longest path acts as a scaffold for the training and becomes dispensable later. On the other hand, CrescendoNet shows that the whole network significantly outperforms any set of it. This fact sheds the light on exploring the mechanism which can improve the performance of deep CNNs by increasing the number of paths.
We introduce CrescendoNet, a deep CNN architecture built by stacking simple building blocks without residual connections.
1,742
scitldr
Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity). Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose output are the parameters of a spline. Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models. We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neuroscience data to obtain a low-dimensional representation of spiking activity. Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input. Gaussian Processes (GPs) are one of the main tools for modeling random functions BID24. They allow control of the smoothness of the function by choosing an appropriate kernel but have the disadvantage that, except in special cases (for example BID11 ; BID9), inference in GP models scales poorly in both memory and runtime. Furthermore, GPs cannot easily handle shape constraints. It can often be of interest to model a function under some shape constraint, for example nonnegativity, monotonicity or convexity/concavity BID22 BID26 BID23 BID20. While some shape constraints can be enforced by transforming the GP or by enforcing them at a finite number of points, doing so cannot always be done and usually makes inference harder, see for example BID18.Splines are another popular tool for modeling unknown functions BID29. When there are no shape constraints, frequentist inference is straightforward and can be performed using linear regression, by writing the spline as a linear combination of basis functions. Under shape constraints, the basis function expansion usually no longer applies, since the space of shape constrained splines is not typically a vector space. However, the problem can usually still be written down as a tractable constrained optimization problem BID26. Furthermore, when using splines to model a random function, a distribution must be placed on the spline's parameters, so the inference problem becomes Bayesian. BID7 proposed a method to perform Bayesian inference in a setting without shape constraints, but the method relies on the basis function expansion and cannot be used in a shape constrained setting. Furthermore, fairly simple distributions have to be placed on the spline parameters for their approximate posterior sampling algorithm to work adequately, which in the splines having a restrictive and oversimplified distribution. On the other hand, deep probabilistic models take advantage of the major progress in neural networks to fit rich, complex distributions to data in a tractable way BID25 BID21 BID15 BID10 BID14. However, their goal is not usually to model random functions. In this paper, we introduce Deep Random Splines (DRS), an alternative to GPs for modeling random functions. DRS are a deep probabilistic model in which standard Gaussian noise is transformed through a neural network to obtain the parameters of a spline, and the random function is then the corresponding spline. This combines the complexity of deep generative models and the ability to enforce shape constraints of splines. 
We use DRS to model the nonnegative intensity functions of Poisson processes BID16. In order to ensure that the splines are nonnegative, we use a parametrization of nonnegative splines that can be written as an intersection of convex sets, and then use the method of alternating projections BID28 to obtain a point in that intersection (and differentiate through that during learning). To perform scalable inference, we use a variational autoencoder BID15 with a novel encoder architecture that takes multiple, truly continuous point processes as input (not discretized in bins, as is common).Our contributions are: (i) Introducing DRS, (ii) using the method of alternating projections to constrain splines, (iii) proposing a variational autoencoder model whith a novel encoder architecture for point process data which uses DRS, and (iv) showing that our model outperforms commonly used alternatives in both simulated and real data. The rest of the paper is organized as follows: we first explain DRS, how to parametrize them and how constraints can be enforced in section 2. We then present our model and how to do inference in section 3. We then compare our model against competing alternatives in simulated data and in two real spiking activity datasets in section 4, and observe that our method outperforms the alternatives. Finally, we summarize our work in section 5. Throughout the paper we will consider functions on the interval [T 1, T 2) and will select I + 1 fixed knots T 1 = t 0 < · · · < t I = T 2. We will refer to a function as a spline of degree d and smoothness s < d if the function is a d-degree polynomial in each interval [t i, t i+1) for i = 0,..., I − 1, is continuous, and s times differentiable. We will denote the set of splines of degree d and smoothness s by G d,s = {g ψ : ψ ∈ Ψ d,s}, where Ψ d,s is the set of parameters of each polynomial in each interval. That is, every ψ ∈ Ψ d,s contains the parameters of each of the I polynomial pieces (it does not contain the locations of the knots as we take them to be fixed since we observed overfitting when not doing so). While the most natural ways to parametrize splines of degree d are a linear combination of basis functions or with the d + 1 polynomial coefficients of each interval, these parametrizations do not lend themselves to easily enforce constraints such as nonnegativity BID26 ). We will thus use a different parametrization which we will explain in detail in the next section. We will denote by Ψ ⊆ Ψ d,s the subset of spline parameters that in the splines having the shape constraint of interest, for example, nonnegativity. DRS are a distribution over G d,s. To sample from a DRS, a standard Gaussian random variable Z ∈ R m is transformed through a neural network parametrized by θ, f θ: R m → Ψ. The DRS is then given by g f θ (Z) and inference on θ can be performed through a variational autoencoder BID15. Note that f maps to Ψ, thus ensuring that the spline has the relevant shape constraint. We now explain how we can enforce piecewise polynomials to form a nonnegative spline. We add the nonnegativity constraint to the spline as we will use it for our model in section 3, but constraints such as monotonicity and convexity/concavity can be enforced in an analogous way. In order to achieve this, we use a parametrization of nonnegative splines that might seem overly complicated at first. 
However, it has the critical advantage that it decomposes into the intersection of convex sets that are easily characterized in terms of the parameters, which is not the case for the naive parametrization which only includes the d + 1 coefficients of every polynomial. We will see how to take advantage of this fact in the next section. A beautiful but perhaps lesser known spline (see BID17) gives that a polynomial p(t) of degree d, where d = 2k + 1 for some k ∈ N, is nonnegative in the interval [l, u) if and only if it can be written down as follows: DISPLAYFORM0where [t] = (1, t, t 2, . . ., t k) and Q 1 and Q 2 are (k + 1) × (k + 1) symmetric positive semidefinite matrices. It follows that a piecewise polynomial of degree d with knots t 0,..., t I defined as p (i) (t) for t ∈ [t i−1, t i) for i = 1,..., I is nonnegative if and only if it can be written as: DISPLAYFORM1 for i = 1,..., I, where each Q1 and Q2 are (k + 1) × (k + 1) symmetric positive semidefinite matrices. We can thus parametrize every piecewise nonnegative polynomial on our I intervals with (Q DISPLAYFORM2 . If no constraints are added on these parameters, the ing piecewise polynomial might not be smooth, so certain constraints have to be added in order to guarantee that we are parametrizing a nonnegative spline and not just a nonnegative piecewise polynomial. To that end, we define C 1 as the set of (Q DISPLAYFORM3 DISPLAYFORM4 That is, C 1 is the set of parameters whose ing piecewise polynomial as in equation 2 is continuous. Analogously, let C j for j = 2, 3, . . . be the set of (Q DISPLAYFORM5 such that: DISPLAYFORM6 So that C j is the set of parameters whose corresponding piecewise polynomials have matching left and right (j − 1)-th derivatives. Let C 0 be the set of (Q DISPLAYFORM7 which are symmetric positive semidefinite. We can then parametrize the set of nonnegative splines on DISPLAYFORM8 Note that the case where d is even can be treated analogously (see supplementary material). In order to use a DRS, f θ has to map to Ψ, that is, we need to have a way for a neural network to map to the parameter set corresponding to nonnegative splines. We achieve this by taking f θ (z) = h(f θ (z)), wheref θ is an arbitrary neural network and h is a surjective function onto Ψ. The most natural choice for h is the projection onto Ψ. However, while computing the projection onto Ψ (for Ψ as in section 2.1) can be done by solving a convex optimization problem, it cannot be done analytically. This is an issue because when we train the model, we will need to differentiate f θ with respect to θ. Note that BID2 propose a method to have an optimization problem as a layer in a neural network. One might hope to use their method for our problem, but it cannot be applied due to the semidefinite constraint on our matrices. The method of alternating projections BID28 BID3 allows us to approximately compute such a function h analytically. If C 0,..., C s+1 are closed, convex sets in DISPLAYFORM0, where P j is the projection onto C j for j = 0,..., s + 1. The method of alternating projections then consists on iteratively projecting onto each set in a cyclic fashion. We call computing ψ (k) from ψ (k−1) the k-th iteration of the method of alternating projections. This method can be useful to obtain a point in the intersection if each P j can be easily computed. In our case, projecting onto C 0 can be done by doing eigenvalue decompositions of Q and zeroing out negative elements in the diagonal matrices containing the eigenvalues. 
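Two small numpy sketches of the machinery just described: evaluating one nonnegative piece of odd degree d = 2k + 1 from its two PSD matrices as in equation 1, and the projection onto C_0 together with the outer alternating-projections loop. The smoothness projections P_1, ..., P_{s+1} are left abstract here since they reduce to small tridiagonal systems (see the supplementary material); the rest is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np

def eval_nonneg_piece(t, q1, q2, l, u):
    """One nonnegative polynomial piece of odd degree d = 2k + 1 on [l, u),
    parametrized as in equation 1 by two (k + 1) x (k + 1) PSD matrices q1, q2."""
    k = q1.shape[0] - 1
    tv = np.array([t ** j for j in range(k + 1)])        # [t] = (1, t, ..., t^k)
    return tv @ q1 @ tv + (t - l) * (u - t) * (tv @ q2 @ tv)

def project_psd(q):
    """Projection onto C_0 for one matrix: eigendecompose, zero out negative eigenvalues."""
    w, v = np.linalg.eigh(q)
    return (v * np.maximum(w, 0.0)) @ v.T

def alternating_projections(psi, projections, n_iters):
    """Method of alternating projections: cyclically apply P_0, ..., P_{s+1} for n_iters sweeps.
    Each element of `projections` maps the full parameter tuple psi to its projection."""
    for _ in range(n_iters):
        for project in projections:
            psi = project(psi)
    return psi

# Tiny check of the parametrization: any q = A A^T is PSD, so the piece is nonnegative on [0, 1).
rng = np.random.default_rng(0)
a, b = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
vals = [eval_nonneg_piece(t, a @ a.T, b @ b.T, 0.0, 1.0) for t in np.linspace(0.0, 1.0, 9)]
assert min(vals) >= 0.0
```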
While this might seem computationally expensive, the matrices are small and this can be done efficiently. For example, for cubic splines (d = 3), there are 2I matrices each one of size 2 × 2. Projecting onto C j for j = 1,... s + 1 can be done analytically as it can be formulated as a quadratic optimization problem with linear constraints. Furthermore, because of the local nature of the constraints where every interval is only constrained by its neighboring intervals, this quadratic optimization problem can be reduced to solving a tridiagonal system of linear equations of size I − 1 which can be solved efficiently in O(I) time with simplified Gaussian elimination. While the derivation of this fact is a straightforward application of the KKT conditions, the algebra is cumbersome, so we omit it here to include it in the supplementary material. By letting h be the first M iterations of the method of alternating projections, we can ensure that f θ maps (approximately) to Ψ, while still being able to compute ∇ θ f θ (z). Note that we could find such an h function using Dykstra's algorithm (not to be confused with Dijkstra's shortest path algorithm), which is a modification of the method of alternating projections that converges to the projection of BID8 BID4 BID27 ), but we found that the method of alternating projections was faster to differentiate when using reverse mode automatic differentiation packages BID0. DISPLAYFORM1 Another way of finding such an h would be unrolling any iterative optimization method that solves the projection onto Ψ, such as gradient-based methods or Newton methods. We found the alternating projections method more convenient as it does not involve additional hyperparameters such as learning rate that drastically affect performance. Furthermore, the method of alternating projections is known to have a linear convergence rate (as fast as gradient-based methods) that is independent of the starting point BID3. This last observation is important, as the starting point in our case is determined by the output off θ, so that the convergence rate being independent of the starting point ensures thatf θ cannot learn to ignore h, which is not the case for gradient-based and Newton methods (for a fixed number of iterations and learning rate, there might exist an initial point that is too far away to actually reach the projection). Finally, note that if we wanted to enforce, for example, that the spline be monotonic, we could parametrize its derivative and force it to be nonnegative or nonpositive. Convexity or concavity can be enforced analogously. Since we will use DRS as intensity functions for Poisson processes, we begin this section with a brief review of these processes. An inhomogeneous Poisson process in a set S is a random subset of S. The process can (for our purposes) be parametrized by an intensity function g: S → R + and in our case, S = [T 1, T 2). We write S ∼ PP S (g) to denote that the random set S, whose elements we call events, follows a Poisson process on S with intensity g. If S = {x k} K k=1 ∼ PP S (g), then |S ∩ A|, the number of events in any A ⊆ S, follows a Poisson distribution with parameter A g(t)dt and the log likelihood of S is given by: DISPLAYFORM0 Splines have the very important property that they can be analytically integrated (as the integral of polynomials can be computed in closed form), which allows to exactly evaluate the log likelihood in equation 5 when g is a spline. 
As a consequence, fitting a DRS to observed events is more tractable than fitting models that use GPs to represent g, such as log-Gaussian Cox processes BID22. Inference in the latter type of models is very challenging, despite some efforts by Cunningham et al. FORMULA1; BID1 BID19. Splines also vary smoothly, which incorporates the reasonable assumption that the expected number of events changes smoothly over time. These properties were our main motivations for choosing splines to model intensity functions. Suppose we observe N simultaneous point processes in [T 1, T 2) a total of R repetitions (we will call each one of these repetitions/samples a trial). Let X r,n denote the n-th point process of the r-th trial. Looking ahead to an application we study in the , data of this type is a standard setup for microelectrode array data, where N neurons are measured from time T 1 to time T 2 for R repetitions, and each event in the point processes corresponds to a spike (the time at which the neurons "fired"). Each X r,n is also called a spike train. The model we propose, which we call DRS-VAE, is as follows: DISPLAYFORM0 where each f (n) θ: R m → Ψ is obtained as described in section 2.2. The hidden state Z r for the r-th trial X r:= (X r,1, . . ., X r,N) can be thought as a low-dimensional representation of X r. Note that while the intensity function of every point process and every trial is a DRS, the latent state Z r of each trial is shared among the N point processes. Once again, one might think that our parametrization of nonnegative splines is unnecessarily complicated and that having f (n) θ in equation 6 be a simpler parametrization of an arbitrary spline (e.g. basis coefficients) and using τ (g ψr,n) instead of g ψr,n, where τ is a nonnegative function, might be a better solution to enforcing nonnegativity constraints. The function τ would have to be chosen in such a way that the integral of equation 5 can still be computed analytically, making τ (t) = t 2 a natural choice. While this would avoid having to use the method of alternating projections, we found that squared splines perform very poorly as they oscillate too much. Autoencoding variational Bayes BID15 ) is a technique to perform inference in the following type of model: DISPLAYFORM0 where each Z r ∈ R m is a local hidden variable which we do not observe, θ are the model parameters and X r is the data that we actually observe, whose distribution depends on Z r. A variational autoencoder estimates θ and approximates the posterior p(z|x) by a distribution q φ (z|x) parametrized by φ. Further simplifying assumptions are made and q φ (z|x) is taken such that it respects conditional independence: DISPLAYFORM1 where each q φ (z r |x r) is taken to be normal with mean and variance depending on R m valued nonlinear functions (usually taken to be neural networks) of x r: DISPLAYFORM2 where diag σ 2 φ (x r) is a diagonal matrix whose diagonal elements are given by σ 2 φ (x r). Performing (approximate) Bayesian inference becomes finding values of φ that adequately approximate the true posterior. To achieve this task, the ELBO L, which is given by the following expression, is jointly maximized over (θ, φ): DISPLAYFORM3 Maximizing the ELBO over φ is equivalent to minimizing the KL of the approximate posterior to the true posterior (for a fixed θ), while maximizing it over θ is equivalent to maximizing a lower bound on the log likelihood. 
This lower bound is close to the actual log likelihood when the true posterior is correctly approximated. Furthermore, the first term in the sum in equation 10 can be written down in closed form as it is just the KL divergence between two normal random variables, while the second term in the sum can be written using the reparametrization trick: DISPLAYFORM4 where refers to coordinate-wise multiplication. This allows for straightforward differentiation with respect to φ, and thus stochastic gradient methods can be used. In order to perform inference, we use autoencoding variational Bayes. Because of the point process nature of the data, µ φ and σ φ require a recurrent architecture, since their input x r = (x r,1, x r,2, . . ., x r,N) consists of N point processes. This is challenging because the input is not just a sequence, but N sequences of different lengths (numbers of events). In order to deal with this, we use N separate LSTMs BID13, one per point process. Each LSTM takes as input the events of the corresponding point process. The final states of each LSTM are then concatenated and transformed through a dense layer (followed by an exponential activation in the case of σ φ to ensure positivity) in order to map to the hidden space R m. We also tried bidirectional LSTMs BID12 but found regular LSTMs to be faster while having similar performance. The architecture is depicted in figure 1. Combining equations 10 and 11 for our model of equation 6, we approximate the ELBO at each stochastic gradient step by: DISPLAYFORM5 DISPLAYFORM6 where B is a randomly selected subset of trials, 1,... L are iid N (0, I m), µ φ,j (x r) and σ 2 φ,j (x r) are, respectively, the j-th coordinates of µ φ (x r) and σ 2 φ (x r), K r,n is the number of events in the n-th point process of the r-th trial, ψ r,n,l = f DISPLAYFORM7 l ) and x r,n,k is the k-th event of the n-th point process of the r-th trial. BID10 have a similar model, where a hidden Markov model is transformed through a neural network to obtain event counts on time bins. The hidden state for a trial in their model is then an entire hidden Markov chain, which will have significantly higher dimension than our hidden state. Also, their model can be recovered from ours if we change the standard Gaussian distribution of Z r in equation 6 to reflect their Markovian structure and choose G to be piecewise constant, nonnegative functions. We also emphasize the fact that our model is very easy to extend: for example, it would be straightforward to extend it to multi-dimensional point processes (not neural data any more) by changing G and its parametrization. It is also straightforward to use a more complicated point process than the Poisson one by allowing the intensity to depend on previous event history. Furthermore, DRS can be used in settings that require random functions, even if no point process is involved. We simulated data with the following procedure: First, we set 2 different types of trials. For each type of trial, we sampled one true intensity function on for each of the N = 2 point processes by sampling from a GP and exponentiating the . We then sampled 600 times from each type of trial, ing in 1200 trials. We randomly selected 1000 trials for training and set aside the rest for testing. We then fit the model described in section 3.2 and compared it against the PfLDS model of BID10 and the GPFA model of BID30. 
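Before describing how these baselines handle time (they discretize it into bins, as discussed next), here is a hedged PyTorch sketch of the encoder of figure 1: one LSTM per point process reads that process's event times as a scalar sequence, the final hidden states are concatenated, and two dense heads output the posterior mean and a positive variance via an exponential. Layer sizes follow the simulated-data experiments; batching, empty spike trains, and other details are omitted, and the class name is an assumption.

```python
import torch
import torch.nn as nn

class PointProcessEncoder(nn.Module):
    def __init__(self, n_processes, hidden=100, latent_dim=2):
        super().__init__()
        # One LSTM per point process (per neuron), each reading scalar event times.
        self.lstms = nn.ModuleList(
            nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            for _ in range(n_processes)
        )
        self.mu_head = nn.Linear(n_processes * hidden, latent_dim)
        self.log_var_head = nn.Linear(n_processes * hidden, latent_dim)

    def forward(self, event_times):
        """event_times: list of N 1-D tensors, the event times of each process for one trial."""
        finals = []
        for lstm, times in zip(self.lstms, event_times):
            _, (h, _) = lstm(times.view(1, -1, 1))    # feed the K_n events as a scalar sequence
            finals.append(h[-1].squeeze(0))           # final hidden state of this process's LSTM
        joint = torch.cat(finals, dim=-1)
        mu = self.mu_head(joint)
        sigma2 = torch.exp(self.log_var_head(joint))  # exponential keeps the variance positive
        return mu, sigma2
```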
Both of these methods discretize time into B time bins and have a latent variable per time bin and per trial (as opposed to our model which is only per trial). They do this as a way of enforcing temporal smoothness by placing an appropriate prior over their latent trajectories, which we do not have to do as we implicitly enforce temporal smoothness by using splines to model intensity functions. PfLDS uses Gaussian linear dynamics for their latent space and a Poisson distribution on the number of events per time bin, while GPFA places a GP distribution on the latent space and a Gaussian distribution on the square-rooted number of events per time bin. We compare these methods against DRS-VAE since both were designed to analyze the same type of point process data and, inference wise, PfLDS uses a similar autoencoding variational Bayes algorithm whereas GFPA uses GPs to model random functions. We used a uniform grid with 11 knots (ing in I = 10 intervals), picked d = 3, s = 2, used a mini-batch size of 2 and used L = 2. The state of each LSTM has 100 units, andf is a feed-forward neural network with ReLU activations and with 3 hidden layers, each one with 100 units. We apply 102 iterations of the method of alternating projections. Since a twice-differentiable cubic spline on I intervals has I + 3 degrees of freedom, when discretizing time for PfLDS and GPFA we use B = I + 3 = 13 time bins. This way the distribution recovered by PfLDS also has B = 13 degrees of freedom, while the distribution recovered by GPFA has even more, as each Gaussian has a covariance in addition to the mean. We set the latent dimension m in our model to 2 and we also set the latent dimension per time bin in PfLDS and GPFA to 2, meaning that the overall latent dimension for an entire trial was 2B = 26. These two choices make the comparison conservative as they allow more flexibility for the two competing methods than for ours. For the feed-forward architecture in PfLDS, we also used 3 hidden layers, each with 100 units. The left panel of FIG3 shows the posterior means of the hidden variables in our model for each of the 200 test trials. Each posterior mean is colored according to its type of trial. We can see that different types of trials form separate clusters, meaning that our model successfully obtains low-dimensional representations of the trials. Note that the model is trained without having access to the type of each trial; colors are assigned in the figure post hoc. The right panel of FIG3 shows the events (in red) for a particular point process on a particular trial, along with the true intensity (in green) that generated the events and posterior samples from our model (in purple) and from PfLDS (in blue) of the corresponding intensities. Note that since PfLDS parametrizes the number of counts on each time bin, it does not have a corresponding intensity. We plot instead a piecewise constant intensity on each time bin in such a way that the ing Poisson process has the same count distribution as the distribution that is parametrized by PfLDS. We can see that our method recovers a smoother function that is closer to the truth than PfLDS. TAB0 shows performance from our model compared against PfLDS and GPFA. The second column shows the per-trial ELBO on test data, and we can see that our model has a larger ELBO than PfLDS. While having a better ELBO does not imply that our log likelihood is better, it does suggest that it is. 
Since both PfLDS and GPFA put a distribution on event counts on time bins instead of a distribution on event times as our models does, the log likelihoods are not directly comparable. However, in the case of PfLDS, we can easily convert from the Poisson likelihood on time bins to the piecewise constant intensity Poisson process likelihood, so that the numbers become comparable. In order to get a quantitative comparison between our model and GPFA, we take advantage of the fact that we know the true intensity that generated the data and compare average L 2 distance, across point processes and trials, between posterior intensity samples and actual intensity function. Once again, we can see that our method outperforms the alternatives. Table 1 also includes the standard deviation of these L 2 distances. Since the standard deviations are somewhat large in comparison to the means, for each of the two competing alternatives, we carry out a two sample t-test comparing the L 2 distance means obtained with our method against the alternative. The p-values indicate that our method recovers intensity functions that are closer to the truth in a statistically significant way. We also fit our model to the dataset collected by BID5. The dataset, after preprocessing (see supplementary material for details), consists of measurements of 20 neurons for 3590 trials on the interval [−100, 300) (in ms) of a primate. In each trial, the primate reaches with its arm to a specific location, which changes from trial to trial (we can think of the 40 locations as types of trials), where time 0 corresponds to the beginning of the movement. We randomly split the data into a training set with 3000 trials and a test set with the rest of the trials. We chose d = 3, s = 2, L = 2, applied 102 iterations of the method of alternating projections, set the state of each LSTM to have 25 units andf is a feed-forward network with ReLU activations and with 3 hidden layers, each one with 10 units (we tried more complicated architectures but saw no improvement). We used 18 uniformly spaced knots (that is, 17 intervals). For the comparison against PfLDS, we split time into 20 bins, ing in time bins of 20ms (which is a standard length), once again making sure that the degrees of freedom are comparable. Since we do not have access to the ground truth, we do not compare against GPFA as the L 2 metric computed in the previous section cannot be used here. Again, we used a hidden dimension m = 2 for our model, ing in hidden trajectories of dimension 40 for PfLDS. We experimented with larger values of m but did not observe significant improvements in either model. FIG4 shows the spike train (red) for a particular neuron on a particular trial, along with posterior samples from our model (in purple) and from PfLDS (in blue) of the corresponding intensities. We can see that the posterior samples look like plausible intensities to have generated the corresponding spike trains and that our posterior intensities look smoother than the ones obtained by PfLDS. TAB1 shows the per-trial ELBO on test data for our model and for PfLDS. Again, our model has a larger ELBO than PfLDS, even when PfLDS has access to 20 times more hidden dimensions: our method is more successful at producing low-dimensional representations of trials than PfLDS. The table also shows the percentage of correctly predicted test trial types when using 15-nearest neighbors on the posterior means of train data (the entire trajectories are used for PfLDS). 
While 23.73% might seem small, it should be noted that it is significantly better than random guessing (which would have 2.5% accuracy) and that the model was not trained to minimize this objective. Regardless, we can see that our method also outperforms PfLDS in this metric, even when using a much lower-dimensional representation of each trial. The last entry in the table shows the percentage of explained variation when doing ANOVA on the test posterior means (denoted SSG/SST), using trial type as groups. Once again, we can see that our model recovers a more meaningful representation of the trials. We also fit our model to a not yet published dataset collected by our collaborators from the Churchland lab at Columbia University. After preprocessing (see supplementary material), it consists of 1300 and 188 train and test trials, respectively. During each trial, 20 neurons of a primate are measured as it pedals for approximately 8s. There are 8 types of trials, based on whether the primate is pedaling forwards or backwards and at which speed. We use the same hyperparameter settings as for the reaching data, except we use 26 uniformly spaced knots (25 intervals) and 28 bins for PfLDS, as well as a hidden dimension m = 10, ing in hidden trajectories of dimension 280 for PfLDS. TAB2 quantitatively compares our method against PfLDS. The ELBO is actually higher for PfLDS, which appears (in preliminary analysis not shown) to be caused by an artifact of preprocessing the data rather than any essential performance loss. While the ELBO was better for PfLDS, the quality of our latent representations is significantly better, as shown by the accuracy of 15-nearest neighbors to predict test trial types (random guessing would have 12.5% accuracy) and the ANOVA percentage of explained variation of the test posterior means. This is particularly impressive as our latent representations have 28 times fewer dimensions. We did experiment with different hyperparameter settings, and found that the ELBO of PfLDS increased slightly when using more time bins (at the cost of even higher-dimensional latent representations), whereas our ELBO remained the same when increasing the number of intervals. However, even in this setting the accuracy of 15-nearest neighbors and the percentage of explained variation did not improve for PfLDS. In this paper we introduced Deep Random Splines, an alternative to Gaussian processes to model random functions. Owing to our key modeling choices and use of from the spline and optimization literatures, fitting DRS is tractable and allows one to enforce shape constraints on the random functions. While we only enforced nonnegativity and smoothness in this paper, it is straightforward to enforce constraints such as monotonicity (or convexity/concavity). We also proposed a variational autoencoder that takes advantage of DRS to accurately model and produce meaningful low-dimensional representations of neural activity. Future work includes using DRS-VAE for multi-dimensional point processes, for example spatial point processes. While splines would become harder to use in such a setting, they could be replaced by any family of easily-integrable nonnegative functions, such as, for example, conic combinations of Gaussian kernels. Another line of future work involves using a more complicated point process than the Poisson, for example a Hawkes process, by allowing the parameters of the spline in a certain interval to depend on the previous spiking history of previous intervals. 
Finally, DRS can be applied in more general settings than the one explored in this paper since they can be used in any setting where a random function is involved, having many potential applications beyond what we analyzed here. Parametrization for nonnegative splines of even degreeAs mentioned on section 2.1, there is an alternative characterization of nonnegative polynomials of even degree d = 2k on an interval [l, u) that is analogous to equation 1 of the manuscript, which says that the polynomial p is nonnegative on the interval if and only if it can be written as: DISPLAYFORM0 where again, [t] = (1, t, t 2, . . ., t k) and Q 1 is a (k + 1) × (k + 1) symmetric positive semidefinite matrix. In this case Q 2 is now a k × k symmetric positive semidefinite matrix and[t] = (1, t, t 2, . . ., t k−1). Again, it follows that a piecewise polynomial of degree d with knots t 0,..., t I defined as p (i) (t) for t ∈ [t i−1, t i) for i = 1,..., I is nonnegative if and only if it can be written as: DISPLAYFORM1 for i = 1,..., I, where each Q Projecting onto the space of smooth splines As mentioned in section 2.2, mapping to Ψ = ∩ s+1 j=0 C j can be achieved through the method of alternating projections. As mentioned previously, projecting onto C 0 can be easily done through eigen-decomposition. We now go through the details on how to project onto the other C j sets. We will only cover C 1, C 2 and C 3 for odd-degree splines as we used splines of degree 3 and smoothness 2, but projecting onto C j for j ≥ 4 for higher degree splines can be done in an analogous way. Projections for even degree splines can also be derived in an analogous way. Suppose we are given (Q DISPLAYFORM0, which are (k +1)×(k +1) matrices (not necessarily in Ψ), defining a piecewise polynomial as in equation 2 of the manuscript. Computing the projection (X DISPLAYFORM1 onto C 1 can be done by solving the following optimization problem: DISPLAYFORM2 where || · || F denotes the Frobenius norm and each constraint is merely forcing the piecewise function to be continuous at knot i for i = 1, . . ., I − 1. Note that this is a quadratic optimization problem with linear constraints, and can be solved analytically. The corresponding Lagrangian is: DISPLAYFORM3 where λ = (λ 1, . . ., λ I−1) ∈ R I−1. By solving the KKT conditions, it can be verified that: DISPLAYFORM4 Analogously, computing the projection (X DISPLAYFORM0 onto C 2 can be done by solving the following optimization problem: DISPLAYFORM1 where [t] = (0, 0, 2, 6t, . . ., k(k − 1)t k−2 ) and each constraint is now forcing the values of the left and right second derivatives of the piecewise function to match at knot i for i = 1,..., I − 1. Again, this is a quadratic optimization problem with linear constraints. By writing the Lagrangian and solving the KKT conditions, we get: where B i−1 = 2M DISPLAYFORM2 We include only successful trials (i.e. when the primate reaches to the correct location) and use only spikes occurring in a window of −100ms and 300ms from the time that movement starts. We also reduce the total number of neurons as inference with our method requires one LSTM per neuron and having too many neurons renders training slow. In order to do so, we use the following GLM: DISPLAYFORM0 where y r is the trial type of trial r, C = 40 is the number of trial types,K r,· ∈ R N is a vector containing the (centered and standardized) number of spikes in trial r for each of the N = 223 neurons, and β ∈ R N ×C are the GLM parameters. 
We train the GLM using group lasso BID31, where the groups are defined by neurons. That is, the GLM is trained through maximum likelihood with an added penalty of the form λ Σ_{n=1}^{N} ||β_{n,·}||_2, where β_{n,·} is the n-th row of β. This penalty makes the coefficients within each group hit zero simultaneously, and a neuron n is removed if ||β_{n,·}|| = 0. We choose the regularization parameter λ so that all but 20 neurons are removed. This provides a principled way of reducing the number of neurons while making sure that the neurons we keep are informative. Since PfLDS does not require one LSTM per neuron, it can be run on the data without removing neurons; doing so increased the performance of PfLDS only marginally, and our model still heavily outperformed it. For the pedaling dataset, we once again keep only successful trials (i.e. trials in which the primate pedals in the correct direction and at the correct speed) and reduce the total number of neurons from N = 256 to 20 using group lasso. Since each trial has a different length, we extend every trial to the length of the longest trial and add no spikes during these extended time periods. We also tried running PfLDS with all the neurons and, as with the reaching data, saw only a very marginal improvement.
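The group-lasso selection described above hinges on a block soft-thresholding (proximal) step that drives entire rows of β to zero at once. The sketch below shows that step in isolation on a random stand-in for β; in the full procedure it would alternate with gradient steps on the multinomial GLM likelihood, and the threshold (here an arbitrary value tau) would be tuned so that exactly 20 neurons survive.

```python
import numpy as np

def group_soft_threshold(beta, tau):
    """Proximal operator of tau * sum_n ||beta[n, :]||_2:
    shrinks each row (one group per neuron) and sets it exactly to zero
    once its norm falls below tau."""
    norms = np.linalg.norm(beta, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * beta

rng = np.random.default_rng(0)
# Stand-in for the GLM coefficients: N = 223 neurons by C = 40 trial types.
beta = rng.normal(size=(223, 40)) * rng.uniform(0.0, 1.0, size=(223, 1))
shrunk = group_soft_threshold(beta, tau=4.0)
kept = np.flatnonzero(np.linalg.norm(shrunk, axis=1) > 0)
print(f"neurons kept: {kept.size} of {beta.shape[0]}")
```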
We combine splines with neural networks to obtain a novel distribution over functions and use it to model intensity functions of point processes.
1,743
scitldr
The recent development of Natural Language Processing (NLP) has achieved great success using large pre-trained models with hundreds of millions of parameters. However, these models suffer from the heavy model size and high latency such that we cannot directly deploy them to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like BERT, MobileBERT is task-agnostic; that is, it can be universally applied to various downstream NLP tasks via fine-tuning. MobileBERT is a slimmed version of BERT-LARGE augmented with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT-LARGE teacher to it. Empirical studies show that MobileBERT is 4.3x smaller and 4.0x faster than original BERT-BASE while achieving competitive on well-known NLP benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves 0.6 GLUE score performance degradation, and 367 ms latency on a Pixel 3 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a 90.0/79.2 dev F1 score, which is 1.5/2.1 higher than BERT-BASE. The NLP community has witnessed a revolution of pre-training self-supervised models. These models usually have hundreds of millions of parameters. They are trained on huge unannotated corpus and then fine-tuned for different small-data tasks (; ; ; ;). Among these models, BERT , which stands for Bidirectional Encoder Representations from Transformers , shows substantial accuracy improvements compared to training from scratch using annotated data only. However, as one of the largest models ever in NLP, BERT suffers from the heavy model size and high latency, making it impractical for resource-limited mobile devices to deploy the power of BERT in mobile-based machine translation, dialogue modeling, and the like. There have been some works that task-specifically distill BERT into compact models (; ; ;). To the best of our knowledge, there is not yet any work for building a task-agnostic lightweight pre-trained model, that is, a model that can be fine-tuned on downstream NLP tasks just like what the original BERT does. In this paper, we propose MobileBERT to fill this gap. In practice, task-agnostic compression of BERT is desirable. Task-specific compression needs to first fine-tune the original large BERT model into task-specific teachers and then distill. Such a process is way more complicated and costly than directly fine-tuning a task-agnostic compact model. At first glance, it may seem straightforward to obtain a task-agnostic compact version of BERT. For example, one may just take a narrower or shallower architecture of BERT, and then train it with a prediction loss together with a distillation loss . Unfortunately, empirical show that such a straightforward approach in significant accuracy loss . This may not be that surprising. It aligns with a well-known observation that shallow networks usually do not have enough representation power while narrow and deep networks are difficult to train. Our MobileBERT is designed to be as deep as BERT LARGE while each layer is made much narrower via adopting bottleneck structures and balancing between self-attentions and feed- MobileBERT is trained by progressively transferring knowledge from IB-BERT. forward networks (Figure 1). 
To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT LARGE (IB-BERT) teacher to it. As a pre-trained NLP model, MobileBERT is both storage efficient (w.r.t model size) and computationally efficient (w.r.t latency) for mobile and resource-constrained environments. Experimental on several NLP tasks show that while being 4.3× smaller and 4.0× faster, MobileBERT can still achieve competitive compared to BERT BASE. On the natural language inference tasks of GLUE, MobileBERT can have only 0.6 GLUE score performance degradation with 367 ms latency on a Pixel 3 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT obtains 90.3/80.2 dev F1 score which is 1.5/2.1 higher than BERT BASE. 2 RELATED WORK 2.1 BERT BERT takes the embedding of source tokens as input. Each building block of BERT contains one Multi-Head self-Attention (MHA) module and one Feed-Forward Network (FFN) module, which are connected by skip connections. The MHA module allows the model to jointly attend to information from different subspaces, while the position-wise FFN consists of a two-layer linear transformation with gelu activation , which increase the representational power of the model. Figure 1 (a) illustrates the original BERT architecture. In the pre-training stage, BERT is required to predict the masked tokens in sentences (mask language modeling task), as well as whether one sentence is the next sentence of the other (next sentence prediction task). In the fine-tuning stage, BERT is further trained on task-specific annotated data. Exploiting knowledge transfer to compress model size was first proposed by. The idea was then adopted in knowledge distillation , which requires the smaller student network to mimic the class distribution output of the larger teacher network. Fitnets make the student mimic the intermediate hidden layers of the teacher to train narrow and deep networks. show that the knowledge of the teacher can also be obtained from the neurons in the top hidden layer. Similar to our proposed progressive knowledge transfer scheme, proposed a sequential knowledge transfer scheme to distill knowledge from a deep teacher into a shallow student in a sequential way. proposed to transfer the attention maps of the teacher on images. proposed to transfer the similarity of hidden states and word alignment from an autoregressive Transformer teacher to a non-autoregressive student. Recently, knowledge transfer for BERT has attracted much attention. Researchers have distilled BERT into smaller pre-trained BERT models , an extremely small bi-directional , and smaller models on sequence labeling tasks . distill BERT into shallower students through knowledge distillation and an additional knowledge transfer of hidden states on multiple intermediate layers. In contrast to these works, we only use knowledge transfer in the pre-training stage and do not require a fine-tuned teacher for task-specific knowledge in the down-stream tasks. Moreover, compared to patient knowledge distillation which transfers knowledge for all intermediate layers simultaneously to alleviate over-fitting in down-stream task fine-tuning, we design a novel progressive knowledge transfer which eases the pre-training of our compact MobileBERT. The pre-training of BERT is challenging. This problem becomes more severe when we pre-train a compact BERT model from scratch . 
To tackle this problem, we propose a bottom-to-top progressive knowledge transfer scheme. Specifically, we first train a wider teacher network that is easier to optimize, and then progressively train the student network from bottom to top, requiring it to mimic the teacher network layer by layer. In our algorithm, the student and the teacher can be any multi-head attention encoder such as Transformer , BERT or XLNet. We take BERT as an example in the following description. The progressive knowledge transfer is divided into L stages, where L is the number of layers. Figure 2 illustrates the diagram and algorithm of progressive knowledge transfer. The idea of progressive transfer is that when training the (+1) th layer of the student, the th layer is already well-optimized. As there are no soft target distributions that can be used for the intermediate states of BERT, we propose the following two knowledge transfer objectives, i.e., feature map transfer and attention transfer, to train the student network. Particularly, we assume that the teacher and the student have the same 1) feature map size, 2) the number of layers, and 3) the number of attention heads. Since each layer in BERT merely takes the output of the previous layer as input, the most important thing in progressively training the student network is that the feature maps of each layer should be as close as possible to those of the teacher, i.e., well-optimized. In particular, the mean squared error between the normalized feature maps of the student and the teacher is used as the objective: where is the index of layers, T is the sequence length, and N is the feature map size. The layer normalization is added to stabilize the layer-wise training loss. We also minimize two statistics discrepancies on mean and variance in feature map transfer: where µ and σ 2 represents mean and variance, respectively. Our empirical studies show that minimizing the statistics discrepancy is helpful when layer normalization is removed from BERT to reduce inference latency (see more discussions in Section 4.3). The attention mechanism greatly boosts the performance of NLP and becomes a crucial building block in Transformer and BERT. Many papers words. This motivates us to use self-attention maps from the well-optimized teacher to help the training of the student in augmentation to the feature map transfer. In particular, we minimize the KL-divergence between the per-head self-attention distributions of the teacher and the student: where A is the number of attention heads. Our final progressive knowledge transfer loss L P KT for the th stage is a linear combination of the objectives stated above. As shown in the right panel of Figure 2, we progressively train each layer of the student by minimizing the knowledge transfer loss. In other words, when we train the th layer, we freeze all the trainable parameters in the layers below. We can somewhat soften the training process as follows. When training a layer, we further tune the lower layers with a small learning rate rather than entirely freezing them. Freezing the lower layers can be regarded as a special case of this softened process with the learning rate being zero. There is no knowledge transfer for the beginning embedding layer and the final classifier. They are are the same for the student and teacher. After the progressive knowledge transfer, we further pre-train MobileBERT until convergence. 
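A small numpy sketch of the two transfer objectives just described — the mean squared error between layer-normalised feature maps and the per-head KL divergence between attention distributions — using random tensors as stand-ins for one layer's teacher and student activations. The sequence length, feature-map size, and head count below are illustrative, and the mean/variance discrepancy terms and loss weights are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, A = 128, 512, 4          # sequence length, feature map size, attention heads

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

# Random stand-ins for the feature maps and attention logits of one layer.
teacher_fm, student_fm = rng.normal(size=(T, N)), rng.normal(size=(T, N))
teacher_logits, student_logits = rng.normal(size=(A, T, T)), rng.normal(size=(A, T, T))

# Feature map transfer: MSE between the normalised feature maps.
l_fmt = np.mean((layer_norm(teacher_fm) - layer_norm(student_fm)) ** 2)

# Attention transfer: mean KL divergence between per-head attention distributions.
p, q = softmax(teacher_logits), softmax(student_logits)
l_at = np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))

print(f"feature-map transfer loss: {l_fmt:.3f}   attention transfer loss: {l_at:.3f}")
```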
We use a linear combination of the original masked language modeling (MLM) loss, next sentence prediction (NSP) loss, and the new knowledge distillation loss as our pre-training distillation loss: where [N] is the set of masked tokens, P tr (i) and P st (i) are two predicted distributions respectively from the teacher and student model on the masked tokens, and α is a hyperparameter in. We do not perform knowledge distillation on the next sentence prediction (NSP) task as it has been shown to be unimportant. In this section, we present the MobileBERT architecture and the underlining design principle, i.e., how to exploit the benefits of the proposed progressive knowledge transfer. MobileBERT is a much slimmed version of BERT LARGE. As illustrated in Figure 1 (c), to align its feature maps with the teacher's, it is augmented with the bottleneck modules , which have additional shortcut connections outside the original non-linear modules. Through the bottleneck modules, MobileBERT can increase the dimension of its block outputs by a linear transformation, while decreasing the dimension of its block inputs by another linear transformation. So the intra-block hidden size (hidden size of the original non-linear modules) stays unchanged. Symmetrically, to align with the student's feature maps, we can also place the inverted bottleneck modules in the BERT LARGE teacher (Figure 1b). We refer this variant of BERT LARGE as IB-BERT. Through the inverted bottleneck modules, we can effectively reduce the feature map size of the teacher without losing its representational power. We may either only use bottleneck for the student or only the inverted bottleneck for the teacher to align their feature maps. However, when using both of them, we have a chance to search for a better feature map size for the teacher and student to obtain a more compact student model while not hurting the performance of the teacher. A problem introduced by the bottleneck structure of MobileBERT is that the balance between selfattentions and feed-forward networks is broken. In original BERT, the ratio of the parameter numbers in self-attentions and feed-forward networks is always 1:2. But in the bottleneck structure, the inputs to the self-attentions are from wider feature maps (of inter-block size), while the inputs to the feed-forward networks are from narrower bottlenecks (of intra-block size). This in that the self-attentions in MobileBERT will contain more parameters than normally. Therefore, we propose to use stacked feed-forward networks in MobileBERT to re-balance it. As illustrated in 1(c), each MobileBERT layer contains one self-attention but several stacked feed-forward networks. By model latency analysis 1, we find that layer normalization and gelu activation accounted for a considerable proportion of total latency. Therefore, we replace them with new operations in our MobileBERT. Remove layer normalization We replace the layer normalization of a n-channel hidden state h with an element-wise linear transformation: where γ, β ∈ R n and • denotes the Hadamard product. Please note that NoNorm has different properties from LayerNorm even in test mode since the original layer normalization is not a linear operation for a batch of vectors. Use relu activation We replace the gelu activation with simpler relu activation. We conduct extensive experiments to search good model settings for the IB-BERT teacher and the MobileBERT student. 
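Before turning to those experiments, note that the NoNorm replacement defined above is a one-line operation; the sketch below contrasts it with standard layer normalization on a random batch of hidden states (γ and β are the learnable per-channel parameters, here simply set to ones and zeros).

```python
import numpy as np

def layer_norm(h, gamma, beta, eps=1e-6):
    # Standard layer normalization: normalise each vector, then scale and shift.
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    return gamma * (h - mu) / np.sqrt(var + eps) + beta

def no_norm(h, gamma, beta):
    # NoNorm: purely element-wise linear transformation, no per-example statistics.
    return gamma * h + beta

rng = np.random.default_rng(0)
n = 512
h = rng.normal(size=(8, n))                   # a batch of hidden states
gamma, beta = np.ones(n), np.zeros(n)
print(np.allclose(layer_norm(h, gamma, beta), no_norm(h, gamma, beta)))  # False in general
```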
We replace the original embedding table by a 3-convolution from a smaller embedding table with embedding size 128 to keep the number of embedding parameters in different model settings the same. We start with SQuAD v1.1 dev F1 score as the metric to measure the performance of different model settings. Since BERT pre-training is time and resource consuming, in the architecture search stage, we only train each model for 125k steps with 2048 batch size, which halves the training schedule of original BERT . Architecture Search of the Teacher As shrinking the inter-block size can effectively compress the model while maintaining its representational power , our design philosophy for the teacher model is to use as small inter-block hidden size (feature map size) as possible as long as there is no accuracy loss. Under this guideline, we design experiments to manipulate the inter-block size of a BERT LARGE -sized IB-BERT, and the are shown in the left panel of Table 1 with labels (a)-(e). As can be seen, decreasing the inter-block hidden size doesn't damage the performance of BERT until the inter-block size is smaller than 512. As a , we choose the IB-BERT LARGE with its inter-block hidden size being 512 as the teacher model. One may wonder whether we can also shrink the intra-block hidden size of the teacher, as this may bridge the gap between the student and teacher . We conduct experiments and the are shown in the left panel of Table 1 with labels (f)-(i). We can see that when the intra-block hidden size is reduced, the model performance is dramatically worse. This means that the intra-block hidden size, which represents the representation power of non-linear modules, plays a crucial role in BERT. Therefore, unlike the inter-block hidden size, we do not shrink the intrablock hidden size of our teacher model. Besides, by comparing (a) and (f) in Table 1, we can see that reducing the number of heads from 16 to 4 does not harm the performance of BERT. This is in line with the observation in the recent literature . Architecture Search of the Student We seek a compression ratio of 4× for BERT BASE, so we design a set of MobileBERT models all with approximately 25M parameters but different ratios of the parameter numbers in MHA and FFN to select a good student model. The right part of Table 1 shows our experimental . They have different balances between self-attentions and feedforward networks. From the table, we can see that the model performance reaches the peak when the ratio of parameters in MHA and FFN is 0.4 ∼ 0.6. This may justify why the original Transformer chooses the parameter ratio of self-attention and feed-forward networks to 0.5. We choose the architecture with 128 intra-block hidden size and 4 stacked FFNs as the student model in consideration of model accuracy and training efficiency. We also accordingly set the number of attention heads in the teacher model to 4 in preparation for the progressive knowledge transfer. Table 2 demonstrates the model settings of our IB-BERT LARGE teacher and MobileBERT student. Following BERT , we use the BooksCorpus and English Wikipedia as our pre-training data. To make the IB-BERT LARGE teacher reach the same accuracy as original BERT LARGE, we train IB-BERT LARGE on 256 TPU v3 chips for 500k steps with a batch size of 4096 and LAMB optimizer . For MobileBERT, we also use the same training schedule. Besides, progressive knowledge transfer of MobileBERT over 24 layers takes 240k steps, so that each layer of MobileBERT is trained for 10k steps. 
For the downstream tasks, all reported are obtained by simply fine-tuning MobileBERT just like what the original BERT does. To fine-tune the pre-trained models, we search the optimization hyperparameters in a search space including different batch sizes (16/32/48), learning rates ( * e-5), and the number of epochs. The search space is different from the original BERT because we find that MobileBERT usually needs a larger learning rate and more training epochs in fine-tuning. We select the model for testing according to their performance on the development (dev) set. The General Language Understanding Evaluation (GLUE) benchmark is a collection of 9 natural language understanding tasks. We briefly describe these tasks in Appendix F. Following BERT , we use the final hidden vector corresponding to the first input token as model output, and introduced a new linear classification layer for the final predictions. We submit the predictions of MobileBERT and MobileBERT without operational optimizations to the online test evaluation system 2 of GLUE to get the test . We compare MobileBERT with BERT BASE and a few other state-of-the-art pre-BERT models on the GLUE leaderboard: OpenAI GPT and ELMo . We also compare with a recent work on compressing BERT: BERT-PKD . The are listed in Table 3. 3 We can see that our MobileBERT is quite competitive with the original BERT BASE. It outperforms BERT BASE a bit on QNLI and RTE tasks, while the overall GLUE score performance gap is only 0.6. Moreover, It outperform the strong OpenAI GPT baseline by 0.8 GLUE score with 4.3× smaller model size. We also find that the introduced operational optimizations hurt the model performance a bit. Without these optimizations, MobileBERT can even outperform BERT BASE by 0.2 GLUE score. SQuAD is a large-scale reading comprehension datasets. SQuAD1.1 only contains questions that always have an answer in the given context, while SQuAD2.0 contains unanswerable questions. Following BERT , we treat questions that do not have an answer as having an answer span with start and end at the sentence classification token to fine-tune a MobileBERT on SQuAD2.0. We evaluate MobileBERT only on the SQuAD dev datasets, as there is nearly no single model submission on SQuAD test leaderboard 4. We compare our MobileBERT with BERT BASE and a strong baseline DocQA . As shown in Table 4, MobileBERT outperforms a large margin over BERT BASE and DocQA. We notice that MobileBERT also outperforms BERT BASE on QNLI, a question-answering GLUE task. This may be due to that since we search the model settings on SQuAD, MobileBERT may be over-fitted to question answering tasks. We perform an ablation study to investigate how each component of MobileBERT contributes to its performance on the dev data of a few GLUE tasks with diverse characteristics. To accelerate the experiment process, we halve the original pre-training schedule in the ablation study. We conduct a set of ablation experiments with regard to Attention Transfer (AT), Feature Map Transfer (FMT) and Pre-training Distillation (PD). The operational OPTimizations (OPT) are removed in these experiments. Moreover, to investigate the effectiveness of the proposed novel architecture of MobileBERT, we compare MobileBERT with two compact BERT models from. For a fair comparison, we also design our own BERT baseline BERT SMALL*, which is the best model setting we can find with roughly 25M parameters under the original BERT architecture. The detailed model setting of BERT SMALL* can be found in Table 2. 
Besides these experiments, to verify the performance of MobileBERT on real-world mobile devices, we export the models with Tensorflow Lite 5 APIs and measure the inference latencies on a single large core of a Pixel 3 phone with a fixed sequence length of 128. The are listed in Table 5. We first can see that the propose Feature Map Transfer contributes most to the performance improvement of MobileBERT, while Attention Transfer and Pre-training Distillation also play positive roles. As expected, the proposed operational OPTimizations hurt the model performance a bit, but it brings a crucial speedup of 1.68×. In architecture comparison, we find that although specifically designed for progressive knowledge transfer, our MobileBERT architecture alone is still quite competitive. It outperforms BERT SMALL * and BERT SMALL on all compared tasks, while outperforming the 1.7× sized BERT MEDIUM on the SST-2 task. Finally, we can L1 H1 L1 H2 L1 H3 L1 H4 L12 H1 L12 H2 L12 H3 L12 H4 MobileBERT (find that although augmented with the powerful progressive knowledge transfer, our MobileBERT still degrades greatly when compared to the IB-BERT LARGE teacher. We visualize the attention distributions of the 1 st and the 12 th layers of a few models in Figure 3 for further investigation. The proposed attention transfer can help the student mimic the attention distributions of the teacher very well. Surprisingly, we find that the attention distributions in the attention heads of "MobileBERT(bare)+PD+FMT" are exactly a re-order of those of "Mobile-BERT(bare)+PD+FMT+AT" (also the teacher model), even if it has not been trained by the attention transfer objective. This phenomenon indicates that multi-head attention is a crucial and unique part of the non-linearity of BERT. Moreover, it can explain the minor improvements of Attention Transfer in ablation table 5, since the alignment of feature maps lead to the alignment of attention distributions. We have presented MobileBERT which is a task-agnostic compact variant of BERT. It is built upon a progressive knowledge transfer method and a conjugate architecture design. Standard model compression techniques including quantization and pruning can be applied to MobileBERT to further reduce the model size as well as the inference latency. In addition, although we have utilized low-rank decomposition for the embedding layer, it still accounts for a large part in the final model. We believe there is a big room for extremely compressing the embedding table . Layer-wise pre-training of neural networks can be dated back to Deep Belief Networks (DBN) and stacked auto-encoders . showed that the unsupervised pre-training of DBN helps to mitigate the difficult optimization problem of deep networks by better initializing the weights of all layers. Although they made essential breakthrough in the application of neural networks, they are widely considered to be obsolete. A more popular way today is to train deep neural networks in an end-to-end fashion. recently showed that end-to-end learning can sometimes be very inefficient. In this paper, we propose a progressive knowledge transfer scheme to combine the best of both worlds. Compared to previous layer-wise methods, we use a well-optimized wider teacher to guide the layer-wise pre-training of the narrower student, rather than a greedy layer-wise unsupervised way, which makes better use of labels and rewards. Our method also tackle the difficult training problem of end-to-end training from scratch. 
While much recent research has focused on improving efficient Convolutional Neural Networks (CNN) for mobile vision applications (; ; ; ;, they are usually tailored for CNN. Popular lightweight operations such as depth-wise convolution cannot be directly applied to Transformer or BERT. In the NLP literature, the most relevant work can be group LSTMs , which employs the idea of group convolution (; into Recurrent Neural Networks (RNN). Recently, compressing or accelerating Transformer or BERT has attracted much attention. apply Block-Term Tensor Decomposition on the self-attention modules of Transformer and achieve a compression of 2.5 on the machine translation task, but they don't consider how to compress the feed-forward networks, which constrains the compression ratio. use structured memory layers to replace feed-forward networks in BERT and get better perplexity by half the computation, but they cannot compress the model size. Compared to these work, Mobile-BERT reduces overheads in both self-attentions and feed-forward networks of BERT by bottleneck structure, while achieves efficiency with regard to both storage and computation. We evaluate the effectiveness of our two operational optimizations for MobileBERT introduced in Section 4.3: replacing layer normalization (LayerNorm) with NoNorm and replacing gelu activation with relu activation. We use the same experimental setting as in Section 5.5, where the models are exported to Tensorflow Lite format and evaluated on a single large core of a Pixel 3 phone with a fixed sequence length of 128. From Table 6, we can see that both NoNorm and relu are very effective in reducing the latency of MobileBERT, even if these two operational optimizations do not reduce FLOPS. This reveals the gap between the real-world inference latency and the theoretical computation overhead (i.e., FLOPS). (λ, µ, β, γ) are hyperparameters to balance the different loss terms. Specifically, we use λ = 1, µ = 100, β = 5000, γ = 5 in our all experiments. Pre-train MobileBERT For a fair comparison with original BERT, we follow the same preprocessing scheme as BERT, where we mask 15% of all WordPiece tokens in each sequence at random and use next sentence prediction. Please note that MobileBERT can be potentially further improved by several training techniques recently introduced, such as span prediction or removing next sentence prediction objective. We leave it for future work. In pre-training distillation, the hyperparameter α is used to balance the original masked language modeling loss and the distillation loss. Following , we set α to 0.5. We notice that recently there is an unpublished work 6 that also propose a task-agnosticly compressed BERT, called DistilBERT. Basically, DistilBERT is a 6-layer truncated BERT BASE, which is distilled from BERT BASE on unannotated data with masked language modeling target. The distillation process of DistilBERT is quite similar to the pre-training distillation described in Section 3.4. In comparison, in this paper, we propose a pair of conjugate architectures to help knowledge transfer and design a progressive knowledge transfer scheme which transfers the intrinsic knowledge of intermediate layers from the teacher to the student in a bottom-to-top progressive way. In this section, we provide a brief description of the tasks in the GLUE benchmark . CoLA The Corpus of Linguistic Acceptability is a collection of English acceptability judgments drawn from books and journal articles on linguistic theory. 
The task is to predict whether an example is a grammatical English sentence and is evaluated by Matthews correlation coefficient . SST-2 The Stanford Sentiment Treebank is a collection of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence and is evaluated by accuracy. MRPC The Microsoft Research Paraphrase Corpus is a collection of sentence pairs automatically extracted from online news sources. They are labeled by human annotations for whether the sentences in the pair are semantically equivalent. The performance is evaluated by both accuracy and F1 score. The Semantic Textual Similarity Benchmark is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5. The task is to predict these scores and is evaluated by Pearson and Spearman correlation coefficients.
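Of the metrics listed here, the Matthews correlation coefficient is the least standard; a small self-contained sketch (with made-up labels) of how it is computed from the confusion counts:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(f"MCC = {matthews_corrcoef(y_true, y_pred):.3f}")
```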
We develop a task-agnostically compressed BERT, which is 4.3x smaller and 4.0x faster than BERT-BASE while achieving competitive performance on GLUE and SQuAD.
1,744
scitldr
The importance weighted autoencoder (IWAE) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over $K > 1$ Monte Carlo samples. Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as $K$ is increased . This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (which yields the 'sticking-the-landing' IWAE (IWAE-STL) gradient from ) or through an identity from (which yields the 'doubly-reparametrised' IWAE (IWAE-DREG) gradient). In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep (RWS) algorithm from is preferable to optimising IWAE-type multi-sample objectives. To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm. We then show that AISLE admits IWAE-STL and IWAE-DREG (i.e. the IWAE-gradients which avoid breakdown) as special cases. Let x be some observation and let z be some latent variable taking values in some space Z. These are modeled via the generative model p θ (z, x) = p θ (z)p θ (x|z) which gives rise to the marginal likelihood p θ (x) = Z p θ (z, x) dz of the model parameters θ. In this work, we analyse algorithms for variational inference, i.e. algorithms which aim to 1. learn the generative model, i.e. find a value θ which is approximately equal to the maximum-likelihood estimate (MLE) θ ml:= arg max θ p θ (x); 2. construct a tractable variational approximation q φ,x (z) of p θ (z|x) = p θ (z, x)/p θ (x), i.e. find the value φ such that q φ,x (z) is as close as possible to p θ (z|x) in some suitable sense. A few comments about this setting are in order. Firstly, as is common in the literature, we restrict our presentation to a single latent representation-observation pair (z, x) to avoid notational clutter -the extension to multiple independent observations is straightforward. Secondly, we assume that no parameters are shared between the generative model p θ (z, x) and the variational approximation q φ,x (z). This is common in neural-network applications but could be relaxed. Thirdly, our setting is general enough to cover amortised inference. For this reason, we often refer to φ as the parameters of an inference network. Two main classes of stochastic gradient-ascent algorithms for optimising ψ:= (θ, φ) which employ K ≥ 1 Monte Carlo samples ('particles') to reduce errors have been proposed. heuristically drops the problematic score-function terms from the IWAE φ-gradient. This induces bias for the IWAE objective. -IWAE-DREG. The'doubly-reparametrised' IWAE (IWAE-DREG) φ-gradient unbiasedly removes the problematic score-function terms from the IWAE φ-gradient using a formal identity. • RWS. The reweighted wake-sleep (RWS) algorithm optimises two separate objectives for θ and φ. Its gradients are approximated by self-normalised importance sampling with K particles: this induces a bias which vanishes as K → ∞. RWS can be viewed as an adaptive importance-sampling approach which iteratively improves its proposal distribution while simultaneously optimising θ via stochastic approximation. 
Crucially, the RWS φ-gradients do not degenerate as K → ∞. Of these two methods, the IWAE is the most popular and demonstrated empirically that RWS can break down, conjecturing that this is due to the fact that RWS does not optimise a joint objective (for θ and φ). Meanwhile, the IWAE-STL gradient performed consistently well despite lacking a firm theoretical footing. Yet, IWAE suffers from the above-mentioned φ-gradient breakdown and exhibited inferior empirical performance to RWS . Thus, it is not clear whether the multi-sample objective approach of IWAE or the adaptive importance-sampling approach of RWS is preferable. In this work, we show that directly optimising the proposal distribution, e.g. as done by RWS, is preferable to optimising the IWAE multi-sample objective because (a) the multi-sample objective typically relies on reparametrisations and, even if these are available, leads to the φ-gradient breakdown, (b) modifications of the IWAE φ-gradient which avoid this breakdown (i.e. IWAE-STL and IWAE-DREG) can be justified in a more principled manner by taking an RWS-type adaptive importance-sampling view. This was already reached by based on numerical experiments. They demonstrated that the need for reparametrisations can make IWAE inferrior to RWS e.g. for discrete latent variables. Our work complements theirs by formalising this argument. To this end, we slightly generalise the RWS algorithm to obtain a generic adaptive importance-sampling framework for variational inference which we term adaptive importance sampling for learning (AISLE) for ease of reference. We then show that AISLE admits not only RWS but also the IWAE-DREG and IWAE-STL gradients as special cases. Novel material is presented in Section 3, where we introduce the AISLEframework. From this, most of the previously proposed gradient estimators can be naturally derived in a principled manner. Importantly, the derived gradient estimators are guaranteed to not degenerate as K → ∞. Specifically, we establish the following connections. • We prove that the IWAE-STL gradient can be recovered as a special case of AISLE via a principled and novel application of the'double-reparametrisation' identity from. This indicates that the breakdown of RWS observed in may not be due to its lack of a joint objective as previously conjectured (since IWAE-STL avoided this breakdown despite having the same idealised objective as RWS). Our work also provides a theoretical foundation for IWAE-STL which was hitherto only heuristically justified as a biased IWAE-gradient. • We prove that AISLE also admits the IWAE-DREG gradient as a special case. Our derivation also makes it clear that the learning rate should be scaled as O(K) for the IWAE φ-gradient (and its modified version IWAE-DREG) unless the gradients are normalised as implicitly done by popular optimisers such as ADAM . In contrast, the learning rate for AISLE need not be scaled up with of K. • When applied to the family of α-divergences, AISLE leads to a new family of gradient estimators that generalises some previously derived in the literature. • In the supplementary materials, we provide insights into the impact of the selfnormalisation bias on some of the importance-sampling based gradient approxima-tions (Appendix A) and empirically compare the main algorithms discussed in this work (Appendix B). We stress that the focus of our work is not necessarily to derive new algorithms nor to establish which of the various special cases of AISLE is preferable. 
Indeed, while we compare all algorithms discussed in this work empirically on Gaussian models in the supplementary materials, we refer the reader to; for an extensive empirical comparisons of all the algorithms discussed in this work. Notation. We repeatedly employ the shorthand p(f): To keep the notation concise, we hereafter suppress dependence on the observation x, i.e. we write q φ (z):= q φ,x (z) as well as where γ θ (z):= p θ (z, x) and where 2 Background The expectation q φ (f) of a test function f: Z → R can be unbiasedly estimated by the φ, which are independent and identically distributed (IID) according to q φ. Similarly, expectations of the type π θ (f) can be approximated by the self-normalised importance sampling estimatê The notation φ, z stresses the dependence of this estimator on φ and z. The quantity w ψ (z k) are called the kth importance weight and s w k ψ is its self-normalised version. For readability, we have dropped the dependence of s w k ψ on z ∈ Z K from the notation. Remark 2. The self-normalised estimateπ θ φ, z (f) is typically not unbiased. Under mild assumptions (e.g. if sup w ψ < ∞), its bias vanishes at rate O(K −1), its standard deviation vanishes at Monte-Carlo rate Objective. The importance weighted autoencoder (IWAE), introduced by , seeks to find a value θ of the generative-model parameters θ which maximises a lower bound L K ψ on the log-marginal likelihood ('evidence'). This bound depends on the inference-network parameters φ and the number of samples, K ≥ 1: where the expectation is w.r.t. z ∼ q ⊗K φ. For any finite K, optimisation of the inferencenetwork parameters φ tightens the evidence bound. prove that for any φ we have that L K ψ ↑ log Z θ as K → ∞. If K = 1, the IWAE reduces to the variational autoencoder (VAE) from. However, for K > 1, as pointed out in; , the IWAE also constitutes another VAE on an extended space based on an auxiliary-variable construction developed in;; Lee (see, e.g. , for a review). The gradient of the IWAE objective from: The intractable quantity E G ψ (z) can be approximated unbiasedly via a vanilla Monte Carlo approach using a single (Unfortunately, this approximation typically has such a large variance that it becomes impracticably noisy . To remove this high-variance term, the well known reparametrisation trick is usually employed. It requires the following assumption. (R1) There exists a distribution q on some space E and a family of differentiable mappings, the gradient can be expressed as Here, the notation ψ indicates that one does not differentiate w ψ w.r.t. ψ. The IWAE then uses a vanilla Monte Carlo estimate of, Before proceeding, we state the following lemma, proved in Tucker et al. (2019, Section 8.1), which generalises of the well-known identity q φ (∇ φ log q φ) = 0. Lemma 1 . Under R1, for suitably integrable f ψ: Z → R, we have We now exclusively focus on the φ-portion of the IWAE gradient, ∇ iwae φ θ, z. Remark 3 (drawbacks of the IWAE φ-gradient). The gradient ∇ iwae φ θ, z has three drawbacks. The last two of these are attributable to the'score-function' terms ∇ φ log q φ (z) in the φ-gradient portion of. • Reliance on reparametrisations. A reparametrisation à la R1 is necessary to remove the high-variance term G ψ (z). For, e.g. discrete, models that violate R1, control-variate approaches or continuous relaxations have been proposed but these incur additional implementation, tuning and computation costs whilst not necessarily reducing the variance . • Vanishing signal-to-noise ratio. 
The φ-gradient breaks down in the sense that its signal-to-noise ratio vanishes as ). This is because ∇ iwae φ θ, z constitutes a self-normalised importance-sampling approximation of π θ (ψ − ∇ φ log q φ) = 0, an identity which directly follows from Lemma 1 with f ψ = w ψ. • Inability to achieve zero variance. As pointed out in , Two modifications of ∇ iwae φ θ, z have been proposed which (under R1) avoid the scorefunction terms in and hence (a) exhibit a stable signal-to-noise ratio as K → ∞ and (b) can achieve zero variance if q φ = π θ (because then ψ ≡ 0 since w ψ is constant). • IWAE-STL. The'sticking-the-landing' IWAE (IWAE-STL) gradient proposed by heuristically ignores the score function terms, As shown in ), this introduces an additional bias whenever K > 1. • IWAE-DREG. The'doubly-reparametrised' IWAE (IWAE-DREG) gradient proposed by removes the score-function terms through Lemma 1, The quantities ∇ iwae-dreg φ θ, z and ∇ iwae φ φ, z are equal in expectation. The reweighted wake-sleep (RWS) algorithm was proposed in. The θ-and φ-gradients read These quantities are usually intractable and therefore approximated by replacing π θ by the self-normalised importance sampling approximationπ θ φ, z (this does not require R1): Since relies on self-normalised importance sampling, Remark 2 shows that its bias relative to is of order O(1/K). Appendix A discusses the impact of this bias on the φ-gradient in more detail. The optimisation of both θ and φ is carried out simultaneously, allowing both gradients to share the same particles and weights. Nonetheless, the lack of a joint objective (for both θ and φ) is often viewed as the main drawback of RWS. rws φ θ, z in expectation and is derived by applying Lemma 1 to the latter. It reads where the function F(w):= w(1 − w) is used to transform the self-normalised importance weights s w k ψ. In high-dimensional settings, it is typically the case that the ordered selfnormalised importance weights s w are then mainly supported on the two particles with the largest self-normalised weights. and φ simultaneously is that (a) Monte Carlo samples used to approximate the θ-gradient can be re-used to approximate the φ-gradient and (b) optimising φ typically reduces the error (both in terms of bias and variance) of the θ-gradient approximation. However, adapting the proposal distribution q φ in importance-sampling schemes need not necessarily be based on minimising the (inclusive) KL-divergence. Numerous other techniques exist in the literature (e.g. ; ; ; ;) and may sometimes be preferable. Indeed, another popular approach with strong theoretical support is based on minimising the χ 2 -divergence (see, e.g., Deniz Akyildiz & Míguez, 2019). Based on this insight, we slightly generalise the RWS-objective as θ:= arg max θ log Z θ, φ:= arg min φ Dƒ(πθ q φ). Here, Dƒ(p q):= Z ƒ(p(z)/q(z))q(z) dz is some ƒ-divergence from p to q. We reiterate that alternative approaches for optimising φ (which do not minimise ƒ-divergences) could be used. However, we state for concreteness as it suffices for the remainder of this work; we call the ing algorithm adaptive importance sampling for learning (AISLE). As will become clear below, this unified framework permits a straightforward and principled derivation of robust φ-gradient estimators that do not degenerate as K → ∞. Optimisation is again performed via a stochastic gradient-ascent. 
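Before the individual gradient estimators are examined, a toy numerical illustration of the shared machinery may help. The sketch below uses a conjugate one-dimensional Gaussian model, an assumption made purely for illustration (it is not an example from the paper), because then both log Z_θ and π_θ are available in closed form; it computes the self-normalised weights, a posterior expectation, and one Monte Carlo draw of the K-sample IWAE bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: z ~ N(0, 1), x | z ~ N(z, 1), single observation x,
# so that pi_theta = N(x/2, 1/2) and Z_theta = N(x; 0, 2) in closed form.
x = 1.5
log_Z_exact = -0.25 * x ** 2 - 0.5 * np.log(4.0 * np.pi)

# Proposal q_phi = N(m, s^2), deliberately mismatched with the posterior.
m, s = -0.5, 1.5
K = 1_000
z = m + s * rng.normal(size=K)
log_gamma = -0.5 * z ** 2 - 0.5 * (x - z) ** 2 - np.log(2.0 * np.pi)   # log p_theta(z, x)
log_q = -0.5 * ((z - m) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))
log_w = log_gamma - log_q

# Self-normalised weights and the resulting estimate of a posterior expectation, e.g. pi_theta(z).
w_norm = np.exp(log_w - log_w.max())
w_norm /= w_norm.sum()
print("posterior-mean estimate:", np.sum(w_norm * z), " exact:", x / 2.0)

# One Monte Carlo draw of the K-sample IWAE bound L_K <= log Z_theta.
log_LK = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print("IWAE bound estimate:", log_LK, " log Z:", log_Z_exact)
```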
The intractable θ-gradient The θ-gradient is thus the same for all algorithms discussed in this work although the IWAEparadigm views it as an unbiased gradient of a (biased) lower-bound to the evidence, while AISLE (and RWS) interpret it as a self-normalised importance-sampling (and consequently biased) approximation of the gradient ∇ θ log Z θ for the'exact' objective. In the derivations to follow, integrals of the form π θ ([F • w ψ]∇ φ log q φ ) naturally appear. These can also be expressed as Z Approximating the expectation as well as the normalising constant Z θ on the r.h.s. with the vanilla Monte Carlo method with Remark 2 shows that this approximation has a bias of order O(K −1) and a standarddeviation of order O(K −1/2). Now, most of the ƒ-divergences used for variational inference in intractable models are such that there exists a functionf: for an exponent κ ∈ R and constant C(θ) independent of φ. In other words, for a given value of θ, the optimization of the ƒ-divergence as a function of φ can be carried out without relying on the knowledge of Z θ. Writing g(y):=f (y) −f (y)/y, simple algebra then directly shows that Since the integral in is an expectation with respect to π θ, it can be approximated with selfimportance sampling, possibly multiplied an additional importance-sampling approximation Z θ φ, z of Z θ raised to some power. This leads to, Indeed, Equation applies to, leading to the reparametrised estimator where h(y) = g(y)y and g: R → R given immediately above. We now describe several particular cases. We have KL(In that case, with the notations of Section 3.3.1, we have g(y) = 1 and h (y) = 1. • AISLE-KL-NOREP/RWS. Without relying on any reparametrisation, Equation yields the following gradient, which clearly equals ∇ rws φ θ, z: • AISLE-KL. Using reparametrisation, Equation yields the gradient: We thus arrive at the following which demonstrates that IWAE-STL can be derived in a principled manner from AISLE, i.e. without the need for a multi-sample objective. θ, z. Proposition 1 is notable because it shows that IWAE-STL (which avoids the breakdown highlighted in and which can also achieve zero variance) can be derived in a principled manner from AISLE, i.e. without relying on a multi-sample objective. Proposition 1 thus provides a theoretical basis for IWAE-STL which was previously viewed as an alternative gradient for IWAE for which it is biased and only heuristically justified. Furthermore, the fact that IWAE-STL exhibited good empirical performance in even in an example in which RWS broke down, suggests that this breakdown may not be due to RWS' lack of optimising a joint objective as previously conjectured. by first replacing the exact (but intractable) φ-gradient by the self-normalised importance-sampling approximation ∇ rws φ θ, z and then applying the identity from Lemma 1. Note that this may in a variance reduction but does not change the bias of the gradient estimator. In contrast, AISLE-KL is derived by first applying Lemma 1 to the exact (RWS) φ-gradient and then approximating the ing expression. This can potentially reduce both bias and variance. Up to some irrelevant additive constant, the α-divergence between two distributions p and q is given by Z (p(z)/q(z)) α q(z) dz for some α > 1. This can also be expressed as Z κ θ Zf (w ψ (z))q φ (z) dz with κ = −α andf (y) = y α. In this case, with the notation from Section 3.3.1, we have g(y) = (α − 1)y α−1 and h (y) = α(α − 1) y α−1. 
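On the same toy conjugate Gaussian model (again an illustrative assumption, not an example from the paper), the shared θ-gradient estimator and the AISLE-KL-NOREP/RWS φ-gradient estimator for the proposal mean can both be checked against their exact targets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model: z ~ N(mu, 1), x | z ~ N(z, 1), with theta = mu and one observation x.
mu, x = 0.0, 1.5
post_mean = (mu + x) / 2.0                         # pi_theta = N(0.75, 1/2)

# Proposal q_phi = N(m, s^2); phi = (m, log s).
m, s = 0.0, 1.2
K = 10_000
z = m + s * rng.normal(size=K)

log_w = (-0.5 * (z - mu) ** 2 - 0.5 * (x - z) ** 2) - (-0.5 * ((z - m) / s) ** 2 - np.log(s))
w_norm = np.exp(log_w - log_w.max())
w_norm /= w_norm.sum()

# Shared theta-gradient: self-normalised estimate of pi_theta(d/dmu log gamma_theta) = E_pi[z - mu].
grad_theta_hat = np.sum(w_norm * (z - mu))
print("theta-gradient:", grad_theta_hat, " exact:", (x - mu) / 2.0)

# AISLE-KL-NOREP / RWS phi-gradient for the proposal mean m:
# self-normalised estimate of pi_theta(d/dm log q_phi) = E_pi[(z - m) / s^2].
grad_m_hat = np.sum(w_norm * (z - m) / s ** 2)
print("phi-gradient (mean):", grad_m_hat, " exact:", (post_mean - m) / s ** 2)
```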
Note that the case α = 2 is equivalent, up to an irrelevant additive constant, to a standard χ 2 -divergence. Minimising this divergence is natural in importance sampling since χ 2 (π θ q φ) = var z∼q φ [w ψ /Z θ] is the variance of the importance weights. • AISLE-α-NOREP. Without relying on any reparametrisation, Equation yields with the following special case which is also proportional to the'score gradient' from Dieng et al. (2017, Appendix G): • AISLE-α. Using reparametrisation, Equation becomes again with the special case ∇ This demonstrates that IWAE-DREG can be derived (up to the proportionality factor 2K) in a principled manner from AISLE, i.e. without the need for a multi-sample objective. θ, z. Note that if the implementation normalises the gradients, e.g. as effectively done by ADAM , the constant factor cancels out and AISLE-χ 2 becomes equivalent to IWAE-DREG. Otherwise (e.g. in plain stochastic gradient-ascent) this shows that the learning rate needs to be scaled as O(K) for the IWAE or IWAE-DREG φ-gradients. For the'exclusive' KL-divergence, we have KL(q φ π θ) = f (w ψ (z))q φ (z) dz + C(θ) with f (y) = log(y). In that case, with the notation from Section 3.3.1, we have h (y) = 1/y. This directly leads to the following approximation, This can be recognised as a simple average over K independent replicates of the'stickingthe-landing' estimator for VAEs proposed in Roeder et al. (2017, Equation 8). As we discuss in Appendix A, optimising this'exclusive' KL-divergence can sometimes lead to faster convergence of φ than optimising the'inclusive' KL-divergence KL(π θ q φ). However, care must be taken because minimising the exclusive divergence does not necessarily lead to well behaved or even well-defined importance weights and thus can negatively affect learning of θ (whose gradient is a self-normalised importance-sampling approximation which makes use of those weights). We have shown that the adaptive-importance sampling paradigm of the reweighted wake-sleep (RWS) is preferable to the multi-sample objective paradigm of importance weighted autoencoders (IWAEs) because the former achieves all the goals of the latter whilst avoiding its drawbacks. A On the rôle of the self-normalisation bias within RWS/AISLE Within the self-normalised importance-sampling approximation, the number of particles, K, interpolates between two extremes: • As K ↑ ∞,π θ φ, z (f) becomes an increasingly accurate approximation of π θ (f). • For K = 1, however,π θ φ, z (f) = f (z 1) reduces to a vanilla Monte Carlo approximation of q φ (f) (because the single self-normalised importance weight is always equal to 1). This leads to the following insight about the estimators ∇ aisle-kl φ θ, z and ∇ aisle-χ 2 φ θ, z. • As K ↑ ∞, these two estimators become increasingly accurate approxi-, respectively. • For K = 1, however, these two estimators reduce to vanilla Monte Carlo ap- This is similar to the standard IWAE φ-gradient which also represents a vanilla Monte Carlo approximation of −∇ φ KL(q φ π θ) if K = 1 as IWAE reduces to a VAE in this case. Characterising the small-K self-normalisation bias of the reparametrisation-free AISLE φ gradients, AISLE-KL-NOREP and AISLE-χ 2 -NOREP, is more difficult because if K = 1, they constitute vanilla Monte Carlo approximations of q φ (∇ φ log q φ) = 0. Nonetheless, Le et al. (2019, Figure 5) lends some support to the hypothesis that the small-K self-normalisation bias of these gradients also favours a minimisation of the exclusive KL-divergence. 
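The identity χ²(π_θ || q_φ) = var_{z∼q_φ}[w_ψ/Z_θ] is easy to verify numerically; the sketch below does so on the toy conjugate Gaussian model used earlier (illustrative only), comparing a Monte Carlo estimate of the weight variance with a dense-grid quadrature of ∫ π²/q dz − 1.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy conjugate model: z ~ N(0, 1), x | z ~ N(z, 1), so pi_theta = N(x/2, 1/2)
# and Z_theta = N(x; 0, 2) are available in closed form.
x = 1.5
Z = np.exp(-0.25 * x ** 2) / np.sqrt(4.0 * np.pi)
post_mean, post_sd = x / 2.0, np.sqrt(0.5)

# Proposal q_phi = N(m, s^2); s^2 must exceed half the posterior variance for chi^2 to be finite.
m, s = 0.0, 1.0
K = 100_000
z = m + s * rng.normal(size=K)
gamma = np.exp(-0.5 * z ** 2 - 0.5 * (x - z) ** 2) / (2.0 * np.pi)
q = np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
w_over_Z = gamma / (q * Z)

# chi^2(pi || q) = Var_{z ~ q}[w / Z]; Monte Carlo estimate vs. dense-grid quadrature.
chi2_mc = np.mean(w_over_Z ** 2) - 1.0
grid = np.linspace(-10.0, 10.0, 200_001)
pi_grid = np.exp(-0.5 * ((grid - post_mean) / post_sd) ** 2) / (post_sd * np.sqrt(2.0 * np.pi))
q_grid = np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
chi2_quad = np.sum(pi_grid ** 2 / q_grid) * (grid[1] - grid[0]) - 1.0
print(f"chi^2 estimate: {chi2_mc:.4f}   quadrature: {chi2_quad:.4f}")
```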
Recall that the main motivation for use of IWAEs (instead of VAEs) was the idea that we could use self-normalised importance-sampling approximations with K > 1 particles to reduce the bias of the θ-gradient relative to ∇ θ log Z θ. The error of such (self-normalised) importance-sampling approximations can be controlled by ensuring that q φ is close to π θ (in some suitable sense) in any part of the space Z in which π θ has positive probability mass. For instance, it is well known that the error will be small if the'inclusive' KL-divergence KL(π θ q φ) is small as this implies well-behaved importance weights. In contrast, a small'exclusive' KL-divergence KL(q φ π θ) is not sufficient for well-behaved importance weights because the latter only ensures that q φ is close to π θ in those parts of the space Z in which q φ has positive probability mass. Let Q:= {q φ} (which is indexed by φ) be the family of proposal distributions/the variational family. Then we can distinguish two scenarios. 1. Sufficiently expressive Q. For the moment, assume that the family Q is flexible ('expressive') enough in the sense that it contains a distribution q φ which is (at least approximately) equal to π θ and that our optimiser can reach the value φ of φ. In this case, minimising the exclusive KL-divergence can still yield well-behaved importance weights because in this case, φ:= arg min φ KL(π θ q φ) is (at least approximately) equal to arg min φ KL(q φ π θ). 2. Insufficiently expressive Q. In general, the family Q is not flexible enough in the sense that all of its members are'far away' from π θ, e.g. if the is fully factorised. In this case, minimising the exclusive KL-divergence could lead to poorly-behaved importance weights and we should optimise φ:= arg min φ KL(π θ q φ) as discussed above. Remark 4. In Scenario 1 above, i.e. for a sufficiently flexible Q, using a gradient-descent algorithm which seeks to minimise the exclusive divergence can sometimes be preferable to a gradient-descent algorithm which seeks to minimise the inclusive divergence. This is because both find (approximately) the same optimum but the latter may exhibit faster convergence in some applications. In such scenarios, the discussion in Subsection A.1 indicates that a smaller number of particles, K, could then be preferable for some of the φ-gradients because (a) the O(K −1) self-normalisation bias outweighs the O(K −1/2) standard deviation and (b) the direction of this bias may favour faster convergence. Unfortunately, simply setting K = 1 for the approximation of the φ-gradients 2 is not necessarily optimal because • even in the somewhat idealised scenario 1 above and even if the direction of the self-normalisation bias encourages faster convergence, increasing K is still desirable to reduce the variance of the gradient approximations and furthermore, even in this scenario, seeking to optimise the exclusive KL-divergence could lead to poorly behaved importance-sampling approximations of the θ-gradient whenever φ is still far away from optimal; • not using the information contained in all K particles and weights (which have already been sampled/calculated to approximate the θ-gradient) seems wasteful; • if K = 1, the reparametrisation-free AISLE φ-gradients, AISLE-KL-NOREP and AISLE-χ 2 -NOREP are simply vanilla Monte Carlo estimates of 0 and the RWS-DREG φ-gradient is then equal to 0. 
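The K-interpolation described in this appendix is easy to see numerically. On the toy conjugate Gaussian model (an illustrative assumption, not taken from the paper), the self-normalised estimate of the posterior mean equals a draw from the proposal for K = 1 and moves towards the true posterior mean as K grows, with the bias shrinking at roughly the O(1/K) rate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy conjugate model: z ~ N(0, 1), x | z ~ N(z, 1); posterior mean is x/2.
# Proposal q_phi = N(0, 1), whose mean is 0. For K = 1 the self-normalised estimator
# of the posterior mean reduces to a vanilla estimate of the *proposal* mean.
x = 1.5

def self_normalised_posterior_mean(K):
    z = rng.normal(size=K)                 # draws from q = N(0, 1)
    log_w = -0.5 * (x - z) ** 2            # log gamma - log q simplifies to -(x - z)^2 / 2 here
    w = np.exp(log_w - log_w.max())
    return np.sum(w * z) / np.sum(w)

for K in (1, 4, 16, 64, 256):
    estimates = np.array([self_normalised_posterior_mean(K) for _ in range(5_000)])
    print(f"K = {K:4d}: mean estimate {estimates.mean():+.3f} "
          f"(posterior mean {x / 2:.3f}, proposal mean +0.000)")
```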
In these supplementary materials, we illustrate the different φ-gradient estimators (recall that all algorithms discussed in this work share the same θ-gradient estimator). Specifically, we compare the following approximations. • AISLE-KL-NOREP. The gradient for AISLE based on the KL-divergence without any further reparametrisation from i.e. this coincides with the standard RWSgradient from. This gradient does not require R1 but does not achieve zero variance even if q φ = π θ. • AISLE-KL. The gradient for AISLE based on the KL-divergence after reparametrising and exploiting the identity from Lemma 1; it is given by and coincides with the IWAE-STL-gradient from. • AISLE-χ 2 -NOREP. The gradient for AISLE based on the χ 2 -divergence without any reparametrisation given in. This gradient again does not require R1 but does not achieve zero variance even if q φ = π θ. • AISLE-χ 2. The gradient for AISLE based on the χ 2 -divergence after reparametrising and exploiting the identity from Lemma 1; it is given by and is alsow proportional to IWAE-DREG from which was stated in. When normalising the gradients (as, e.g. implicitly done by optimisers such as) the proportionality constant cancels out so that both these gradient approximations lead to computationally the same algorithm. • IWAE. The gradient for IWAE employing the reparametrisation trick from. Its sampling approximation is given in. Recall that this is the φ-gradient whose signal-to-noise ratio degenerates with K as pointed out in (and which also cannot achieve zero variance even if q φ = π θ). • IWAE-DREG. The'doubly-reparametrised' IWAE gradient from which was proposed in. It is proportional to AISLE-χ 2. • RWS-DREG. The'doubly-reparametrised' RWS φ-gradient from Hereafter, wherever necessary, we add an additional subscript to make the dependence on the observations explicit. The joint law (the 'generative model'), parametrised by θ, of the observations and latent variables then factorises as We model each latent variable-observation pair (z, x) as..,D} ∈ R D×D is assumed to be known and where I denotes the D × D-identity matrix. For any θ, with P:= (Σ −1 + I) −1 and ν θ,x:= P (Σ −1 µ + x). In particular, implies that Proposal/variational approximation. We take the proposal distributions as a fullyfactored Gaussian: where. The parameters to optimise are thus where denotes the column vector formed by the elements in the dth row of A. Furthermore, for the reparametrisation trick, we take q:= N(; 0, I), where 0 ∈ R D is a vector whose elements are all 0, so that Note that the mean of the proposal in coincides with the mean of the posterior in if A = P and b This model is similar to the one used as a benchmark in Rainforth et al. (2018, Section 4) and also in Tucker et al. (2019, Section 6.1) who specified both the generative model and the variational approximation to be isotropic Gaussians. Specifically, their setting can be recovered by taking Σ:= I and fixing c d = log(2/3)/2 so that C = 2 3 I throughout. Here, in order to investigate a slightly more realistic scenario, we also allow for the components of the latent vectors z to be correlated/dependent under the generative model. However, as the variational approximation remains restricted to being fully factored, it may fail to fully capture the uncertainty about the latent variables. φ,x (z), we then have Note that the only source of randomness in this expression is the multivariate normal random variable. 
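As a quick concrete check of the formulas just stated, the following numpy snippet (with illustrative values of D, µ and Σ) computes the exact posterior moments P and ν<sub>θ,x</sub> and draws reparametrised samples from a fully-factored proposal whose mean matches the posterior mean when A = P and b = P Σ⁻¹ µ; the particular diagonal covariance used here is only one possible assumption, since a factored q cannot reproduce the full P.

```python
# Sketch of the conjugate-Gaussian computations above (illustrative parameters).
import numpy as np

D = 5
mu = np.zeros(D)
Sigma = np.array([[0.95 ** (abs(d - e) + 1) for e in range(D)] for d in range(D)])
x = np.random.default_rng(1).normal(size=D)               # one observation

Sigma_inv = np.linalg.inv(Sigma)
P = np.linalg.inv(Sigma_inv + np.eye(D))                  # exact posterior covariance
nu = P @ (Sigma_inv @ mu + x)                             # exact posterior mean

# Fully-factored proposal with reparametrisation z = A x + b + sqrt(C) * eps, eps ~ N(0, I).
A, b = P, P @ Sigma_inv @ mu                              # mean parameters matching the posterior mean
C = np.diag(np.diag(P))                                   # one simple diagonal choice (marginal variances)
eps = np.random.default_rng(2).normal(size=(100_000, D))
z = A @ x + b + eps * np.sqrt(np.diag(C))                 # reparametrised samples
print(np.allclose(z.mean(axis=0), nu, atol=0.05))         # proposal mean recovers the posterior mean
```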
Thus, by and, for any values of A and b and any K ≥ 1, the variance of the A-and b-gradient portion of AISLE-KL/IWAE-STL and AISLE-χ 2 /IWAE-DREG goes to zero as C → C = 1 2 I. In other words, in this model, these'score-function free' φ-gradients achieve (near) zero variance for the parameters governing the proposal mean as soon as the variance-parameters fall within a neighbourhood of their optimal values. Furthermore, combined with shows that for any K ≥ 1, the variance of the C-gradient portion also goes to zero as (A, b, C) → (A, b, C). A more thorough analysis of the benefits of reparametrisation-trick gradients in Gaussian settings is carried out in. Setup. We end this section by empirically comparing the algorithms from Subsection B.1. We run each of these algorithms for a varying number of particles, K ∈ {1, 10, 100}, and varying model dimensions, D ∈ {2, 5, 10}. Each of these configurations is repeated independently 100 times. Each time using a new synthetic data set consisting of N = 25 observations sampled from the generative model after generating a new'true' prior mean vector as µ ∼ N(0, I). Since all the algorithms share the same θ-gradient, we focus only on the optimisation of φ and thus simply fix θ:= θ ml throughout. We show for the following model settings. • Figure 1. The generative model is specified via Σ = I. In this case, there exists a value φ of φ such that q φ,x (z) = π θ,x (z). Note that this corresponds to Scenario 1 in Subsection A.2. • Figure 2. The generative model is specified via Σ = (0.95 |d−d |+1) (d,d)∈{1,...,D} 2. Note that in this case, the fully-factored variational approximation cannot fully mimic the dependence structure of the latent variables under the generative model. That is, in this case, q φ,x (z) = π θ,x (z) for any values of φ. Note that this corresponds to Scenario 2 in Subsection A.2. To initialise the gradient-ascent algorithm, we draw each component of the initial values φ 0 of φ IID according to a standard normal distribution. We use both plain stochastic gradient-ascent with the gradients normalised to have unit L 1 -norm (Figures 1a, 2a) and ADAM with default parameter values (Figures 1b, The total number of iterations is 10, 000; in each case, the learning-rate parameters at the ith step are i −1/2 . Figure 1 except that here, the covariance matrix Σ = (0.95 |d−e|+1) (d,e)∈{1,...,D} 2 is not a diagonal matrix. Again, note the logarithmic scaling on the second axis.
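For reference, a hedged PyTorch sketch of one of the compared estimators, the AISLE-KL / IWAE-STL φ-gradient, applied to this toy model is given below. It uses a single observation, fixes θ, and writes the estimator as a loss to be minimised; the variable names, sign convention and parameter initialisations are illustrative assumptions rather than the experimental code.

```python
# Sketch of the 'sticking-the-landing' style phi-gradient: the q-density is
# evaluated with its parameters detached, so phi only enters through the
# reparametrised particles z(phi).
import torch

D, K = 5, 10
torch.manual_seed(0)
idx = torch.arange(D, dtype=torch.float32)
Sigma = 0.95 ** (torch.abs(idx[:, None] - idx[None, :]) + 1.0)   # correlated prior covariance
Sigma_inv = torch.inverse(Sigma)
mu = torch.zeros(D)
x = torch.randn(D)                                               # one observation

# Variational parameters phi = (A, b, c): proposal N(A x + b, diag(exp(2 c)))
A = torch.zeros(D, D, requires_grad=True)
b = torch.zeros(D, requires_grad=True)
c = torch.zeros(D, requires_grad=True)

def log_joint(z):                                                # log p_theta(z, x), up to constants
    lp_z = -0.5 * ((z - mu) @ Sigma_inv * (z - mu)).sum(-1)
    lp_x = -0.5 * ((x - z) ** 2).sum(-1)
    return lp_z + lp_x

def log_q(z, m, s):                                              # fully-factored Gaussian log-density
    return (-0.5 * ((z - m) / s) ** 2 - torch.log(s)).sum(-1)

m, s = A @ x + b, torch.exp(c)
eps = torch.randn(K, D)
z = m + s * eps                                                  # reparametrised particles
logw = log_joint(z) - log_q(z, m.detach(), s.detach())          # stop-gradient through the q-density
w_tilde = torch.softmax(logw, dim=0).detach()                    # self-normalised weights, also detached
(-(w_tilde * logw).sum()).backward()                             # negative objective, for a minimising optimiser
print(A.grad.shape, b.grad.shape, c.grad.shape)                  # phi-gradients are now populated
```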
We show that most variants of importance-weighted autoencoders can be derived in a more principled manner as special cases of adaptive importance-sampling approaches like the reweighted wake-sleep algorithm.
As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf. Deep learning is booming thanks to enormous datasets and very large models, leading to the fact that the largest datasets and models can no longer be trained on a single machine. One common solution to this problem is to use distributed systems for training. The most common algorithms underlying deep learning are stochastic gradient descent (SGD) and its variants, which led to a significant amount of research on building and understanding distributed versions of SGD. Implementations of SGD on distributed systems and data-parallel versions of SGD are scalable and take advantage of multi-GPU systems. Data-parallel SGD, in particular, has received significant attention due to its excellent scalability properties (; ; ; ; ; ; ; ; . In data-parallel SGD, a large dataset is partitioned among K processors. These processors work together to minimize an objective function. Each processor has access to the current parameter vector of the model. At each SGD iteration, each processor computes an updated stochastic gradient using its own local data. It then shares the gradient update with its peers. The processors collect and aggregate stochastic gradients to compute the updated parameter vector. Increasing the number of processing machines reduces the computational costs significantly. However, the communication costs to share and synchronize huge gradient vectors and parameters increases dramatically as the size of the distributed systems grows. Communication costs may thwart the anticipated benefits of reducing computational costs. Indeed, in practical scenarios, the communication time required to share stochastic gradients and parameters is the main performance bottleneck (; ; ; . Reducing communication costs in data-parallel SGD is an important problem. One promising solution to the problem of reducing communication costs of data-parallel SGD is gradient compression, e.g., through gradient quantization (; ; ; ; ; ;). (This should not be confused with weight quantization/sparsification, as studied by ; ; ;, which we do not discuss here.) Unlike full-precision data-parallel SGD, where each processor is required to broadcast its local gradient in full-precision, i.e., transmit and receive huge full-precision vectors at each iteration, quantization requires each processor to transmit only a few communication bits per iteration for each component of the stochastic gradient. One popular such proposal for communication-compression is quantized SGD (QSGD), due to. In QSGD, stochastic gradient vectors are normalized to have unit L 2 norm, and then compressed by quantizing each element to a uniform grid of quantization levels using a randomized method. 
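For concreteness, a minimal numpy sketch of this uniform stochastic quantiser is shown below. It follows the description given here (L2 normalisation, s uniform levels, randomised rounding that keeps the estimate unbiased), but it is an illustration rather than the reference QSGD implementation and omits the encoding stage.

```python
import numpy as np

def qsgd_quantize(v, s, rng=np.random.default_rng(0)):
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    r = np.abs(v) / norm                     # normalised magnitudes in [0, 1]
    scaled = r * s                           # position on the uniform grid of s levels
    lower = np.floor(scaled)
    p = scaled - lower                       # probability of rounding up
    level = lower + (rng.random(v.shape) < p)
    return np.sign(v) * norm * level / s     # unbiased reconstruction

g = np.random.default_rng(1).normal(size=10_000)
q = np.mean([qsgd_quantize(g, s=16, rng=np.random.default_rng(k)) for k in range(200)], axis=0)
print(np.allclose(q, g, atol=0.02 * np.linalg.norm(g)))   # averaging many draws recovers g (unbiasedness)
```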
While most lossy compression schemes do not provide convergence guarantees, QSGD's quantization scheme, is designed to be unbiased, which implies that the quantized stochastic gradient is itself a stochastic gradient, only with higher variance determined by the dimension and number of quantization levels. As a , are able to establish a number of theoretical guarantees for QSGD, including that it converges under standard assumptions. By changing the number of quantization levels, QSGD allows the user to trade-off communication bandwidth and convergence time. Despite their theoretical guarantees based on quantizing after L 2 normalization, Alistarh et al. opt to present empirical using L ∞ normalization. We call this variation QSGDinf. While the empirical performance of QSGDinf is strong, their theoretical guarantees on the number of bits transmitted no longer apply. Indeed, in our own empirical evaluation of QSGD, we find the variance induced by quantization is substantial, and the performance is far from that of SGD and QSGDinf. Given the popularity of this scheme, it is natural to ask one can obtain guarantees as strong as those of QSGD while matching the practical performance of the QSGDinf heuristic. In this work, we answer this question in the affirmative by providing a new quantization scheme which fits into QSGD in a way that allows us to establish stronger theoretical guarantees on the variance, bandwidth, and cost to achieve a prescribed gap. Instead of QSGD's uniform quantization scheme, we use an unbiased nonuniform logarithmic scheme, similar to those introduced in telephony systems for audio compression . We call the ing algorithm nonuniformly quantized stochastic gradient descent (NUQSGD). Like QSGD, NUQSGD is a quantized data-parallel SGD algorithm with strong theoretical guarantees that allows the user to trade off communication costs with convergence speed. Unlike QSGD, NUQSGD has strong empirical performance on deep models and large datasets, matching that of QSGDinf. In particular, we provide a new efficient implementation for these schemes using a modern computational framework (Pytorch), and benchmark it on classic large-scale image classification tasks. The intuition behind the nonuniform quantization scheme underlying NUQSGD is that, after L 2 normalization, many elements of the normalized stochastic gradient will be near-zero. By concentrating quantization levels near zero, we are able to establish stronger bounds on the excess variance. In the overparametrized regime of interest, these bounds decrease rapidly as the number of quantization levels increases. Combined with a bound on the expected code-length, we obtain a bound on the total communication costs of achieving an expected suboptimality gap. The ing bound is slightly stronger than the one provided by QSGD. To study how quantization affects convergence on state-of-the-art deep models, we compare NUQSGD, QSGD, and QSGDinf, focusing on training loss, variance, and test accuracy on standard deep models and large datasets. Using the same number of bits per iteration, experimental show that NUQSGD has smaller variance than QSGD, as expected by our theoretical . This smaller variance also translates to improved optimization performance, in terms of both training loss and test accuracy. We also observe that NUQSGD matches the performance of QSGDinf in terms of variance and loss/accuracy. 
Further, our distributed implementation shows that the ing algorithm considerably reduces communication cost of distributed training, without adversely impacting accuracy. Our empirical show that NUQSGD can provide faster end-to-end parallel training relative to data-parallel SGD, QSGD, and Error-Feedback SignSGD on the ImageNet dataset. • We establish stronger theoretical guarantees for the excess variance and communication costs of our gradient quantization method than those available for QSGD's uniform quantization method. • We then establish stronger convergence guarantees for the ing algorithm, NUQSGD, under standard assumptions. • We demonstrate that NUQSGD has strong empirical performance on deep models and large datasets, both in terms of accuracy and scalability. Thus, NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf. proposed signSGD, an efficient heuristic scheme to reduce communication costs drastically by quantizing each gradient component to two values. later provided convergence guarantees for signSGD. Note that the quantization employed by signSGD is not unbiased, and so a new analysis was required. As the number of levels is fixed, SignSGD does not provide any trade-off between communication costs and convergence speed. introduced Buckwild!, a lossy compressed SGD with convergence guarantees. The authors provided bounds on the error probability of SGD, assuming convexity and gradient sparsity. proposed TernGrad, a stochastic quantization scheme with three levels. TernGrad also significantly reduces communication costs and obtains reasonable accuracy with a small degradation to performance compared to full-precision SGD. Convergence guarantees for TernGrad rely on a nonstandard gradient norm assumption. As discussed, proposed QSGD, a more general stochastic quantization scheme, for which they provide both theoretical guarantees and experimental validation (although for different variants of the same algorithm). We note that their implementation was only provided in Microsoft CNTK; by contrast, here we provide a more generic implementation in Horovod , a communication back-end which can support a range of modern frameworks such as Tensorflow, Keras, Pytorch, and MXNet. NUQSGD uses a logarithmic quantization scheme. Such schemes have long been used in telephony systems for audio compression . Logarithmic quantization schemes have appeared in other contexts recently: studied weight distributions of long short-term memory networks and proposed to use logarithm quantization for network compression. proposed a gradient compression scheme and introduced an optimal quantization scheme, but for the setting where the points to be quantized are known in advance. As a , their scheme is not applicable to the communication setting of quantized data-parallel SGD. We consider a high-dimensional machine learning model, parametrized by a vector w ∈ R d. Let Ω ⊆ R d denote a closed and convex set. Our objective is to minimize f: Ω → R, which is an unknown, differentiable, convex, and β -smooth function. The following summary is based on. where · denotes the Euclidean norm. Let (S, Σ, µ) be a probability space (and let E denote expectation). Assume we have access to stochastic gradients of f, i.e., we have access to a function In the rest of the paper, we let g(w) denote the stochastic gradient for notational simplicity. 
The update rule for conventional full-precision projected SGD is w t+1 = P Ω (w t − αg(w t)), where w t is the current parameter input, α is the learning rate, and P Ω is the Euclidean projection onto Ω. We say the stochastic gradient has a second-moment upper bound B when E[g(w) 2 ] ≤ B for all w ∈ Ω. Similarly, the stochastic gradient has a variance upper bound Note that a second-moment upper bound implies a variance upper bound, because the stochastic gradient is unbiased. We have classical convergence guarantees for conventional full-precision SGD given access to stochastic gradients at each iteration: Theorem 1 (Bubeck 2015, Theorem 6.3). Let f: Ω → R denote a convex and β -smooth function and let R 2 sup w∈Ω w − w 0 2. Suppose that the projected SGD update is executed for T iterations with α = 1/(β + 1/γ) where γ = r 2/T /σ. Given repeated and independent access to stochastic gradients with a variance upper bound σ 2, projected SGD satisfies Minibatched (with larger batch sizes) and data-parallel SGD are two common SGD variants used in practice to reduce variance and improve computational efficiency of conventional SGD. At each iteration, each processor computes its own stochastic gradient based on its local data and then broadcasts it to all peers. Each processor receives and aggregates the stochastic gradients from all peers to obtain the updated parameter vector. In detail, the update rule for full-precision dataparallel SGD is where g l (w t) is the stochastic gradient computed and broadcasted by processor l. Provided that g l (w t) is a stochastic gradient with a variance upper bound σ 2 for all l, then Data-parallel SGD is described in Algorithm 1. Full-precision data-parallel SGD is a special case of Algorithm 1 with identity encoding and decoding mappings. Otherwise, the decoded stochastic gradientĝ i (w t) is likely to be different from the original local stochastic gradient g i (w t). By Theorem 1, we have the following convergence guarantees for full-precision data-parallel SGD: Corollary 1 (Alistarh et al. 2017, Corollary 2.2). Let f, R, and γ be as defined in Theorem 1 and let ε > 0. Suppose that the projected SGD update is executed for T iterations with α = 1/(β + √ K/γ) on K processors, each with access to independent stochastic gradients of f with a second-moment bound B. The smallest T for the full-precision data-parallel SGD that guarantees Data-parallel SGD reduces computational costs significantly. However, the communication costs of broadcasting stochastic gradients is the main performance bottleneck in large-scale distributed systems. In order to reduce communication costs and accelerate training, introduced a compression scheme that produces a compressed and unbiased stochastic gradient, suitable for use in SGD. At each iteration of QSGD, each processor broadcasts an encoding of its own compressed stochastic gradient, decodes the stochastic gradients received from other processors, and sums all the quantized vectors to produce a stochastic gradient. In order to compress the gradients, every coordinate (with respect to the standard basis) of the stochastic gradient is normalized by the Euclidean norm of the gradient and then stochastically quantized to one of a small number quantization levels distributed uniformly in the unit interval. The stochasticity of the quantization is necessary to not introduce bias. give a simple argument that provides a lower bound on the number of coordinates that are quantized to zero in expectation. 
Encoding these zeros efficiently provides communication savings at each iteration. However, the cost of their scheme is greatly increased variance in the gradient, and thus slower overall convergence. In order to optimize overall performance, we must balance communication savings with variance. By simple counting arguments, the distribution of the (normalized) coordinates cannot be uniform. Indeed, this is the basis of the lower bound on the number of zeros. These arguments make no assumptions on the data distribution, and rely entirely on the fact that the quantities being quantized are the coordinates of a unit-norm vector. Uniform quantization does not capture the properties of such vectors, leading to substantial gradient variance. In this paper, we propose and study a new scheme to quantize normalized gradient vectors. Instead of uniformly distributed quantization levels, as proposed by, we consider quantization levels that are nonuniformly distributed in the unit interval, as depicted in Figure 1. In order to obtain a quantized gradient that is suitable for SGD, we need the quantized gradient to remain unbiased. achieve this via a randomized quantization scheme, which can be easily generalized to the case of nonuniform quantization levels. Using a carefully parametrized generalization of the unbiased quantization scheme introduced by Alistarh et al., we can control both the cost of communication and the variance of the gradient. Compared to a uniform quantization scheme, our scheme reduces quantization error and variance by better matching the properties of normalized vectors. In particular, by increasing the number of quantization levels near zero, we obtain a stronger variance bound. Empirically, our scheme also better matches the distribution of normalized coordinates observed on real datasets and networks. We now describe the nonuniform quantization scheme: Let s ∈ {1, 2, · · ·} be the number of internal quantization levels, and let L = (l 0, l 1, · · ·, l s+1) denote the sequence of quantization levels, where, lets(r) and p(r) satisfy ls (r) ≤ r ≤ ls (r)+1 and r = 1 − p(r) ls (r) + p(r)ls (r)+1, respectively. Define τ(r) = ls (r)+1 − ls (r). Note thats(r) ∈ {0, 1, · · ·, s}. where, letting r i = |v i |/ v, the h i (v, s)'s are independent random variables such that h i (v, s) = ls (r i) with probability 1 − p(r i) and h i (v, s) = ls (r i)+1 otherwise. We note that the distribution of h i (v, s) satisfies E[h i (v, s)] = r i and achieves the minimum variance over all distributions that satisfy E[h i (v, s)] = r i with support L. In the following, we focus on a special case of nonuniform quantization withL = (0, 1/2 s, · · ·, 2 s−1 /2 s, 1) as the quantization levels. The intuition behind this quantization scheme is that it is very unlikely to observe large values of r i in the stochastic gradient vectors of machine learning models. Stochastic gradients are observed to be dense vectors . Hence, it is natural to use fine intervals for small r i values to reduce quantization error and control the variance. After quantizing the stochastic gradient with a small number of discrete levels, each processor must encode its local gradient into a binary string for broadcasting. We describe this encoding in Appendix A. In this section, we provide theoretical guarantees for NUQSGD, giving variance and code-length bounds, and using these in turn to compare NUQSGD and QSGD. Please note that the proofs of Theorems 2, 3, 4, and 5 are provided in Appendices B, C, D, and E respectively. 
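A minimal numpy sketch of the nonuniform quantiser just defined is given below, assuming the levels (0, 1/2^s, ..., 2^{s-1}/2^s, 1). It illustrates the stochastic rounding rule and its unbiasedness; it is not the paper's implementation and, as above, the encoding stage is omitted.

```python
import numpy as np

def nuq_quantize(v, s, rng=np.random.default_rng(0)):
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    levels = np.concatenate(([0.0], 2.0 ** np.arange(-s, 1)))   # (0, 1/2^s, ..., 1/2, 1)
    r = np.abs(v) / norm
    idx = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]                        # neighbouring levels of each r_i
    p = (r - lo) / (hi - lo)                                     # chosen so that r = (1 - p) lo + p hi
    h = np.where(rng.random(v.shape) < p, hi, lo)                # stochastic level choice
    return np.sign(v) * norm * h

g = np.random.default_rng(1).normal(size=10_000)
q = np.mean([nuq_quantize(g, s=3, rng=np.random.default_rng(k)) for k in range(500)], axis=0)
print(np.allclose(q, g, atol=0.03 * np.linalg.norm(g)))          # unbiased on average
```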
where The in Theorem 2 implies that if g(w) is a stochastic gradient with a second-moment bound η, then Q s (g(w)) is a stochastic gradient with a variance upper bound ε Q η. In the range of interest where d is sufficiently large, i.e., s = o(log(d)), the variance upper bound decreases with the number of quantization levels. To obtain this data-independent bound, we establish upper bounds on the number of coordinates of v falling into intervals defined byL. where Theorem 3 provides a bound on the expected number of communication bits to encode the quantized stochastic gradient. Note that 2 2s + √ d2 s ≤ d/e is a mild assumption in practice. As one would expect, the bound,, increases monotonically in d and s. In the sparse case, if we choose s = o(log d) levels, then the upper bound on the expected code-length is Combining the upper bounds above on the variance and code-length, Corollary 1 implies the following guarantees for NUQSGD: Theorem 4 (NUQSGD for smooth convex optimization). Let f and R be defined as in Theorem 1, let ε Q be defined as in Theorem 2, let ε > 0,B = (1 + ε Q)B, and let γ > 0 be given by γ 2 = 2R 2 /(BT). With ENCODE and DECODE defined as in Appendix A, suppose that Algorithm 1 is executed for T iterations with a learning rate α = 1/(β + √ K/γ) on K processors, each with access to independent stochastic gradients of f with a second-moment bound B. Then In addition, NUQSGD requires at most N Q communication bits per iteration in expectation. On nonconvex problems, (weaker) convergence guarantees can be established along the lines of, e.g., (, Theorem 2.1). How do QSGD and NUQSGD compare in terms of bounds on the expected number of communication bits required to achieve a given suboptimality gap ε? The quantity that controls our guarantee on the convergence speed in both algorithms is the variance upper bound, which in turn is controlled by the quantization schemes. Note that the number of quantization levels, s, is usually a small number in practice. On the other hand, the dimension, d, can be very large, especially in overparameterized networks. In Figure 2, we show that the quantization scheme underlying NUQSGD in substantially smaller variance upper bounds for plausible ranges of s and d. Note that these bounds do not make any assumptions on the dataset or the structure of the network. For any (nonrandom) number of iterations T, an upper bound, N A, holding uniformly over iterations k ≤ T on the expected number of bits used by an algorithm A to communicate the gradient on iteration k, yields an upper bound T N A, on the expected number of bits communicated over T iterations by algorithm A. Taking T = T A,ε to be the (minimum) number of iterations needed to guarantee an expected suboptimality gap of ε based on the properties of A, we obtain an upper bound, ζ A,ε = T A,ε N A, on the expected number of bits of communicated on a run expected to achieve a suboptimality gap of at most ε. Theorem 5 (Expected number of communication bits). Provided that s = o(log(d)) and Focusing on the dominant terms in the expressions of overall number of communication bits required to guarantee a suboptimality gap of ε, we observe that NUQSGD provides slightly stronger guarantees. Note that our stronger guarantees come without any assumption about the data. In this section, we examine the practical performance of NUQSGD in terms of both convergence (accuracy) and speedup. 
The goal is to empirically show that NUQSGD can provide the same performance and accuracy compared to the QSGDInf heuristic, which has no theoretical compression guarantees. For this, we implement and test these three methods (NUQSGD, QSGD, and QSGDInf), together with the distributed full-precision SGD baseline, which we call SuperSGD. We split our study across two axes: first, we examine the convergence of the methods and their induced variance. Second, we provide an efficient implementation of all four methods in Pytorch using the Horovod communication back-end , adapted to efficiently support quantization, and examine speedup relative to the full-precision baseline. We investigate the impact of quantization on training performance by measuring loss, variance, accuracy, and speedup for ResNet models applied to ImageNet and CIFAR10 (Krizhevsky). We evaluate these methods on two image classification datasets: ImageNet and CIFAR10. We train ResNet110 on CIFAR10 and ResNet18 on ImageNet with mini-batch size 128 and base learning rate 0.1. In all experiments, momentum and weight decay are set to 0.9 and 10 −4, respectively. The bucket size and the number of quantization bits are set to 8192 and 4, respectively. We observe similar in experiments with various bucket sizes and number of bits. We simulate a scenario with k GPUs for all three quantization methods by estimating the gradient from k independent mini-batches and aggregating them after quantization and dequantization. In Figure 3 (left and middle), we show the training loss with 8 GPUs. We observe that NUQSGD and QSGDinf improve training loss compared to QSGD on ImageNet. We observe significant gap in training loss on CIFAR10 where the gap grows as training proceeds. We also observe similar performance gaps in test accuracy (provided in Appendix F). In particular, unlike NUQSGD, QSGD does not achieve test accuracy of full-precision SGD. Figure 3 (right) shows the mean normalized variance of the gradient (defined in Appendix F) versus training iteration on the trajectory of single-GPU SGD on CIFAR10. These observations validate our theoretical that NUQSGD has smaller variance for large models with small number of quantization bits. Efficient Implementation and Speedup. To examine speedup behavior, we implemented all quantization methods in Horovod , a communication back-end supporting Pytorch, Tensorflow and MXNet. Doing so efficiently requires non-trivial refactoring of this framework, since it does not support communication compression-our framework will be open-sourced upon publication. Our implementation diverges slightly from the theoretical analysis. First, Horovod applies "tensor fusion" to multiple layers, by merging the ing gradient tensors for more efficient transmission. This causes the gradients for different layers to be quantized together, which can lead to loss of accuracy (due to e.g. different normalization factors across the layers). We addressed this by tuning the way in which tensor fusion is applied to the layers such that it minimizes the accuracy loss. Second, we noticed that quantizing the gradients corresponding to the biases has a significant adverse effect on accuracy; since the communication impact of biases is negligible, we transmit them at full precision. We apply this for all methods considered. Finally, for efficiency reasons, we directly pack the quantized values into 32-bit numbers, without additional encoding. We implemented compression and de-compression via efficient CUDA kernels. 
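As an illustration of the packing step mentioned above, the following numpy sketch packs 4-bit level indices eight to a 32-bit word and unpacks them losslessly. Bucket boundaries, signs and norms are omitted, and the actual implementation uses CUDA kernels within Horovod, so this is only a schematic of the idea.

```python
import numpy as np

def pack4(levels):                         # levels: uint8 array with values in [0, 15]
    levels = levels.astype(np.uint32)
    if levels.size % 8:                    # pad to a multiple of 8 values per 32-bit word
        levels = np.concatenate([levels, np.zeros(8 - levels.size % 8, np.uint32)])
    levels = levels.reshape(-1, 8)
    shifts = np.arange(8, dtype=np.uint32) * 4
    return np.bitwise_or.reduce(levels << shifts, axis=1)        # eight nibbles per word

def unpack4(words, n):
    shifts = np.arange(8, dtype=np.uint32) * 4
    vals = (words[:, None] >> shifts) & 0xF
    return vals.reshape(-1)[:n].astype(np.uint8)

x = np.random.default_rng(0).integers(0, 16, size=1000).astype(np.uint8)
print(np.array_equal(unpack4(pack4(x), x.size), x))              # True: lossless round-trip
```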
Our baselines are full-precision SGD (SuperSGD), Error-Feedback SignSGD , and the QSGDinf heuristic, which we compare against the 4-bit and 8-bit NUQSGD variants executing the same pattern. The implementation of the QSGDinf heuristic provides almost identical convergence numbers, and is sometimes omitted for visibility. (QSGD yields inferior convergence on this dataset and is therefore omitted.) All variants are implemented using a standard all-to-all reduction pattern. Figures 4 (left), (middle) show the execution time per epoch for ResNet34 and ResNet50 models on ImageNet, on a cluster machine with 8 NVIDIA 2080 Ti GPUs, for the hyperparameter values quoted above. The confirm the efficiency and scalability of the compressed variant, mainly due to the reduced communication volume. We note that the overhead of compression and decompression is less than 1% of the batch computation time for NUQSGD. Figure 4 (right) presents end-to-end speedup numbers (time versus accuracy) for ResNet50/ImageNet, executed on 4 GPUs, under the same hyperparameter settings as the full-precision baseline, with bucket size 512. First, notice that NUQSGD variants match the target accuracy of the 32-bit model, with non-trivial speedup over the standard data-parallel variant, directly proportional to the perepoch speedup. The QSGDinf heuristic yields similar accuracy and performance, and is therefore omitted. Second, we found that unfortunately EF-SignSGD does not converge under these standard hyperparameter settings. To address this issue, we performed a non-trivial amount of hyperparameter tuning for this algorithm: in particular, we found that the scaling factors and the bucket size must be carefully adjusted for convergence on ImageNet. We were able to recover full accuracy with EF-SignSGD on ResNet50, but that the cost of quantizing into buckets of size 64. Unfortunately, in this setting the algorithm transmits a non-trivial amount of scaling data, and the GPU implementation becomes less efficient due to error computation and reduced parallelism. The end-to-end speedup of this tuned variant is inferior to NUQSGD-4bit, and only slightly superior to that of NUQSGD-8bit. Please see Figure 9 in the Appendix and the accompanying text for details. We study data-parallel and communication-efficient version of stochastic gradient descent. Building on QSGD, we study a nonuniform quantization scheme. We establish upper bounds on the variance of nonuniform quantization and the expected code-length. In the overparametrized regime of interest, the former decreases as the number of quantization levels increases, while the latter increases with the number of quantization levels. Thus, this scheme provides a trade-off between the communication efficiency and the convergence speed. We compare NUQSGD and QSGD in terms of their variance bounds and the expected number of communication bits required to meet a certain convergence error, and show that NUQSGD provides stronger guarantees. Experimental are consistent with our theoretical and confirm that NUQSGD matches the performance of QSGDinf when applied to practical deep models and datasets including ImageNet. Thus, NUQSGD closes the gap between the theoretical guarantees of QSGD and empirical performance of QSGDinf. One limitation of our study which we aim to address in future work is that we focus on all-to-all reduction patterns, which interact easily with communication compression. 
In particular, we aim to examine the interaction between more complex reduction patterns, such as ring-based reductions , which may yield superior performance in bandwidthbottlenecked settings, but which interact with communication-compression in non-trivial ways, since they may lead a gradient to be quantized at each reduction step. Read that bit plus N following bits; The encoding, ENCODE(v), of a stochastic gradient is as follows: We first encode the norm v using b bits where, in practice, we use standard 32-bit floating point encoding. We then proceed in rounds, r = 0, 1, · · ·. On round r, having transmitted all nonzero coordinates up to and including t r, we transmit ERC(i r) where t r+1 = t r + i r is either (i) the index of the first nonzero coordinate of h after t r (with t 0 = 0) or (ii) the index of the last nonzero coordinate. In the former case, we then transmit one bit encoding the sign ρ t r+1, transmit ERC(log(2 s+1 h t r+1)), and proceed to the next round. In the latter case, the encoding is complete after transmitting ρ t r+1 and ERC(log(2 s+1 h t r+1)). The DECODE function (for Algorithm 1) simply reads b bits to reconstruct v. Using ERC −1, it decodes the index of the first nonzero coordinate, reads the bit indicating the sign, and then uses ERC −1 again to determines the quantization level of this first nonzero coordinate. The process proceeds in rounds, mimicking the encoding process, finishing when all coordinates have been decoded. Like, we use Elias recursive coding (, ERC) to encode positive integers. ERC is simple and has several desirable properties, including the property that the coding scheme assigns shorter codes to smaller values, which makes sense in our scheme as they are more likely to occur. Elias coding is a universal lossless integer coding scheme with a recursive encoding and decoding structure. The Elias recursive coding scheme is summarized in Algorithm 2. For any positive integer N, the following are known for ERC We first find a simple expression of the variance of Q s (v) for every arbitrary quantization scheme in the following lemma: ), and fix s ≥ 1. The variance of Q s (v) for general sequence of quantization levels is given by where r i = |v i |/ v and p(r),s(r), τ(r) are defined in Section 3.1. Proof. Noting the random quantization is i.i.d over elements of a stochastic gradient, we can decom- where In the following, we consider NUQSGD algorithm withL = (0, 1/2 s, · · ·, 2 s−1 /2 s, 1) as the quantization levels. Then, h i (v, s)'s are defined in two cases based on which quantization interval r i falls into: where p 1 r, s = 2 s r. where p 2 r, s = 2 s− j r − 1. Note that Q s = 0. Let S j denote the coordinates of vector v whose elements fall into the (j + 1)-th bin, i.e., S 0 {i : Applying the of Lemma 1, we have where τ j l j+1 − l j for j ∈ {0, · · ·, s}. Substituting τ 0 = 2 −s and τ j = 2 j−1−s for j ∈ {1, · · ·, s} into, we have We first note that ∑ i∈S 0 p 1 (r i, s) ≤ d and ∑ i∈S j+1 p 2 (r i, s) ≤ d for all j, i.e., an upper bound on the variance of Substituting the upper bounds in and into, an upper bound on the variance of Q s (v) is given by The upper bound in cannot be used directly as it depends on {d 0, · · ·, d s}. Note that d j's depend on quantization intervals. In the following, we obtain an upper bound on E[Q s (v) − v 2 ], which depends only on d and s. To do so, we need to use this lemma inspired by (, Lemma A.5): Let · 0 count the number of nonzero components. Lemma 2. Let v ∈ R d. 
The expected number of nonzeros in Q s (v) is bounded above by For each i ∈ S 0, Q s (v i) becomes zero with probability 1 − 2 s r i, which in Using a similar argument as in the proof of Lemma 2, we have We defined...... Noting that the coefficients of the additive terms in the upper bound in are monotonically increasing with j, we can find an upper bound on, which gives and completes the proof. Let | · | denote the length of a binary string. In this section, we find an upper bound on E[|ENCODE(v)], i.e., the expected number of communication bits per iteration. Recall from Appendix A that the quantized gradient Q s (v) is determined by the tuple (v, ρ ρ ρ, h). Write i 1 < i 2 < · · · < i h 0 for the indices of the h 0 nonzero entries of h. Let i 0 = 0. The encoding produced by ENCODE(v) can be partitioned into two parts, R and E, such that, for j = 1,..., h 0, • R contains the codewords ERC(i j − i j−1) encoding the runs of zeros; and • E contains the sign bits and codewords ERC(log{2 s+1 h i j}) encoding the normalized quantized coordinates. (, Lemma A.3), the properties of Elias encoding imply that We now turn to bounding |E|. The following in inspired by (, Lemma A.3). Lemma 3. Fix a vector q such that q p p ≤ P, let i 1 < i 2 <... i q 0 be the indices of its q 0 nonzero entries, and assume each nonzero entry is of form of 2 k, for some positive integer k. Then Proof. Applying property for ERC (end of Appendix A), we have where the last bound is obtained by Jensen's inequality. Taking q = 2 s+1 h, we note that q 2 = 2 2s+2 h 2 and By Lemma 3 applied to q and the upper bound, Combining and, we obtain an upper bound on the expected code-length: where It is not difficult to show that, for all k > 0, g 1 (x) x log k x is concave. Note that g 1 is an increasing function up to x = k/e. Defining g 2 (x) x log log C x and taking the second derivative, we have Hence g 2 is also concave on x < C. Furthermore, g 2 is increasing up to some C/5 < x * < C/4. We note that E[h 0] ≤ 2 2s + √ d2 s following Lemma 2. By assumption 2 2s + √ d2 s ≤ d/e, and so, Jensen's inequality and lead us to. Let g(w) andĝ(w) denote the full-precision and decoded stochastic gradients, respectively. Then By Theorem 2, The follows by Corollary 1. Notice that the variance for NUQSGD and QSGDinf is lower than SGD for almost all the training and it decreases after the learning rate drops. All methods except SGD simulate training using 8 GPUs. SuperSGD applies no quantization to the gradients and represents the lowest variance we could hope to achieve. Ignoring all but terms depending on d and s, we have T ε = O(B/ε 2). Following Theorems 2 and 3 for NUQSGD, ζ NUQSGD,ε = O(N Q ε Q B/ε 2). For QSGD, following the of In overparameterized networks, where d ≥ 2 2s+1, we have Figure 6: Accuracy on the hold-out set on CIFAR10 (left) and on ImageNet (right) for training ResNet models from random initialization until convergence. For CIFAR10, the hold-out set is the test set and for ImageNet, the hold-out set is the validation set. In this section, we present further experimental in a similar setting to Section 5. In Figure 6, we show the test accuracy for training ResNet110 on CIFAR10 and validation accuracy for training ResNet34 on ImageNet from random initialization until convergence (discussed in Section 5). Similar to the training loss performance, we observe that NUQSGD and QSGDinf outperform QSGD in terms of test accuracy in both experiments. In both experiments, unlike NUQSGD, QSGD does not recover the test accuracy of SGD. 
The gap between NUQSGD and QSGD on ImageNet is significant. We argue that this is achieved because NUQSGD and QSGDinf have lower variance relative to QSGD. It turns out both training loss and generalization error can benefit from the reduced variance. For different methods, the variance is measured on their own trajectories. Note that the normalized variance of NUQSGD and QSGDinf is lower than SGD for almost the entire training. It decreases on CIFAR10 after the learning rate drops and does not grow as much as SGD on ImageNet. Since the variance depends on the optimization trajectory, these curves are not directly comparable. Rather the general trend should be studied. We also measure the variance and normalized variance at fixed snapshots during training by evaluating multiple gradient estimates using each quantization method. All methods are evaluated on the same trajectory traversed by the single-GPU SGD. These plots answer this specific question: What would the variance of the first gradient estimate be if one were to train using SGD for any number of iterations then continue the optimization using another method? The entire future trajectory may change by taking a single good or bad step. We can study the variance along any trajectory. However, the trajectory of SGD is particularly interesting because it covers a subset of points in the parameter space that is likely to be traversed by any first-order optimizer. For multi-dimensional parameter space, we average the variance of each dimension. Figure 5 (left), shows the variance of the gradient estimates on the trajectory of single-GPU SGD on CIFAR10. We observe that QSGD has particularly high variance, while QSGDinf and NUQSGD have lower variance than single-GPU SGD. We also propose another measure of stochasticity, normalized variance, that is the variance normalized by the norm of the gradient. The mean normalized variance can be expressed as where l(w; z) denotes the loss of the model parametrized by w on sample z and subscript A refers to randomness in the algorithm, e.g., randomness in sampling and quantization. Normalized variance can be interpreted as the inverse of Signal to Noise Ratio (SNR) for each dimension. We argue that the noise in optimization is more troubling when it is significantly larger than the gradient. For sources of noise such as quantization that stay constant during training, their negative impact might only be observed when the norm of the gradient becomes small. Figure 5 (right) shows the mean normalized variance of the gradient versus training iteration. Observe that the normalized variance for QSGD stays relatively constant while the unnormalized variance of QSGD drops after the learning rate drops. It shows that the quantization noise of QSGD can cause slower convergence at the end of the training than at the beginning. In Figure 7, we show the mean normalized variance of the gradient versus training iteration on CIFAR10 and ImageNet. For different methods, the variance is measured on their own trajectories. Since the variance depends on the optimization trajectory, these curves are not directly comparable. Rather the general trend should be studied. ResNet152 Weak Scaling. In Figure 8, we present the weak scaling for ResNet152/ImageNet. Each of the GPUs receives a batch of size 8, and we therefore scale up the global batch size by the number of nodes. The exhibit the same superior scaling behavior for NUQSGD relative to the uncompressed baseline. EF-SignSGD Convergence. 
In Figure 9, we present a performance comparison for NUQSGD variants (bucket size 512) and a convergent variant of EF-SignSGD with significant levels of parameter tuning for convergence. We believe this to be the first experiment to show convergence of the latter method at ImageNet scale, as the original paper only considers the CIFAR dataset. For convergence, we have tuned the choice of scaling factor and the granularity at which quantization is applied (bucket size). We have also considered learning rate tuning, but that did not appear to prevent divergence in the early stages of training for this model. We did not attempt warm start, since that would significantly decrease the practicality of the algorithm. We have found that bucket size 64 is the highest at which the algorithm will still converge on this model and dataset, and found 1-bit SGD scaling , which consists of taking sums over positives and over negatives for each bucket, to yield good . The experiments are executed on a machine with 8 NVIDIA Titan X GPUs, and batch size 256, and can be found in Figure 9. Under these hyperparameter values the EF-SignSGD algorithm sends 128 bits per each bucket of 64 values (32 for each scaling factor, and 64 for the signs), doubling its baseline communication cost. Moreover, the GPU implementation is not as efficient, as error feedback must be computed and updated at every step, and there is less parallelism to leverage inside each bucket. This explains the fact that the end-to-end performance is in fact close to that of the 8-bit NUQSGD variant, and inferior to 4-bit NUQSGD. In the following theorem, we show that for any given set of levels, there exists a distribution of points with dimension d such that the variance is in Ω(√ d), and so our bound is tight in d. The variance optimization problem R 2 is an integer nonconvex problem. We can obtain an upper bound on the optimal objective of problem R 2 by relaxing the integer constraint as follows. The ing QSQP is shown as follows: Note that problem Q 1 can be solved efficiently using standard standard interior point-based solvers, e.g., CVX . In the following, we develop a coarser analysis that yields an upper bound expressed as the optimal value to an LP. Theorem 8 (LP bound). Let v ∈ R d. An upper bound on the nonuniform quantization of v is given by ε LP v 2 where ε LP is the optimal value of the following LP: Corollary 2 (Optimal level). For the special case with s = 1, the optimal level to minimize the worst-case bound obtained from problem P 1 is given by l * 1 = 1/2. Proof. For s = 1, problem P 1 is given by P 0: max Note that the objective of P 0 is monotonically increasing in (d 0, d 1). It is not difficult to verify that the optimal (d * 0, d * 1) is a corner point on the boundary line of the feasibility region of P 0. Geometrical representation shows that that candidates for an optimal solution are (d − (1/l 1) 2, (1/l 1) 2 ) and (d, 0). Substituting into the objective of P 0, the optimal value of P 0 is given by Finally, note that τ 0 = τ 1 = 1/2 minimizes the optimal value of P 0. In this section, we focus on the special case of exponentially spaced collection of levels of the form L p = (0, p s, · · ·, p 2, p, 1) for p ∈ and an integer number of levels, s. In this case, we have τ 0 = p s and τ j = (1 − p)p s− j for j = 1, · · ·, s. For any given s and d, we can solve the corresponding quadratic and linear programs efficiently to find the worst-case variance bound. 
As a bonus, we can find the optimal value of p that minimizes the worst-case variance bound. In Figure 10, we show the numerical obtained by solving QCQP Q 1 with L p versus p using CVX . In Figure 10 (left), we fix d and vary s, while in Figure 10 (right), we fix s and vary d. As expected, we note that the variance upper bound increases as d increases and the variance upper bound decreases as s increases. We observe that our current scheme is nearly optimal (in the worst-case sense) in some cases. Further, the optimal value of p shifts to the right as d increases and shifts to the left as s increases. We can obtain convergence guarantees to various learning problems where we have convergence guarantees for SGD under standard assumptions. On nonconvex problems, (weaker) convergence guarantees can be established along the lines of, e.g., (, Theorem 2.1). In particular, NUQSGD is guaranteed to converge to a local minima for smooth general loss functions. Theorem 9 (NUQSGD for smooth nonconvex optimization). Let f: Ω → R denote a possibly nonconvex and β -smooth function. Let w 0 ∈ Ω denote an initial point, ε Q be defined as in Theorem 2, T ∈ Z >0, and f * = inf w∈Ω f (w). Suppose that Algorithm 1 is executed for T iterations with a learning rate α = O(1/β) on K processors, each with access to independent stochastic gradients of f with a second-moment bound B. Then there exists a random stopping time R ∈ {0, · · ·, T} such that NUQSGD guarantees E[∇ f (w R) 2 ] ≤ ε where ε = O β (f (w 0) − f * )/T + (1 + ε Q)B.
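The same kind of comparison can also be approximated empirically without a QCQP solver: the short numpy sketch below estimates the normalised quantisation error for the exponentially spaced levels L_p at a few values of p by Monte Carlo over a random Gaussian vector. It only illustrates the trend on a typical input, not the worst-case bound computed above.

```python
import numpy as np

def quantize_levels(v, levels, rng):        # stochastic rounding to an arbitrary increasing level set
    norm = np.linalg.norm(v)
    r = np.abs(v) / norm
    idx = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    p_up = (r - lo) / (hi - lo)
    h = np.where(rng.random(v.shape) < p_up, hi, lo)
    return np.sign(v) * norm * h

rng = np.random.default_rng(0)
d, s = 10_000, 4
v = rng.normal(size=d)
for p in (0.25, 0.5, 0.75):
    levels = np.concatenate(([0.0], p ** np.arange(s, -1, -1.0)))   # L_p = (0, p^s, ..., p, 1)
    err = np.mean([np.sum((quantize_levels(v, levels, rng) - v) ** 2) for _ in range(50)])
    print(p, err / np.sum(v ** 2))          # empirical normalised variance for each p
```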
NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf.
The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain. The ing self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning. Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent. Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity. We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks. In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task (controlling for the number of parameters). We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks. Neural networks that deal with temporally extended tasks must be able to store traces of past events. Often this memory of past events is maintained by neural activity reverberating through recurrent connections; other methods for handling temporal information exist, including memory networks BID36 or temporal convolutions BID21. However, in nature, the primary basis for long-term learning and memory in the brain is synaptic plasticity -the automatic modification of synaptic weights as a function of ongoing activity BID16 BID14. Plasticity is what enables the brain to store information over the long-term about its environment that would be impossible or impractical for evolution to imprint directly into innate connectivity (e.g. things that are different within each life, such as the language one speaks).Importantly, these modifications are not a passive process, but are actively modulated on a momentto-moment basis by dedicated systems and mechanisms: the brain can "decide" where and when to modify its own connectivity, as a function of its inputs and computations. This neuromodulation of plasticity, which involves several chemicals (particularly dopamine; BID1 He et al. 2015; BID13 BID43, plays an important role in learning and adaptation BID22 BID32). By allowing the brain to control its own modification as a function of ongoing states and events, the neuromodulation of plasticity can filter out irrelevant events while selectively incorporating important information, combat catastrophic forgetting of previously acquired knowledge, and implement a self-contained reinforcement learning algorithm by altering its own connectivity in a reward-dependent manner BID31 BID24 BID7; BID19 BID4 BID38.The complex organization of neuromodulated plasticity is not accidental: it from a long process of evolutionary optimization. Evolution has not only designed the general connection pattern of the brain, but has also sculpted the machinery that controls neuromodulation, endowing the brain with carefully tuned self-modifying abilities and enabling efficient lifelong learning. 
In effect, this coupling of evolution and plasticity is a meta-learning process (the original and by far most powerful example of meta-learning), whereby a simple but powerful optimization process (evolution guided by natural selection) discovered how to arrange elementary building blocks to produce remarkably efficient learning agents. Taking inspiration from nature, several authors have shown that evolutionary algorithms can design small neural networks (on the order of hundreds of connections) with neuromodulated plasticity (see the "Related Work" section below). However, many of the spectacular recent advances in machine learning make use of gradient-based methods (which can directly translate error signals into weight gradients) rather than evolution (which has to discover the gradients through random weight-space exploration). If we could make plastic, neuromodulated networks amenable to gradient descent, we could leverage gradient-based methods for optimizing and studying neuromodulated plastic networks, expanding the abilities of current deep learning architectures to include these important biologically inspired self-modifying abilities. Here we build on the differentiable plasticity framework BID19 BID20 to implement differentiable neuromodulated plasticity. As a , for the first time to our knowledge, we are able to train neuromodulated plastic networks with gradient descent. We call our framework backpropamine in reference to its ability to emulate the effects of natural neuromodulators (like dopamine) in artificial neural networks trained by backpropagation. Our experimental establish that neuromodulated plastic networks outperform both non-plastic and non-modulated plastic networks, both on simple reinforcement learning tasks and on a complex language modeling task involving a multi-million parameter network. By showing that neuromodulated plasticity can be optimized through gradient descent, the backpropamine framework potentially provides more powerful types of neural networks, both recurrent and feedforward, for use in all the myriad domains in which neural networks have had tremendous impact. Neuromodulated plasticity has long been studied in evolutionary computation. Evolved networks with neuromodulated plasticity were shown to outperform both non-neuromodulated and non-plastic networks in various tasks (e.g. BID34 BID26 see Soltoggio et al. 2017 for a review). A key focus of neuromodulation in evolved networks is the mitigation of catastrophic forgetting, that is, allowing neural networks to learn new skills without overwriting previously learned skills. By activating plasticity only in the subset of neural weights relevant for the task currently being performed, knowledge stored in other weights about different tasks is left untouched, alleviating catastrophic forgetting BID4 BID38. However, evolved networks were historically relatively small and operated on low-dimensional problem spaces. The differentiable plasticity framework BID18 BID20 allows the plasticity of individual synaptic connections to be optimized by gradient descent, in the same way that standard synaptic weights are. However, while it could improve performance in some tasks over recurrence without plasticity, this method only facilitated passive, non-modulated plasticity, in which weight changes occur automatically as a function of pre-and post-synaptic activity. 
Here we extend this framework to implement differentiable neuromodulated plasticity, in which the plasticity of connections can be modulated moment-to-moment through a signal computed by the network. This extension allows the network itself to decide over its lifetime where and when to be plastic, endowing the network with true self-modifying abilities. There are other conceivable though more complex approaches for training self-modifying networks. For example, the weight modifications can themselves be computed by a neural network BID30 BID28 BID23 BID42. However, none so far have taken the simple approach of directly optimizing the neuromodulation of plasticity itself within a single network, through gradient descent instead of evolution, as investigated here. The present work builds upon the existing differentiable plasticity framework BID18 BID20, which allows gradient descent to optimize not just the weights, but also the plasticity of each connection. In this framework, each connection in the network is augmented with a Hebbian plastic component that grows and decays automatically as a of ongoing activity. In effect, each connection contains a fixed and a plastic component: DISPLAYFORM0 DISPLAYFORM1 where x i (t) is the output of neuron i at time t, σ is a nonlinearity (we use tanh in all experiments), w i,j is the baseline (non-plastic) weight of the connection between neurons i and j, and α i,j is the plasticity coefficient that scales the magnitude of the plastic component of the connection. The plastic content is represented by the Hebbian trace Hebb i,j, which accumulates the product of pre-and post-synaptic activity at connection i, j, as shown in Eq. 2.Hebb i,j is initialized to zero at the beginning of each episode/lifetime, and is updated automatically according to Eq. 2: it is a purely episodic/intra-life quantity. By contrast, w i,j, α i,j and η are the structural components of the network, which are optimized by gradient descent between episodes/lifetimes to minimize the expected loss over an episode. The function Clip(x) in Eq. 2 is any function or procedure that constrains Hebb i,j to the [−1, 1] range, to negate the inherent instability of Hebbian learning. In previous work BID20, this function was either a simple decay term, or a normalization implementing Oja's rule BID25. In the present paper it is simply a hard clip (x ← 1 if x > 1; x ← −1 if x < −1). Compared to previously used operations, this simple operation turned out to produce equal or superior performance on the tasks in this paper. 
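For concreteness, a minimal PyTorch sketch of a plastic recurrent cell implementing the fixed-plus-plastic connection (Eq. 1) and the clipped Hebbian update (Eq. 2) is shown below. The hidden size, initialisation scales and the omission of external inputs are illustrative assumptions, not the experimental architecture.

```python
import torch

class PlasticRNNCell(torch.nn.Module):
    def __init__(self, n):
        super().__init__()
        self.w = torch.nn.Parameter(0.01 * torch.randn(n, n))      # fixed component w_ij
        self.alpha = torch.nn.Parameter(0.01 * torch.randn(n, n))  # plasticity coefficients alpha_ij
        self.eta = torch.nn.Parameter(torch.tensor(0.01))          # intra-life learning rate eta

    def forward(self, x_prev, hebb):
        # Eq. 1: activations through the fixed and plastic components
        x = torch.tanh(torch.bmm(x_prev.unsqueeze(1), self.w + self.alpha * hebb).squeeze(1))
        # Eq. 2: Hebbian trace accumulates pre*post activity, hard-clipped to [-1, 1]
        hebb = torch.clamp(hebb + self.eta * torch.bmm(x_prev.unsqueeze(2), x.unsqueeze(1)), -1.0, 1.0)
        return x, hebb

cell = PlasticRNNCell(n=200)
x = 0.1 * torch.randn(1, 200)          # batch of 1
hebb = torch.zeros(1, 200, 200)        # episodic trace, reset at the start of each episode
for t in range(10):
    x, hebb = cell(x, hebb)
```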
Note the distinction between the η and α i,j parameters: η is the intra-life "learning rate" of plastic connections, which determines how fast new information is incorporated into the plastic component, while α i,j is a scale parameter, which determines the maximum magnitude of the plastic component (since Hebb i,j is constrained to the [-1,1] range).Importantly, in contrast to other approaches using uniform plasticity BID29, including "fast weights" BID0, the amount of plasticity in each connection (represented by α i,j) is trainable, allowing the meta-optimizer to design complex learning strategies (see BID20 for a discussion of this point, and experimental comparisons that demonstrate and explain superior performance of differentiable plasticity over uniform-plastic networks).An important aspect of differentiable plasticity is extreme ease of implementation: implementing a plastic recurrent network only requires less than four additional lines of code over a standard recurrent network implementation BID20 ). The Backpropamine framework described below inherits this simplicity; in particular, the "simple neuromodulation" approach does not require any additional code over differentiable plasticity, but merely a modification of it. Two methods are proposed to introduce neuromodulated plasticity within the differentiable plasticity framework. In both cases, plasticity is modulated on a moment-to-moment basis by a networkcontrolled neuromodulatory signal M (t). The computation of M (t) could be done in various ways; at present, it is simply a single scalar output of the network, which is used either directly (for the simple RL tasks) or passed through a meta-learned vector of weights (one for each connection, for the language modeling task). We now explain how the equations of differentiable plasticity are modified to make use of this neuromodulatory signal. The simplest way to introduce neuromodulation of plasticity in this framework is to make the (global) η parameter depend on the output of one or more neurons in the network. Because η essentially determines the rate of plastic change, placing it under network control allows the network to determine how plastic connections should be at any given time. Thus, the only modification to the equations above in this simple neuromodulation variant is to replace η in Eq. 2 with the network-computed, time-varying neuromodulatory signal M (t). That is, Eq. 2 is replaced with DISPLAYFORM0 More complex schemes are possible. In particular, we introduce an alternative neuromodulation scheme that takes inspiration from the short-term retroactive effects of neuromodulatory dopamine on Hebbian plasticity in animal brains. In several experiments, dopamine was shown to retroactively gate the plasticity induced by past activity, within a short time window of about 1s BID43; BID6 BID2. Thus, Hebbian plasticity does not directly modify the synaptic weights, but creates a fast-decaying "potential" weight change, which is only incorporated into the actual weights if the synapse receives dopamine within a short time window. As a , biological Hebbian traces essentially implement a so-called eligibility trace BID37, keeping memory of which synapses contributed to recent activity, while the dopamine signal modulates the transformation of these eligibility traces into actual plastic changes. Such mechanisms have been modelled in computational neuroscience studies, e.g. 
(; ; BID5 BID33 BID19) (see BID11 for a recent review of this concept).Our framework easily accommodates this more refined model of dopamine effects on plasticity. We simply replace Eq. 2 above with the two equations, DISPLAYFORM0 Here E i,j (t) (the eligibility trace at connection i, j) is a simple exponential average of the Hebbian product of pre-and post-synaptic activity, with trainable decay factor η. Hebb i,j (t), the actual plastic component of the connection (see Eq. 1), simply accumulates this trace, but gated by the current value of the dopamine signal M (t). Note that M (t) can be positive or negative, approximating the effects of both rises and dips in the baseline dopamine levels BID31. Our first test task is a simple meta-learning problem that emulates an animal behavioral learning task, as described in FIG2.1 (Left). In each episode, one of four input cues is arbitrarily chosen as the Target cue. Repeatedly, the agent is shown two cues in succession, randomly chosen from the possible four, then a Response cue during which the agent must respond 1 if the Target cue was part of the pair, or 0 otherwise. A correct response produces a reward of 1.0, while an incorrect response returns reward -1.0 (this is a two-alternative forced choice task: a response of either 1 or 0 is always produced). This process iterates for the duration of the episode, which is 200 time steps. The cues are binary vectors of 20 bits, randomly generated at the beginning of each episode. To prevent simple time-locked scheduling strategies, a variable number of zero-input time steps are randomly inserted, including at least one after each presentation of the Go cue; as a , the length of each trial varies, and the number of trials per episode is somewhat variable (the mean number of trials per episode is 15).The architecture is a simple recurrent network with 200 neurons in the hidden recurrent layer. Only the recurrent layer is plastic: input and output weights are non-plastic, having only w i,j coefficients. There are 24 inputs: 20 binary inputs for the current cue and one input providing the time elapsed since the start of the episode, as well as two binary inputs for the one-hot encoded response at the previous time step and one real-valued channel for the reward received at the previous time step, in accordance with common meta-learning practice BID39 BID3. There are four outputs: two binary outputs for the one-hot encoded response, plus an output neuron that predicts the sum of future discounted rewards V (t) over the remainder of the episode (as mandated by the A2C algorithm that we use for meta-training, following Wang et al. FORMULA0), and the neuromodulatory signal M (t). The two response outputs undergo a softmax operation to produce probabilities over the response, while the M (t) signal is passed through a tanh nonlinearity and the V (t) output is a pure linear output. All gradients are clipped at norm 7.0, which greatly improved stability. Training curves are shown in FIG2.1 (Right; each curve shows the median and inter-quartile range over 10 runs). Neuromodulatory approaches succeed in learning the task, while non-neuromodulatory networks BID18 BID20 ) and non-plastic, simple recurrent networks fail to learn it. 
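For reference, the two modulation schemes compared in this experiment (Eq. 3 and Eqs. 4-5) amount to small variants of the Hebbian-trace update in the plasticity sketch given earlier. The snippet below is again illustrative: M denotes the network-computed modulatory output, and the exact parameterization of the eligibility-trace average is an assumption based on the prose description.

```python
import torch

def modulated_hebbian_update(hebb, elig, x_prev, x, M, eta):
    """Neuromodulated trace updates (batched). hebb, elig: (B, n, n);
    x_prev, x: (B, n); M: (B,) modulatory signal; eta: trainable decay factor."""
    outer = torch.bmm(x_prev.unsqueeze(2), x.unsqueeze(1))   # pre * post products
    m = M.view(-1, 1, 1)

    # Simple neuromodulation (Eq. 3): M(t) takes the place of eta in Eq. 2.
    hebb_simple = torch.clamp(hebb + m * outer, -1.0, 1.0)

    # Retroactive neuromodulation (Eqs. 4-5): products first accumulate in a
    # fast-decaying eligibility trace, which becomes actual plastic change only
    # when gated by the (possibly negative) dopamine-like signal M(t).
    elig = (1.0 - eta) * elig + eta * outer
    hebb_retro = torch.clamp(hebb + m * elig, -1.0, 1.0)

    return hebb_simple, hebb_retro, elig
```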
We hypothesize that this dramatic difference is related to the relatively high dimensionality of the input cues: just as non-modulated plastic networks seemed to outperform non-plastic networks specifically when required to memorize arbitrary high-dimensional stimuli BID20, neuromodulation seems to specifically help memorizing reward associations with such arbitrary high-dimensional stimuli (see Appendix).To illustrate the behavior of neuromodulation, we plotted the output of the neuromodulator neuron for several trials. These graphs reveal that neuromodulation reacts to reward in a complex, time-dependent manner (see Appendix). For a more challenging problem, we also tested the approach on the grid maze exploration task introduced by BID20. Here, the maze is composed of 9 × 9 squares, surrounded by walls, in which every other square (in either direction) is occupied by a wall. Thus the maze contains 16 wall squares, arranged in a regular grid except for the center square (Figure 2, left). The shape of the maze is fixed and unchanging over the whole task. At each episode, one non-wall square is randomly chosen as the reward location. When the agent hits this location, it receives a reward and is immediately transported to a random location in the maze. Each episode lasts 200 time steps, during which the agent must accumulate as much reward as possible. The reward location is fixed within an episode and randomized across episodes. Note that the reward is invisible to the agent, and thus the agent only knows it has hit the reward location by the activation of the reward input at the next step (and possibly by the teleportation, if it can detect it).The architecture is the same as for the previous task, but with only 100 recurrent neurons. The outputs consist of 4 action channels (i.e. one for each of the possible actions: left, right, up or down) passed through a softmax, as well as the pure linear V (t) output and the M (t) neuromodulatory signal passed through a tanh nonlinearity. Inputs to the agent consist of a binary vector describing the 3 × 3 Figure 2: Maze navigation task. Left: layout of the maze, including an example agent location (yellow) and reward location (green, for illustration only: the reward is not visible to the agent). Right: Training curves for the maze exploration task: median and inter-quartile range of reward over 9 runs for each episode. Cyan stars (bottom) indicate statistically significant difference between simple neuromodulation and non-modulated plasticity at p < 0.05 (Wilcoxon rank-sum test).neighborhood centered on the agent (each element being set to 1 or 0 if the corresponding square is or is not a wall), plus four additional inputs for the one-hot encoded action taken at the previous time step, and one input for the reward received at the previous time step, following common practice BID39. Again, only recurrent weights are plastic: input-to-recurrent and recurrent-to-output weights are non-plastic. Results in Figure 2 show that modulatory approaches again outperform non-modulated plasticity. Word-level language modeling is a supervised learning sequence problem, where the goal is to predict the next word in a large language corpus. Language modeling requires storing long term context, and therefore LSTM models generally perform well on this task BID44. The goal of this experiment is to study the benefits of adding plasticity and neuromodulation to LSTMs. 
The Penn Tree Bank corpus (PTB), a well known benchmark for language modeling BID15, is used here for comparing different models. The dataset consists of 929k training words, 73k validation words, and 82k, test words, with a vocabulary of 10k words. For this task, we implemented neuromodulated plasticity in two different models: a basic model with 4.8 million parameters inspired from BID44, and a much larger and more complex model with 24.2 million parameters, forked from. The smaller model allowed for more experimentation, while the larger model showcases the of neuromodulation on a complex model with (at the time of writing) state-of-the-art performance. Detailed experimental descriptions are provided in the Appendix, summarized here: For the basic model, each network consists of an embedding layer, followed by two LSTM layers (approximately of size 200). The size of the LSTM layers is adjusted to ensure that the total number of trainable parameters remains constant across all experiments; note that this includes all plasticity-related additional parameters, i.e. α i,j as well as the additional parameters related to neuromodulation (see Appendix). The final layer is a softmax layer of size 10k. The network is unrolled for 20 time steps during backpropagation through time BID41. The norm of the gradient is clipped at 5. This setup is similar to the non-regularized model described by BID44. One difference is that an extra L2 penalty is added to the weights of the network here (adding this penalty consistently improves for all the models). FORMULA0 is that we do not implement recurrent dropout (feedforward dropout is preserved) and reduce batch size to 7 due to computational limitations. Other hyperparameters are taken "as is" without any tuning. See Appendix for details. Four versions of the smaller model are evaluated here TAB0. FORMULA0 The Baseline LSTM model (described in the previous paragraph) 1. LSTM with Differentiable Plasticity: there are four recurrent connections in each LSTM node and here, plasticity is added to one of them (see A.1 for details) as per equations 1 and 2. Because the number of plastic connections is large, each plastic connection has its own individual η so that their values can be individually tuned by backpropagation. LSTM with Simple Neuromodulation: here simple neuromodulation is introduced following equation 3. The η parameters are replaced by the output of a neuron M (t). M (t) itself receives as input a weighted combination of the hidden layer's activations, where the weights are learned in the usual way. There is one M (t) associated with each LSTM layer. LSTM with Retroactive Neuromodulation: this model is the same as the LSTM with Simple Neuromodulation, except it uses the equations that enable eligibility traces (equations 4 and 5). Additional details for the plastic and neuromodulated plastic LSTMs are described in the Appendix. For each of the four models, we separately searched for the best hyperparameters with equallypowered grid-search. Each model was then run 16 times with its best hyperparameter settings. The mean test perplexity of these 16 runs along with the 95% confidence interval is presented in TAB0. Results show that adding differentiable plasticity to LSTM provides slightly, but statistically significantly better than the Baseline LSTM (Wilcoxon rank-sum test, p = 0.0044). Adding neuromodulation further (and statistically significantly) lowers the perplexity over and above the LSTM with differential plasticity (p = 1e − 6). 
Overall, retroactive neuromodulation provides about 1.7 perplexity improvement vs. the Baseline LSTM (statistically significant, p = 1e − 7). Retroactive neuromodulation (i.e. with eligibility traces) does outperform simple neuromodulation, but the improvement is just barely not statistically significant at the traditional p < 0.05 cutoff (p = 0.066). Note that while these figures are far from state-of-the-art (which use considerably larger, more complex architectures), they all still outperform published work using similar architectures BID44.For the larger model, we compare a version in which the core LSTM module has been reimplemented to have neuromodulated plastic connections (simple neuromdulation only; no retroactive modulation was implemented), and a baseline model that uses the same LSTM reimplementation but without the plasticity and modulation, in order to make the comparison as equal as possible. Note that in this model, plasticity coefficients are attributed "per neuron": there is only one α i for each neuron i (as opposed to one per connection), which is applied to all the Hebbian traces of the connections incoming to this neuron. This helps limit the total number of parameter. See the Appendix for a more complete description. The modulated plastic model shows a small improvement over the non-plastic version TAB0, confirming the obtained with the smaller model. This paper introduces a biologically-inspired method for training networks to self-modify their weights. Building upon the differentiable plasticity framework, which already improved performance (sometimes dramatically) over non-plastic architectures on various supervised and RL tasks BID18 BID20, here we introduce neuromodulated plasticity to let the network control its own weight changes. As a , for the first time, neuromodulated plastic networks can be trained with gradient descent, opening up a new research direction into optimizing large-scale self-modifying neural networks. As a complement to the benefits in the simple RL domains investigated, our finding that plastic and neuromodulated LSTMs outperform standard LSTMs on a benchmark language modeling task (importantly, a central domain of application of LSTMs) is potentially of great importance. LSTMs are used in real-world applications with massive academic and economic impact. Therefore, if plasticity and neuromodulation consistently improve LSTM performance (for a fixed search space size), the potential benefits could be considerable. We intend to pursue this line of investigation and test plastic LSTMs (both neuromodulated and non) on other problems for which LSTMs are commonly used, such as forecasting. Conceptually, an important comparison point is the "Learning to Reinforcement Learn" (L2RL) framework introduced by BID39. In this meta-learning framework, the weights do not change during episodes: all within-episode learning occurs through updates to the activity state of the network. This framework is explicitly described BID40 as a model of the slow sculpting of prefrontal cortex by the reward-based dopamine system, an analogy facilitated by the features of the A2C algorithm used for meta-training (such as the use of a value signal and modulation of weight changes by a reward prediction error). As described in the RL experiments above, our approach adds more flexibility to this model by allowing the system to store state information with weight changes, in addition to hidden state changes. 
However, because our framework allows the network to update its own connectivity, we might potentially extend the L2RL model one level higher: rather than using A2C as a hand-designed reward-based weight-modification scheme, the system could now determine its own arbitrary weight-modification scheme, which might make use of any signal it can compute (reward predictions, surprise, saliency, etc.) This emergent weight-modifying algorithm (designed over many episodes/lifetimes by the "outer loop" meta-training algorithm) might in turn sculpt network connectivity to implement the meta-learning process described by BID40. Importantly, this additional level of learning (or "meta-meta-learning") is not just a pure flight of fancy: it has undoubtedly taken place in evolution. Because humans (and other animals) can perform meta-learning ("learning-to-learn") during their lifetime (; BID40, and because humans are themselves the of an optimization process (evolution), then meta-meta-learning has not only occurred, but may be the key to some of the most advanced human mental functions. Our framework opens the tantalizing possibility of studying this process, while allowing us to replace evolution with any gradient-based method in the outermost optimization loop. To investigate the full potential of our approach, the framework described above requires several improvements. These include: implementing multiple neuromodulatory signals (each with their own inputs and outputs), as seems to be the case in the brain BID12; BID27; introducing more complex tasks that could make full use of the flexibility of the framework, including the eligibility traces afforded by retroactive modulation and the several levels of learning mentioned above; and addressing the pitfalls in the implementation of reinforcement learning with reward-modulated Hebbian plasticity (e.g. the inherent interference between the unsupervised component of Hebbian learning and reward-based modifications; BID9 BID8, so as to facilitate the automatic design of efficient, selfcontained reinforcement learning systems. Finally, it might be necessary to allow the meta-training algorithm to design the overall architecture of the system, rather than simply the parameters of a fixed, hand-designed architecture. With such a rich potential for extension, our framework for neuromodulated plastic networks opens many avenues of exciting research. DISPLAYFORM0 j t, f t and o t are used for controlling the data-flow through the LSTM and i t is the actual data. Therefore, plasticity is introduced in the path that goes through i t (adding plasticity to the control paths of LSTM is for future-work). The corresponding pre-synaptic and post-synaptic activations (denoted by x i (t − 1) and x j (t) respectively in equations 1 and 2) are h t−1 and i t. A layer of size 200 has 40k (200×200) plastic connections. Each plastic connection has its own individual η (used in equation 2) that is learned through backpropagation. The plasticity coefficients (α i,j) are used as shown in equation 1. As shown in equation 3, for simple neuromodulation, the η is replaced by the output of a network computed neuron M (t). For neuromodulated LSTMs, individual η for each plastic connection is replaced by the output of a neuron (M (t)) that has a fan-out equal to the number of plastic connections. The input to this neuron is the activations h t−1 of the layer from the previous time-step. Each LSTM layer has its dedicated neuromodulatory neuron. 
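A rough sketch of the plastic data path just described: only the recurrent weights feeding the candidate activation i_t carry a Hebbian component, with h_{t-1} as the pre-synaptic and i_t as the post-synaptic activity and a per-connection learning rate. Gates, biases, dropout and the neuromodulated variant (which replaces eta with the layer's M(t) signal) are omitted; names and shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch

def plastic_candidate_step(x_t, h_prev, W_x, U_h, alpha, eta, hebb):
    """Candidate activation i_t of one plastic LSTM layer.
    x_t: (B, d_in) input; h_prev: (B, n) previous hidden state; W_x: (d_in, n);
    U_h, alpha, eta: (n, n) recurrent weights, plasticity coefficients and
    per-connection learning rates; hebb: (B, n, n) traces for the h -> i_t path."""
    # plastic recurrent contribution: (U_h + alpha * Hebb) applied to h_{t-1}
    rec = torch.bmm(h_prev.unsqueeze(1), U_h + alpha * hebb).squeeze(1)
    i_t = torch.tanh(x_t @ W_x + rec)
    # trace update as in Eq. 2, with pre = h_{t-1} and post = i_t
    hebb = torch.clamp(hebb + eta * torch.bmm(h_prev.unsqueeze(2), i_t.unsqueeze(1)),
                       -1.0, 1.0)
    return i_t, hebb
```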
Other variations of this setting include having one dedicated neuromodulatory neuron per node or having one neuromodulatory neuron for the whole network. Preliminary experiments showed that these variations performed worse and therefore they were not further evaluated. All the four models presented in TAB0 are trained using SGD. Initial learning rate was set 1.0. Each model is trained for 13 epochs. The hidden states of LSTM are initialized to zero; the final hidden states of the current minibatch are used as the initial hidden states of the subsequent minibatch. In addition to the previous model, we also applied the Backpropamine framework to the much larger, state-of-the-art model described by. This model consists of three stacked LSTMs with 115, 1150 and 400 cells each, with an input embedding of size 400 and an output softmax layer that shares weights with the input embedding. The model makes use of numerous optimization and regularization techniques. Connections between successive LSTMs implement "variational" dropout, in which a common dropout mask is used for the entire forward and backward pass BID10. Backpropagation through time uses a variable horizon centered on 70 words. After 45 epochs, the optimizer switches from SGD (without momentum) to Averaged-SGD, which consists in computing standard SGD steps but taking the average of the ing successive updated weight vectors. This is all in accordance with. The only differences are that we do not implement weight-dropout in recurrent connections, force the switch to ASGD at 45 epochs for all runs of all models, and limit batch size to 7 due to computational restrictions. Plasticity coefficients are attributed "per neuron": rather than having and independent α i,j for each connection, each neuron i has a plasticity coefficient α i that is applied to all its incoming connection (note that Hebbian traces Hebb i,j are still individually maintained for each connection). This reduces the number of trainable parameters, since α is now a vector of length N rather than a matrix of size N × N (where N is the number of recurrent neurons).We implement simple neuromodulation as described in Equation 3. A single neuromodulator neuron with tanh nonlinearity receives input from all recurrent neurons. This neuromodulator input is then passed through a vector of weights, one per neuron, to produce a different η i for each neuron. In other words, different neurons i have different η i, but these are all fixed multiples of a common value. This is an intermediate solution between having a single η(t) for the whole network, and independently computing a separate η i for each neuron, each with its own input weights (which would require N × N weights, rather than 2 × N for the current solution). Neuromodulation is computed separately to each of the three LSTMs in the model. For the non-plastic network, the total number of trainable parameters is 24 221 600. For the neuromodulated plastic version, we reduce the number of hidden cells in LSTMs from 1150 to 1149, which suffices to bring the total number of parameters down to 24 198 893 trainable parameters (rather than 24 229 703 for 1150-cell LSTMs).All other hyperparameters are taken from , using the instructions provided on the code repository for their model, available at https://github.com/salesforce/ awd-lstm-lm. We did not perform any hyperparameter tuning due to computational constraints. 
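The parameter-saving scheme of the larger model can be summarized in two lines per layer: plasticity coefficients are stored per post-synaptic neuron, and the per-neuron learning rates are fixed multiples of a single modulatory signal, so neuromodulation costs only 2N extra weights. The sketch below is an assumption-laden illustration of that bookkeeping, not the released code.

```python
import torch

def per_neuron_modulation(h, w_mod, eta_scale, alpha_vec, hebb):
    """h: (B, N) recurrent activities; w_mod, eta_scale, alpha_vec: (N,);
    hebb: (B, N, N) Hebbian traces indexed [pre, post]."""
    mod = torch.tanh(h @ w_mod)            # single neuromodulator neuron, shape (B,)
    eta = mod.unsqueeze(1) * eta_scale     # (B, N): every eta_i is a multiple of mod
    # per-neuron alpha_i scales all traces incoming to neuron i (the post index);
    # in the trace update, eta broadcasts over the same post index.
    plastic_part = alpha_vec.view(1, 1, -1) * hebb
    return eta, plastic_part
```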
To illustrate the behavior of neuromodulation, we plot the output of the neuromodulator neuron for random trials from several runs of Task 1 (Figure 3). All runs are from well-trained, highly successful network, as seen by the low proportion of negative rewards. For each run, we plot both the value of the neuromodulator output at each time step, and the reward being currently perceived by the network (i.e. the one produced by the response at the previous time step).The plots reveal rich, complex dynamics that vary greatly between runs. The modulator neuron clearly reacts to reward; however, this reaction is complex, time-dependent and varies from run to run. The topmost run for retroactive modulation tends to produce negative neuromodulation in response to positive reward, and vice-versa; while the second-to-last run for simple neuromodulation tends to to the opposite. A common pattern is to produce negative neuromdulation on the time step just following reward perception (especially for simple neuromodulation). Two of the runs for retroactive modulation exhibit a pattern where reward perception is followed by highly positive, then highly negative neuromodulation. Understanding the mechanism by which these complex dynamics perform efficient within-episode learning is an important direction for future work. In the cue-reward association learning task described above, neuromodulated plasticity was able to learn a task that non-modulated plasticity simply could not. What might be the source of this difference? In a previous experiment, we implemented the same task, but using only four fixed 4-bit binary cues for the entire task, namely,'1000','0100','0010' and'0001'. In this simplified version of the task, there is no need to memorize the cues for each episode, and the only thing to be learned for each episode is which of the four known cues is associated with reward. This is in contrast with the version used in the paper above, in which the cues are arbitrary 20-bits vectors randomly generated for each episode. With the fixed, four-bit cues, non-modulated plasticity was able to learn the task, though somewhat more slowly than neuromodulated plasticity (see FIG2).This suggests neuromodulated plasticity could have a stronger advantage over non-modulated plasticity specifically in situations where the association to be learned involves arbitrary high-dimensional cues, which must be memorized jointly with the association itself. This echoes the of BID20, who suggest that plastic networks outperform non-plastic ones specifically on tasks requiring the fast memorization of high-dimensional inputs (e.g. image memorization and reconstruction task in BID20).Clearly, more work is needed to investigate which problems benefit most from neuromodulated plasticity, over non-modulated or non-plastic approaches. We intend to pursue this line of research in future work. Figure 3: Dynamics of neuromodulation. For both simple and retroactive modulation, we show one trial from each of 4 runs. At each time step, gray squares indicate which cue was shown; orange curves indicates reward at the previous time step (i.e. the one currently fed to the network), which is always -1, 0 or 1; and blue curves indicates the value of the modulator output at the current time step. Notice the diversity of dynamics. Training curves for the cue-reward association task with fixed, binary four-bit cues (medians and inter-quartile ranges of rewards per episode over 10 runs). 
" Soft clip" refers to a different clipping operation used in Equation 2; "Hard clip" is the same as used in the present paper, i.e. the simple clipping described in Methods. Note that non-modulated plastic network succeed in solving this task.
Neural networks can be trained to modify their own connectivity, improving their online learning performance on challenging tasks.
Deep learning has made remarkable achievement in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only little data are available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for fewshot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used for the measurement of class scatter. During testing, combined with the test sample and the points in the class, a new simplex is formed. Then the similarity between the test sample and the class can be quantized with the ratio of volumes of the new simplex to the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning. Deep learning has exhibited outstanding ability in various disciplines including computer vision, natural language processing and speech recognition BID10. For instance, AlexNet has made a breakthrough on recognizing millions of imagery objects by means of deep Convolutional Neural Network (CNN) BID8. In the past five years, the algorithmic capability of comprehending visual concepts has been significantly improved by elaborately well-designed deep learning architectures BID4 BID22. However, training deep neural networks such as the widely employed CNNs of AlexNet BID8, Inception BID23, VGG BID20, and ResNet BID5, needs the supervision of many class labels which are handcrafted. For example, the number of samples of each class in the ImageNet of object recognition benchmark BID17 is more than one thousand. In fact, the number of labelled samples used for learning parameters of CNNs is far more than that because data augmentation is usually applied. This kind of learning obviously deviates from the manner of human cognition. A child can recognize a new object that she/he has never seen only by several examples, from simple shapes like rectangles to highly semantic animals like tigers. However, deep learning algorithms encounter difficulty in such scenarios where only very sparse data are available for learning to recognize a new category, thus raising the research topic of one-shot learning or few-shot learning BID1 BID25.The seminal work BID2 models few-shot learning with the Bayesian framework. Empirical knowledge of available categories is learned and parameterized as a probability density function. The unseen class with a handful of examples is modeled as the posterior by updating the prior. Bayesian theory provides a simple and elegant idea for solving learning problems with little data. If decomposed into parts or programs, an object can be described by the joint distribution of Bayesian criterion. In this manner, human-level performance on one-shot learning has been derived for discovering simple visual concepts such as ancient handwritten characters BID9.With the prevalence of deep learning, the recent work for few-shot learning focuses on the application of deep neural networks that have more capacity to accommodate the complexity of object representations. 
Siamese neural network facilitates the performance of few-shot recognition by means of twin networks of sharing parameters, optimizing the distances of representative features in intraclasses BID7. The counterpart of learning data structures by distance is also formulated by triplet loss in BID11. Researchers in BID11 assert that the distance metrics can learn the intrinsic manifold structures of training data such that the network is more general and robust when employed for untrained objects. A very recent work pertaining to distance-based optimization, named Prototypical Networks BID21, significantly improves the capability of few-shot recognition. Prototypical Networks attempt to minimize the distance of the test sample to the center of each class and are learned in the end-to-end manner. Memory-augmented architectures are also proposed to help assimilate new classes with more accurate inference BID18. Matching network embeds metric learning in neural network in the light of attention mechanism which is embodied by softmax BID26. In a very recent work, the large-scale memory without the need of resetting during training is formulated as an embedded module for arbitrary neural networks to remember the information of rare events BID6. In order to obtain rapid learning with limited samples, meta learning is exploited both in memory network and matching network. This "learning to learn" technique is extended to deal with few-shot learning from the point of view of optimization BID15. To be specific, a LSTM-based meta learner learns to mimic the exact optimization algorithm and then harnesses the acquired capability to train the learner applied for the few-shot cases. The latest meta learning algorithms also deal with few-shot learning from different angles, e.g. the fast adaptation of neural networks BID3, and temporal convolution BID13.In addition to the application of memory module or attention model in LSTM, there is another type of algorithms digging the effective way of transferring the discriminative power of pre-trained models to few-shot circumstances. Resorting to the correlation between the activations in the last feature layers and the associated parameters for softmax, a transformation is learned to derive the parameters for predicting new classes from corresponding activations BID14.The algorithms based on deep learning can learn more expressive representations for objects, essentially boosting the quality of feature extraction. However, the softmax classifier discriminates all categories by class boundaries, bypassing the steps that carefully characterize the structure of each class. Thus the algorithmic performance will deteriorate grossly if the distribution of new class cannot be accurately modeled by trained networks. Besides softmax, another commonly applied method, k nearest neighbors (KNN), is a point-to-point measurement and is incapable of conveying global structural information. To address this issue, we propose a geometric method for few-shot learning. Our perspective is that accurate geometric characterization for each class is essential when only a handful of samples are available, because such sparse data are usually insufficient to fit well-converged parameterized classifier. To this end, we harness convex polytope to fit a class, in the sense that we construct a convex polytope by selecting the samples in the class as the vertices of the polytope. The volume of the polytope is taken as the measurement of class scatter. 
Thus the polytopal volume may be improved after including the query sample in the test set during the testing trial. The normalized volume with respect to the original counterpart is applied to compute the distance from the test sample to the test set. To highlight the structural details of object parts, we present the construction of polytope based on convolutional feature maps as well. To the best of our understanding, however, there is no exact formula to calculating the volume of general convex polytope. To make our algorithm feasible, therefore, we use the simplest convex polytope -simplex instead. The volume of a simplex can be expressed by the Cayley-Menger determinant BID0, thus casting the problem of few-shot recognition as a simple calculation of linear algebra. Experiments on Omniglot and miniImageNet datasets verify the effectiveness of our simple algorithm. It is known that by proper embedding, the feature representations of images or documents in the same class spatially cluster together. Each feature vector with a length of d, in its nature, corresponds to a point in d-dimensional space. We model each class as a polytope, with feature representation of each data point in this class as its vertex. Our approach is based on the idea that feature vectors of the test sample will be close to the polytope of its own class, formed by feature vectors of the corresponding examples, and distant from the others. Hence, we can perform the classification by finding the'nearest' class polytope to which the test point belongs, using carefully designed distance metrics associated with the content of polytope. As we point out in the introduction, there is no exact solution to computing the volume of a polytope. Therefore, we resort to the simplex to accomplish our idea. The simplex is the simplest counterpart of convex polytope and its volume admits a closed form expression. So we focus our attention on simplex to develop our algorithm. A simplex is the conceptual extension of a triangle in high-dimensional spaces. To be formal, let Y = {y 0, y 1, . . ., y n} denote a set of points in R d. A simplex is the convex polytope with the condition of n = d, implying that there needs exact d + 1 points to constitute a simplex in the d-dimensional space. For convenience, we call such a simplex the d-simplex. For instance, a line is a 1-simplex, a triangle is a 2-simplex, and a tetrahedron is a 3-simplex. Moreover, a line has the length, a triangle has the area, and a tetrahedron has the volume. In convention, we use the contents to represent the length, the area, and the volume BID27.A particularly intriguing property of the simplex is that its content can be written in a closed form by virtue of the Cayley-Menger determinant BID0 ). To show this, let A = [y 1 −y 0, . . ., y n −y 0] and the Cayley-Menger matrixP DISPLAYFORM0 where e denotes the all-one column vector of length n + 1, T presents the transpose of a matrix or a vector, and the entry P ij of the distance matrix P is of form P ij = y i − y j 2. The content of simplexŶ has two expressions that coincide with each other, showing that DISPLAYFORM1 where det(A T A) is the Gram determinant and det(P) is the Cayley-Menger determinant. Our analysis is based on the application of formula. Let Y = {y 1, . . ., y n} be the feature set of an arbitrary class. These features can be derived from outputs of deep neural networks, e.g., CNN. It is clear thatŶ = {Y ∪y 0}. Let t denote a test sample. 
It is clear the content C(Y) of the corresponding simplex 1 will be large if data points in Y are sparse and small if compact. Therefore, C(Y) is a plausible measurement for the class scatter. An exact d-simplex will be formed during testing process if the test sample is merged into Y. Then the associated content will be improved from C(Y) to C(Y ∪ t). The incremental content will be marginal if the feature point of the test sample is close to the class simplex, meaning the high correlation of the spatial proximity. Then the dissimilarity measurement of one test sample to one class can be written as DISPLAYFORM0 Here the numerator C(Y) serves to eliminating the quantitative influence of the class scatter. The normalization is indispensable because for a large C(Y), the incremental content C(Y ∪ t) − C(Y) will be prone to be relatively large even if the test sample is close to the simplex. To make it clear, we explicitly write (t, Y). Let the Cayley-Menger matrix pertaining to simplex Y beQ With the same embedding function, the test sample point t is also mapped to this space. Combined with one class, a new simplex (tetrahedron in the 3-dimensional space) is obtained, located in the space of one dimension higher than the original one. It is not surprising that the test sample point t is close to its own class while distant from the others, representing a certain degree of intra-class similarities and inter-class diversities. DISPLAYFORM1 where Q ij = y i − y j 2 and i, j = 1,..., n. Then the content of the simplex formed by Y can be written as DISPLAYFORM2 Substituting FORMULA1 and FORMULA4 into FORMULA2, we derive DISPLAYFORM3 where y 0 = t is implicitly assumed forP. It is straightforward to know that the smaller (t, Y) is, the closer the test point to the class simplex. To help intuitively understand our algorithm, a visual schematic example is shown in FIG0, where the complete procedure is figuratively demonstrated. It is obvious that Y cannot form a d-simplex due to the number |Y| of data points in Y satisfies |Y| = d, violating the definition of the d-simplex. However, our model for the few-shot learning can proceed without any modification. To make this clear, we need to introduce the isometric embedding of the simplex. The points on a geometric entity R de (manifolds or polytopes) can be expressed with coordinates when the entity is placed in an ambient space R da. Usually, the intrinsic dimension d e is much less than the ambient dimension d a, especially when d a is large. Formally, there exists a function For the question that we are interested in, R de is a simplex. Both R de and R da are Euclidean. The isometric embedding means τ i − τ j = f (τ i) − f (τ j) for an arbitrary pair of vertices. For a simplex S in R de, therefore, the contents of C(S) and C(f (S)) satisfy the identity C(S) = C(f (S)) under isometric embedding. That is to say, we can correlate the dimension of the simplex with the number of vertices under isometric embedding in the case where the number of vertices is insufficient to construct a d a -simplex. This reduced simplex in the R da is obvious a low-dimensional one. DISPLAYFORM0 Two visual instances are illustrated in FIG1, where a segment line of fixed length is a 1-simplex. It can be embedded in the 1-dimensional, 2-dimensional and 3-dimensional spaces, respectively. Certainly, we can apply the coordinates of different dimensions for these two endpoints. However, its length is constant in different ambient spaces due to the isometric embedding. 
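The computation at the heart of the method can be made concrete with a small NumPy sketch: the content of a (possibly reduced) simplex via the Cayley-Menger determinant, and the normalized dissimilarity obtained by comparing the enlarged simplex with the class simplex. This is written for this text and is not the paper's code; in particular, the exact normalization shown (ratio of the new content to the class content) is an assumption based on the prose description, and the distortion parameter discussed in the experiments is omitted.

```python
import numpy as np
from math import factorial

def simplex_content(Y):
    """Content of the simplex spanned by the rows of Y (k+1 points in d >= k
    dimensions), via the Cayley-Menger determinant."""
    k = Y.shape[0] - 1
    diff = Y[:, None, :] - Y[None, :, :]
    P = np.sum(diff ** 2, axis=-1)                   # squared pairwise distances
    CM = np.ones((k + 2, k + 2))                     # bordered (Cayley-Menger) matrix
    CM[0, 0] = 0.0
    CM[1:, 1:] = P
    c2 = (-1) ** (k + 1) / (2 ** k * factorial(k) ** 2) * np.linalg.det(CM)
    return np.sqrt(max(c2, 0.0))                     # guard against round-off

def dissimilarity(t, Y):
    """Volume-based dissimilarity of a test point t to the class set Y:
    content of the simplex enlarged by t, normalized by the class content."""
    return simplex_content(np.vstack([t, Y])) / simplex_content(Y)

# classification: assign t to the class whose simplex yields the smallest value
# label = min(class_features, key=lambda c: dissimilarity(t, class_features[c]))
```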
When a simplex is mapped into a higher dimensional space by an isometric embedding, the coordinates of its vertices vary, whereas its geometric properties pertaining to distance remain unchanged. Therefore we can employ formula to perform our algorithm. For few-shot classification tasks, such reduced simplices are ubiquitous because feature representations of data points embedded by variously parameterized functions are always of high dimension while the number of available examples for a class is quite few. For example, images classification tasks generally require deep convolutional networks (such as VGGs or Inception) as the embedding function. The VGG16 network produces a long vector of 4096 dimensions after the final fully-connected layer. For a five-shot learning, however, there are only five examples in each class. Hence, the reduced simplices we actually exploit are of dimension four for the computation of C(Y) and dimension five for that of C(Y ∪ t).The above geometric analysis is reminiscent of the theoretic assumption in manifold learning BID24 BID16, especially the Isomap algorithm BID24. Interested readers may refer to these two seminal papers in manifold learning for further understanding. It is known that matching network is the extension of metric learning via attention model. Here we analyze the metric characteristic of our algorithm. This angle of analysis may be useful for digging deep insights of few-shot learning. To this end, we need to reveal the details of formula, which is described in Theorem 1. Theorem 1. The geometric dissimilarity measurement (t, Y) from point t to Y can be expanded as the following form DISPLAYFORM0 where p i = y i − t 2 and DISPLAYFORM1 Proof. For matrices S 11, S 12, S 21, and S 22, the Schur's determinant identity is the form DISPLAYFORM2 Applying the Schur's determinant identity twice, we can obtain the expanded form of det(P) DISPLAYFORM3 By the same way, we can also obtain the analogous form of det(Q) DISPLAYFORM4 Substituting equations and into equation FORMULA5, we achieve the expansion of the measurement (t, Y) in FORMULA7. This concludes the proof of Theorem 1.It is readily to know that both Q and P are not positive definite matrices. Hence Q −1 and P −1 cannot be viewed as metric matrices in because there are negative eigenvalues. However, there still exists the explicit algebraic meaning. The first factor w 1 = e T P −1 e/e T Q −1 e is an incremental ratio with respect to the class and the second factor w 2 = p T Q −1 p is equivalent to the summation of a kind of scale-normalized distance transformation. Motivated by the success of applying the decomposed parts in , we present an approach to constructing a simplex using the spatial feature maps of CNNs. Moreover, the convolutional feature maps have been effectively employed for visual relational reasoning in BID19.While applying deep convolutional networks in classification tasks, the tensor (feature maps) will be flatten to be a 1-D feature vector in the last convolution layer and then it is fed into the fully connected layers. This process makes the feature maps lose adjacency information of spatial structures in the 2-D image. Although flattening seems inevitable in conventional classification pipeline, for few-shot tasks, it is necessary to collect information more effectively from multiple local regions for making an accurate classification decision. 
In order to play down the negative effect of flattening operation, besides the flattened feature vectors, we take fully advantage of feature maps by applying various scales of attention windows for the last convolution layer. Within each local region, a 3 × 3 region on 5 × 5 feature maps for instance, we perform the same flattening operation, generating a feature vector containing local information. Using this strategy, we are able to generate more feature vectors for simplex construction with few samples. Figure 3 clearly displays the operation of sampling feature vectors on feature maps with local regions. To further carry out this idea of preserving the adjacency information, we tag the l 2 d-dimensional cells with their corresponding coordinates indicating their relative spatial locations. With the extra 2 dimensions, we are able to restrain these cells on an underlying manifold, retaining their relative positions as in the original image. With such a simple manipulation, our model shows an appreciable improvement in performance. In regard to the experiment setup, the number of examples in each class should be constrained so as to fit the"few-shot" scenario. A typical experiment setting is the N -way k-shot task BID26: for each of the N new categories, k examples are provided. Given a set of unlabeled test samples, the model needs to classify them into these N categories. Since the number of available examples is limited (e.g. 1 or 5), training deep convolutional networks either from the scratch or with fine-tuning on new class data will generally lead to over-fitting. We performed our model on two different datasets: miniImageNet BID26 and Omniglot .The algorithms to be compared fall into two categories: conventional methods and deep learning methods. For deep learning methods, we compare our simplex algorithm with three state-of-the-art ones: matching networks BID26, Meta-Learner LSTM BID15, and prototypical networks BID21. Essentially, our algorithm is to measure the distance between a data point and a data set. For conventional methods, therefore, we take the Mahalanobis distance BID12 and Minimum Incremental Coding Length (MICL) BID28 for comparison. The MICL algorithm can be used to measure the structural similarity by coding theory of multivariate Gaussian data. A simple four-block CNN is employed to learn the representations of data both for miniImageNet and Omniglot. The architecture follows the learner network proposed by BID15, which contains four 3 × 3 convolutional layers with 64 filters. Each is followed by batch normalization, ReLU activation and 2 × 2 max-pooling. Following all the above layers is one fully connected layer, and lastly a softmax layer with the number equal to the number of classes being trained upon. The output is optimized with a cross-entropy loss function by the Adam optimizer with a learning rate of 0.001. The architecture is shown in Figure 4. The traditional algorithms compared are all performed on features extracted by this four-layer CNN. Size of local regions 5-way 1-shot Acc.5-way 5-shot Acc. DISPLAYFORM0 42.66% ± 0.48% 58.21% ± 0.47% 3 × 3 42.34% ± 0.42% 58.98% ± 0.40% 4 × 4 40.76% ± 0.43% 58.02% ± 0.44% 5 × 5 32.85% ± 0.70% 58.63% ± 0.68% Table 2: Few-shot accuracy on miniImageNet on 95% confidence interval. *Reported by BID15 Model Fine Tune 5-way 1-shot Acc. 5-way 5-shot Acc. 
Baseline KNN N 32.69% ± 0.75% 40.32% ± 0.68% Mahalanobis Distance N 32.39% ± 0.70% 60.09% ± 0.39% MICL BID28 N 42.59% ± 0.46% 61.06% ± 0.36%Matching networks* N 43.40% ± 0.78% 51.09% ± 0.71% Matching networks FCE* N 43.56% ± 0.84% 55.31% ± 0.73% Meta-Learner LSTM* N 43.44% ± 0.77% 60.60% ± 0.71% Prototypical networks BID21 N 49.42% ± 0.78% 68.20% ± 0.66% Simplex (ours) N 46.93% ± 0.43% 62.00% ± 0.26% ImageNet is a large-scale image database designed for multiple computer vision tasks. Since it would be extremely time-consuming to test the few-shot performance on the full ImageNet, we turn to miniImageNet instead BID26, the subset of ImageNet with 100 categories selected. For each category, 600 images with size 84 × 84 are provided. Following the same split as BID15, miniImageNet is divided into a 64-class training set, a 16-class validation set and a 20-class test set. We train the embedding network on the 64-class training set and validate the based on the 16-class validation set. The 20-class test set is only for the few-shot experiments. For each image fed into the 4-block CNN, the last feature map (64@5 × 5) is retrieved for simplex modeling. In order to choose a proper local region for conducting the few-shot experiments, we first test the N -way k-shot accuracy of different sizes of regions on the validation set. The size of the local regions varies from 2 × 2 to 5 × 5. 2 The is shown in TAB0. On the validation set, the models with 2 × 2 and 3 × 3 regions perform the best on the 1-shot and 5-shot tasks, respectively. We take them to compare with other models on the test set. Following the same splits proposed by BID15, we compare the performance of our model on miniImageNet directly with other models. We also conduct the K-nearest neighbor on the feature vectors generated from the 4-block model as a comparison (Baseline KNN). The are shown in Table 2. Using the same pre-trained 4-block CNN, our model performs much better than the baseline KNN, meanwhile outperforms the matching networks and Meta-learner LSTM. However, the prototypical networks are better than our simplex algorithm. Omniglot dataset for one-shot learning BID9 contains characters from 50 alphabets ranging from Korean to ancient Greek. Each character is provided with 20 examples handwritten by BID28 95.4% 99.1% 87.3% 96.9%Siamese networks BID7 97.3% 98.4% 88.2% 97.0% Matching networks BID26 98.1% 98.9% 93.8% 98.5% Prototypical networks BID21 Following the training setting in BID26, we split Omniglot into two parts: 1200 characters for training and the rest for validation and few-shot testing. The embedding 4-layer CNN is almost the same as used for training miniImageNet, except that the output feature map is changed to 64@1 × 1 due to the decrease in image size. We compare the N -way k-shot performance of our model with others. The are shown in TAB1. Our models are overall comparable to the other state-of-the-art works. Besides, we conduct more experiments on MICL and our model for a further comparison on robustness. MICL has a distortion parameter 2 in the coding length, i.e. BID28 ) DISPLAYFORM0 where Σ(X) is the corresponding covariance matrix and µ is the center of X. Through adjusting the parameter, the optimal performance of the model can be obtained on the validation set. We follow the same way as MICL and set up the free parameter in our simplex model. To be specific, let λ 1,..., λ n denote the eigenvalues ofQ. It is easy to write det(Q) = n i=1 λ i. 
To include the free parameter, we employ the following expression instead DISPLAYFORM1 The same computation is also performed for det(P).The experiment is conducted on the different values of distortion 2. According to the in FIG5, it is clear that our model is far more robust than MICL. Although for some value of 2, the accuracy of MICL is close to our model, the overall performance of MICL is instable with respect to 2. On the contrary, the performance of our model almost keeps invariant in a wide spectrum of 2. It is worth noting that our simplex metric can also be combined with very recently published works including prototypical networks BID21 and meta-learning methods, such as BID3 and BID13. For example, the distance measure in prototypical networks can be replaced with our simplex metric and the associated networks can be learned in the end-to-end manner by the supervision of simplex volumes. For few-shot cases, the number of examples in each class is quite limited. So the computational complexity can be well managed. Besides, the meta learning methods are to learn models between tasks instead of data points. Therefore, they are applicable to improve performance based on the simplex metric. We leave these for further explorations. On the other hand, MICL BID28 and our algorithm directly exploit the features of new classes yielded by CNNs that are not retrained, fine-tuned, or performed any relevant refinement of model parameters on new classes. Even so, these two approaches achieve better performance on 5-shot recognition than matching network BID26 and one of meta-learning algorithms BID15 ) that apply more information of new classes and high-level learning techniques. Both MICL and our simplex algorithm harness the geometric characterization of class structures 3. Therefore, our work might inspire the interest of exploring geometry to solve few-shot learning problems, which is paid little attention in the field. In this paper, we designed a novel method to deal with few-shot learning problems. Our idea was from the point of view of high dimensional convex geometry and transformed the learning problem to the study of volumes of simplices. The relation between a test sample and a class was investigated via the volumes of different polytopes. By harnessing the power of simplex, we gave a rigorous mathematical formulation for our approach. We also conduced extensive simulations to validate our method. The on various datasets showed the accuracy and robustness of the geometry-based method, compared to the state-of-the-art in the literature.
A simplex-based geometric method is proposed to cope with few-shot learning problems.
Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance. We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir. In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm. Learning complex temporal sequences that extend over a few seconds -such as a movement to grab a bottle or to write a number on the blackboard -looks easy to us but is challenging for computational brain models. A common framework for learning temporal sequences is reservoir computing (alternatively called liquid computing or echo-state networks). It combines a reservoir, a recurrent network of rate units with strong, but random connections, with a linear readout that feeds back to the reservoir. Training of the readout weights with FORCE, a recursive least-squares estimator, leads to excellent performance on many tasks such as motor movements. The FORCE rule is, however, biologically implausible: update steps of synapses are rapid and large, and require an immediate and precisely timed feedback signal. A more realistic alternative to FORCE is the family of reward-modulated Hebbian learning rules, but plausibility comes at a price: when the feedback (reward minus expected reward) is given only after a long delay, reward-modulated Hebbian plasticity is not powerful enough to learn complex tasks. Here we combine the reservoir network with a second, more structured network that stores and updates a two-dimension continuous variable as a "bump" in an attractor. The activity of the attractor network acts as a dynamic working memory and serves as input to the reservoir network (fig. 1). Our approach is related to that of feeding an abstract oscillatory input or a "temporal backbone signal" into the reservoir in order to overcome structural weaknesses of reservoir computing that arise if large time spans need to be covered. In computational experiments, we show that a dynamic working memory that serves as an input to a reservoir network facilitates reward-modulated Hebbian learning in multiple ways: it makes a biologically plausible three-factor rule as efficient as FORCE; it admits a delay in the feedback signal; and it allows a single reservoir network to learn and perform multiple tasks. Our architecture is simple: the attractor network (the "memory") receives some task-specific input and produces a robust two-dimensional neural trajectory; the reservoir network (the "motor cortex") shapes its dynamics with this trajectory, and produces a potentially high-dimensional output (fig. 1). Figure 1: Model architecture: a moving 2D bump (left: activity bump (red) surrounded by inactive neurons (blue)) in an attractor network (left circle with two bump trajectories) projects to a reservoir (right circle); the output z(t) is read out from the reservoir and approximates the target function. Attractor network. 
Following, the bump attractor consists of 2500 neurons evolving as where x is the vector of firing rates evolving with time constant τ m, e is the task-specific external input, h is an adaptation variable with time constant τ a and s is the strength of adaptation. The weight matrix J = J s + J h has two parts. The symmetric part J s creates a two-dimensional translationinvariant structure ing in bump-like stable activity patterns, whereas J h represents structural noise. Due to the adaptation h, the bump moves across a path defined by the initial conditions and structural noise, creating long-lasting reliable activity patterns which also depend on the input e. Reservoir network. The reservoir learns to approximate a target function f (t) with the output z(t) by linearly combining the firing rate r with readout weights W ro: z = W ro r + η η η ≡ẑ + η η η with readout noise η η η. We use the same number of neurons and parameters as, where u is the membrane potential, ξ ξ ξ is the firing rate noise, W attr scales attractor input with coupling c, W rec and λ regulate chaotic activity, and W fb implements the feedback loop. Learning rule. We use the reward-modulated Hebbian rule of for the readout weights W ro, wherex denotes low-pass filtering of x, such that z(t) −z(t) ≈ η η η(t). The reward modulation M (t) tracks performance P (t) as The update rule is an example of a NeoHebbian three-factor learning rule and mimics gradient descent if we ignore the feedback loop. For model details, see appendix A. In fig. 2, the learning rules are compared on 50 target functions sampled from a Gaussian Process (GP) with exponential squared kernel (σ 2 = 10 4 to match the complexity of hand-picked functions from ). After each training period, we measure performance with normalized cross-correlation between the output and the target (ranging from -1 to 1, where 1 is a perfect match) on a single trial with frozen weights. Details are provided in appendix A; code: https://github.com/neuroai-workshopanon-1224113/working-memory-facilitating-reservoir-learning. When tested on one-second signals similar to those of (two insets in fig. 2), the full network with attractor input and reward-modulated Hebbian learning learns faster and more reliably than reward-modulated Hebbian learning without the input from the attractor network. After about 90 training trials, the full network achieves the performance of the FORCE rule (for which training error approaches one in the first trial,, while test error does so after 30-50 trials,; fig. 2A ). For target signals that extend over 10 seconds (same smoothness of the target functions, two insets in fig. 2B), the reward-modulated Hebbian rule achieves a performance of 1 after 200 trials if combined with input from the attractor network (fig. 2B) but fails completely without the attractor network (tuning of the hyperparameters on a logarithmic scale did not help; data not shown). Thus a threefactor learning rule succeeds to learn complex tasks if combined with a temporally structured input from the attractor network. FORCE learning needs a feedback signal at every time step. Standard reward-modulated Hebbian learning can support very small delays, but fails if updates are less frequent than every few ms. In our approach (fig. 2C), proposed updates are summed up in the , but applied only at the end of a one-second trial. We find that even with such a temporally sparse update, learning is still possible. 
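For readers unfamiliar with three-factor rules, the following minimal sketch shows one common form of the reward-modulated Hebbian readout update described above, with exponential low-pass filters for the output and performance traces. The exact form of M(t) used in the paper is given by an equation not reproduced here; a binary better-than-average modulation is assumed, and all names and shapes are illustrative rather than the authors' implementation.

```python
import numpy as np

def low_pass(prev, x, dt, tau):
    """Exponential low-pass filter: x_bar += dt * (x - x_bar) / tau."""
    return prev + dt * (x - prev) / tau

def rm_hebbian_step(W_ro, r, z, z_bar, P, P_bar, lr):
    """One reward-modulated Hebbian update of the readout weights.

    W_ro  : (n_out, n_res) readout weights
    r     : (n_res,) reservoir firing rates
    z     : (n_out,) noisy output, z_bar its low-pass filter
    P     : scalar performance, P_bar its low-pass filter
    """
    M = 1.0 if P > P_bar else 0.0          # binary reward modulation (one common choice)
    dW = lr * M * np.outer(z - z_bar, r)   # (z - z_bar) approximates the exploration noise
    return W_ro + dW

# Typical usage inside the simulation loop (dt in ms, tau ~ 5 ms):
#   z_bar = low_pass(z_bar, z, dt, tau)
#   P_bar = low_pass(P_bar, P, dt, tau)
#   W_ro  = rm_hebbian_step(W_ro, r, z, z_bar, P, P_bar, lr)
```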
The input from the dynamic working memory is necessary to achieve this task: when the strength of the input from the attractor network gradually decreases, performance drops; in the total absence of attractor input (c = 0.0; note that the reservoir still receives weak input noise) learning completely fails. Strikingly, delayed updates do not hurt performance, and the system achieves high (> 0.9) cross-correlation in fewer than 100 training trials if the input from the attractor network is strong enough. The transient drop in performance shortly after the start in fig. 2C is likely due to W ro = 0 in the beginning, meaning that the output is uncorrelated with the firing rates, and therefore the cumulative weight update does not approximate gradient information. It is well known that reservoir networks can learn multiple tasks given different initial conditions with both FORCE and the reward-modulated Hebbian rule. We want to check whether this also holds for our approach. We conjecture that different inputs to the attractor network generate unique neural trajectories that can be exploited by the reservoir network. To test this hypothesis, we train the network to produce hand-written digits. The static input to the attractor comes from the pre-processed MNIST dataset (network inputs are taken from one of the last layers of a deep network trained to classify MNIST) in order to provide a realistic input to the attractor network which transforms the static input into dynamic trajectories (noiseless, fig. 3B, and noisy, fig. 3D). We record 50 attractor trajectories used for training (used 4 times each, ing in 2000 training trials) and 50 for testing of each digit (1 second each), where each trajectory corresponds to a distinct input pattern. The reservoir learns a single drawing for each class. The variance of the structural noise in the attractor network is 3 times larger compared to the previous experiments in order to produce more robust bump trajectories (fig. 3D). The reward-modulated Hebbian rule masters 10 out of 10 digits when driven by a noiseless input from the attractor network (fig. 3A). In the presence of noise in the attractor network (fig. 3D), the performance is imperfect for "five" and "six" (fig. 3C). We checked that FORCE learning with the same noisy input did not improve the performance (data not shown). Note that a linear readout of the attractor (without the reservoir) would be insufficient: first, sometimes single digit trajectories are very dissimilar (e.g. the different zero's in fig. 3D); second, at points where trajectories cross each other, a delay-less linear readout must produce the same output, no matter what the digit is. We showed that a dynamic working memory can facilitate learning of complex tasks with biologically plausible three-factor learning rules. Our indicate that, when combined with a bump attractor, reservoir computing with reward-modulated learning can be as efficient as FORCE, a widely used but biologically unrealistic rule. The proposed network relies on a limited number of trajectories in the attractor network. To increase its capacity, a possible future direction would be to combine input from the attractor network with another, also input-specific, but transient input that would bring the reservoir into a different initial state. In this case the attractor network would work as a time variable (as in ), and the other input as the control signal (as in ). 
Apart from the biological relevance, the proposed method might be used for real-world applications of reservoir computing (e.g. wind forecasting ) as it is computationally less expensive than FORCE. It might also be an interesting alternative for learning in neuromorphic devices. Simulation details. Both networks were simulated with the Euler method with the step size dt = 1 ms. The attractor network dynamics was recorded after a 100 ms warm up period to allow creation of the bump solution, during which it received additional input from the images in section 3.3. Training was done consequently, without breaks in the dynamics between trials. For testing, the network started from the preceding training state and continued with frozen weights. After testing, the pre-training activity was restored. The code for experiments is available at https://github.com/neuroai-workshopanon-1224113/working-memory-facilitating-reservoir-learning. Test functions. Gaussian process test function were drawn from Forcing both ends of the function to be zero and denoting x = (0, T − 1), z = (1, . . ., T − 2), we sample test functions as where T is either 10 3 (short tasks) or 10 4 (long tasks). We chose σ 2 to roughly match the complexity of targets from (σ 2 = 10 4). 50 random functions were tested on 50 random reservoirs that nevertheless received the same attractor input (W attr was not resampled). In section 3.3, the same reservoir was used for all runs. The noisy input for section 3.3 was taken from an intermediate layer of a deep network trained to classify MNIST, and the noiseless input stimulated only a 5 by 5 square of neurons (unique for each digit). Attractor network parameters. The time constants were τ m = 30 ms, τ a = 400 ms. Adaptation strength was s = 1.5. The external input e was drawn independently for each neuron from a Gaussian distribution N (1, 0.0025 2). In section 3.3, the task-specific input was added to the noisy one. For the connectivity matrix J = J s + J h, the noisy part was drawn independently as (J h) ij ∼ N (0, σ 2 /N attr), with N attr = 2500 and σ = 2 in all experiments except for section 3.3, where we used σ = 6 for more robust trajectories. The symmetric part arranged the neurons on a 2D grid, such that every neuron i had its coordinates x i and y i ranging from 0 to 49. The connectivity led to mutual excitation of nearby neurons and inhibition of the distant ones, The bump center (used in fig. 3B and D) corresponded to the mean of the activity on the torus. Denoting activity of each neuron as r(x, y), the center on the x axis was calculated as where "angle" computes the counterclockwise angle of a complex variable (ranging from 0 to 2π). Reservoir network parameters. The time constant was τ = 50 ms, and total coupling strength was λ = 1.5. The readout weights W ro were initialized to zero. The feedback weights were drawn independently from a uniform distribution as (W fb) ij ∼ U(−1, 1). Both the recurrent connections and the weights from the attractor to the reservoir were drawn independently as (W rec) ij, (W attr) ij ∼ N (0, 1/pN res) · Be(p), with p = 0.1, N res = 1000, and Be being the Bernoulli distribution. A new reservoir, and thus W rec, was sampled for each new test function. The matrix W rec was the same for all tasks except the last one in section 3.3. State noise ξ ξ ξ and exploratory noise η η η were generated independently from the uniform distribution as ξ i ∼ U(−0.05, 0.05), η i ∼ U(−0.5, 0.5). 
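The sketch below shows one way to draw such test functions: sample from a squared-exponential Gaussian process conditioned so that both endpoints are zero, which is what the omitted display equation above describes. Whether σ² plays the role of the squared length scale or the output variance is not fully recoverable from the text; the code treats it as the squared length scale, and all names are illustrative.

```python
import numpy as np

def sample_targets(T=1000, sigma2=1e4, n_funcs=1, seed=0):
    """Sample smooth target functions from a squared-exponential GP,
    conditioned so that the endpoints f(0) = f(T-1) = 0 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    t = np.arange(T, dtype=float)

    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma2))

    x = np.array([0.0, T - 1.0])                 # endpoints pinned to zero
    z = t[1:-1]                                  # interior time points
    K_zz = k(z, z)
    K_zx = k(z, x)
    K_xx = k(x, x) + 1e-8 * np.eye(2)
    # Zero prior mean and zero observations at the endpoints, so the conditional
    # mean stays zero and only the covariance shrinks near the boundaries.
    cov = K_zz - K_zx @ np.linalg.solve(K_xx, K_zx.T)
    cov += 1e-8 * np.eye(len(z))                 # jitter for numerical stability
    f_interior = rng.multivariate_normal(np.zeros(len(z)), cov, size=n_funcs)
    f = np.zeros((n_funcs, T))
    f[:, 1:-1] = f_interior
    return f

targets = sample_targets(T=1000, n_funcs=3)
print(targets.shape)  # (3, 1000), each row a smooth target pinned to zero at both ends
```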
When the attractor was present, the reservoir neurons also received weak independent noise drawn from N(0, 0.0025). Learning rule. Low-pass filtering was done as x̄(t + dt) = x̄(t) + dt (x(t) − x̄(t))/τ_f, with τ_f = 5 ms and x̄(0) = 0. The learning rate η(t) was computed as η(t) = η_0/(1 + t/τ_l) (η_0 = 5 · 10^−4, τ_l = 2 · 10^4 ms) and held at η_0 in section 3.2 to make the results independent of the decay.
We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE).
1,749
scitldr
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs) while providing computational and modelling advantages due to inherent parallelism. However, currently, there remains a performance gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables. In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces. In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales. The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers. We show that the proposed architecture achieves state of the art log-likelihoods across several tasks. Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text. Generative modeling of sequence data requires capturing long-term dependencies and learning of correlations between output variables at the same time-step. Recurrent neural networks (RNNs) and its variants have been very successful in a vast number of problem domains which rely on sequential data. Recent work in audio synthesis, language modeling and machine translation tasks BID8 BID9 BID13 has demonstrated that temporal convolutional networks (TCNs) can also achieve at least competitive performance without relying on recurrence, and hence reducing the computational cost for training. Both RNNs and TCNs model the joint probability distribution over sequences by decomposing the distribution over discrete time-steps. In other words, such models are trained to predict the next step, given all previous time-steps. RNNs are able to model long-term dependencies by propagating information through their deterministic hidden state, acting as an internal memory. In contrast, TCNs leverage large receptive fields by stacking many dilated convolutions, allowing them to model even longer time scales up to the entire sequence length. It is noteworthy that there is no explicit temporal dependency between the model outputs and hence the computations can be performed in parallel. The TCN architecture also introduces a temporal hierarchy: the upper layers have access to longer input sub-sequences and learn representations at a larger time scale. The local information from the lower layers is propagated through the hierarchy by means of residual and skip connections BID2.However, while TCN architectures have been shown to perform similar or better than standard recurrent architectures on particular tasks BID2, there currently remains a performance gap to more recent stochastic RNN variants BID3 BID7 BID11 BID12 BID14 BID25. Following a similar approach to stochastic RNNs, BID21 present a significant improvement in the log-likelihood when a TCN model is coupled with latent variables, albeit at the cost of limited receptive field size. The computational graph of generative (left) and inference (right) models of STCN. The approximate posterior q is conditioned on dt and is updated by the prior p which is conditioned on the TCN representations of the previous time-step dt−1. 
The random latent variables at the upper layers have access to a long history while lower layers receive inputs from more recent time steps. In this work we propose a new approach for augmenting TCNs with random latent variables, that decouples deterministic and stochastic structures yet leverages the increased modeling capacity efficiently. Motivated by the simplicity and computational advantages of TCNs and the robustness and performance of stochastic RNNs, we introduce stochastic temporal convolutional networks (STCN) by incorporating a hierarchy of stochastic latent variables into TCNs which enables learning of representations at many timescales. However, due to the absence of an internal state in TCNs, introducing latent random variables analogously to stochastic RNNs is not feasible. Furthermore, defining conditional random variables across time-steps would in breaking the parallelism of TCNs and is hence undesirable. In STCN the latent random variables are arranged in correspondence to the temporal hierarchy of the TCN blocks, effectively distributing them over the various timescales (see FIG0 . Crucially, our hierarchical latent structure is designed to be a modular add-on for any temporal convolutional network architecture. Separating the deterministic and stochastic layers allows us to build STCNs without requiring modifications to the base TCN architecture, and hence retains the scalability of TCNs with respect to the receptive field. This conditioning of the latent random variables via different timescales is especially effective in the case of TCNs. We show this experimentally by replacing the TCN layers with stacked LSTM cells, leading to reduced performance compared to STCN.We propose two different inference networks. In the canonical configuration, samples from each latent variable are passed down from layer to layer and only one sample from the lowest layer is used to condition the prediction of the output. In the second configuration, called STCN-dense, we take inspiration from recent CNN architectures BID18 and utilize samples from all latent random variables via concatenation before computing the final prediction. Our contributions can thus be summarized as: 1) We present a modular and scalable approach to augment temporal convolutional network models with effective stochastic latent variables. 2) We empirically show that the STCN-dense design prevents the model from ignoring latent variables in the upper layers BID32. 3) We achieve state-of-the-art log-likelihood performance, measured by ELBO, on the IAM-OnDB, Deepwriting, TIMIT and the Blizzard datasets. 4) Finally we show that the quality of the synthetic samples matches the significant quantitative improvements. Auto-regressive models such as RNNs and TCNs factorize the joint probability of a variable-length sequence x = {x 1, . . ., x T} as a product of conditionals as follows: DISPLAYFORM0 where the joint distribution is parametrized by θ. The prediction at each time-step is conditioned on all previous observations. The observation model is frequently chosen to be a Gaussian or Gaussian mixture model (GMM) for real-valued data, and a categorical distribution for discrete-valued data. In TCNs the joint probabilities in Eq. are parametrized by a stack of convolutional layers. Causal convolutions are the central building block of such models and are designed to be asymmetric such that the model has no access to future information. 
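The display equation referenced above did not survive extraction; the standard autoregressive factorization it denotes is, reconstructed in the paper's notation:

```latex
p_\theta(x_{1:T}) \;=\; \prod_{t=1}^{T} p_\theta\!\left(x_t \mid x_{1:t-1}\right)
```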
In order to produce outputs of the same size as the input, zero-padding is applied at every layer. In the absence of a state transition function, a large receptive field is crucial in capturing long-range dependencies. To avoid the need for vast numbers of causal convolution layers, typically dilated convolutions are used. Exponentially increasing the dilation factor in an exponential growth of the receptive field size with depth BID31 BID2. In this work, without loss of generality, we use the building blocks of Wavenet as gated activation units have been reported to perform better. A deterministic TCN representation d l t at time-step t and layer l summarizes the input sequence x 1:t: DISPLAYFORM0 where the filter width is 2 and j denotes the dilation step. In our work, the stochastic variables z l, l = 1... L are conditioned on TCN representations d l that are constructed by stacking K Wavenet blocks over the previous d l−1 (for details see FIG6 in Appendix). VAEs BID20 BID24 introduce a latent random variable z to learn the variations in the observed non-sequential data where the generation of the sample x is conditioned on the latent variable z. The joint probability distribution is defined as: DISPLAYFORM0 and parametrized by θ. Optimizing the marginal likelihood is intractable due to the non-linear mappings between z and x and the integration over z. Instead the VAE framework introduces an approximate posterior q φ (z|x) and optimizes a lower-bound on the marginal likelihood: DISPLAYFORM1 where KL denotes the Kullback-Leibler divergence. Typically the prior p θ (z) and the approximate q φ (z|x) are chosen to be in simple parametric form, such as a Gaussian distribution with diagonal covariance, which allows for an analytical calculation of the KL-term in Eq.. An RNN captures temporal dependencies by recursively processing each input, while updating an internal state h t at each time-step via its state-transition function: DISPLAYFORM0 where f (h) is a deterministic transition function such as LSTM BID17 or GRU BID6 cells. The computation has to be sequential because h t depends on h t−1.The VAE framework has been extended for sequential data, where a latent variable z t augments the RNN state h t at each sequence step. The joint distribution p θ (x, z) is modeled via an auto-regressive model which in the following factorization: DISPLAYFORM1 In contrast to the fixed prior of VAEs, N (0, I), sequential variants define prior distributions conditioned on the RNN hidden state h and implicitly on the input sequence x BID7., and the inference model (right), which is shared by both variants. Diamonds represent the outputs of deterministic dilated convolution blocks where the dependence of dt on the past inputs is not shown for clarity (see Eq. FORMULA1). xt and zt are observable inputs and latent random variables, respectively. The generative task is to predict the next step in the sequence, given all past steps. Note that in the STCN-dense variant the next step is conditioned on all latent variables z l t for l = 1... L. The mechanics of STCNs are related to those of VRNNs and LVAEs. Intuitively, the RNN state h t is replaced by temporally independent TCN layers d l t. In the absence of an internal state, we define hierarchical latent variables z l t that are conditioned vertically, i.e., in the same time-step, but independent horizontally, i.e., across time-steps. 
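A minimal PyTorch sketch of the gated, dilated, causal convolution block just described (filter width 2, dilation j, left-only zero padding, tanh/sigmoid gating) is given below. Channel sizes, the exact residual wiring, and the class name are illustrative rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    """One dilated, causal, gated convolution block in the spirit of WaveNet.

    With filter width 2 and dilation d, the output at time t only sees
    inputs at times t and t - d.
    """
    def __init__(self, channels, dilation):
        super().__init__()
        self.dilation = dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.res_conv = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                        # x: (batch, channels, time)
        h = F.pad(x, (self.dilation, 0))          # left-pad only => causal
        out = torch.tanh(self.filter_conv(h)) * torch.sigmoid(self.gate_conv(h))
        return x + self.res_conv(out)             # residual connection

# Stacking K blocks with dilations 1, 2, 4, ..., 2^(K-1) gives a receptive
# field that grows exponentially with depth.
```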
We follow a similar approach to LVAEs in defining the hierarchy in a top-down fashion and in how we estimate the approximate posterior. The inference network first computes the approximate likelihood, and then this estimate is corrected by the prior, ing in the approximate posterior. The TCN layers d are shared between the inference and generator networks, analogous to VRNNs BID7. To preserve the parallelism of TCNs, we do not introduce an explicit dependency between different time-steps. However, we suggest that conditioning a latent variable z l−1 t on the preceding variable z l t implicitly introduces temporal dependencies. Importantly, the random latent variables in the upper layer have access to a larger receptive field due to its deterministic input d l t−1, whereas latent random variables in lower layers are updated with different, more local information. However, the latent variable z l−1 t may receive longer-range information from z l t. The generative and inference models are jointly trained by optimizing a step-wise variational lower bound on the log-likelihood BID20 BID24. In the following sections we describe these components and build up the lower-bound for a single time-step t. Each sequence step x t is generated from a set of latent variables z t, split into layers as follows: DISPLAYFORM0 where DISPLAYFORM1 Here the prior is modeled by a Gaussian distribution with diagonal covariance, as is common in the VAE framework. The subscript p denotes items of the generative distribution. For the inference distribution we use the subscript q. The distributions are parameterized by a neural network f We propose two variants of the observation model. In the non-sequential scenario, the observations are defined to be conditioned on only the last latent variable in the hierarchy, i.e., p θ (x t |z 1 t), following Sønderby et al. FORMULA0; BID16 and BID24 our STCN variant uses the same observation model, allowing for an efficient optimization. However, latent units are likely to become inactive during training in this configuration BID5 BID4 BID32 ing in a loss of representational power. The latent variables at different layers are conditioned on different contexts due to the inputs d l t. Hence, the latent variables are expected to capture complementary aspects of the temporal context. To propagate the information all the way to the final prediction and to ensure that gradients flow through all layers, we take inspiration from BID18 and directly condition the output probability on samples from all latent variables. We call this variant of our architecture STCN-dense. The final predictions are then computed by the respective observation functions: DISPLAYFORM0 where f (o) corresponds to the output layer constructed by stacking 1D convolutions or Wavenet blocks depending on the dataset. In the original VAE framework the inference model is defined as a bottom-up process, where the latent variables are conditioned on the stochastic layer below. Furthermore, the parameterization of the prior and approximate posterior distributions are computed separately BID5 BID24. In contrast, propose a top-down dependency structure shared across the generative and inference models. From a probabilistic point of view, the approximate Gaussian likelihood, computed bottom-up by the inference model, is combined with the Gaussian prior, computed top-down from the generative model. We follow a similar procedure in computing the approximate posterior. 
First, the parameters of the approximate likelihood are computed for each stochastic layer l: DISPLAYFORM0 followed by the downward pass, recursively computing the prior and approximate posterior by precision-weighted addition: DISPLAYFORM1 Finally, the approximate posterior has the same decomposition as the prior (see Eq. FORMULA6): DISPLAYFORM2 DISPLAYFORM3 Note that the inference and generative network share the parameters of dilated convolutions Conv (l). The variational lower-bound on the log-likelihood at time-step t can be defined as follows: DISPLAYFORM0 Using the decompositions from Eq. FORMULA6 and FORMULA0, the Kullback-Leibler divergence term becomes: DISPLAYFORM1 The KL term is the same for the STCN and STCN-dense variants. The reconstruction term L Recon t, however, is different. In STCN we only use samples from the lowest layer of the hierarchy, whereas in STCN-dense we use all latent samples in the observation model: DISPLAYFORM2 DISPLAYFORM3 In the dense variant, samples drawn from the latent variables z l t are carried over the dense connections. Similar to, the expectation over z One alternative option to use the latent samples could be to sum individual samples before feeding them into the observation model, i.e., sum([z . We empirically found that this does not work well in STCN-dense. Instead, we concatenate all samples [z DISPLAYFORM4 DISPLAYFORM5 analogously to DenseNet BID18 and BID19 . We evaluate the proposed variants STCN and STCN-dense both quantitatively and qualitatively on modeling of digital handwritten text and speech. We compare with vanilla TCNs, RNNs, VRNNs and state-of-the art models on the corresponding tasks. In our experiments we use two variants of the Wavenet model: the original model proposed in and a variant that we augment with skip connections analogously to STCN-dense. This additional baseline evaluates the benefit of learning multi-scale representations in the deterministic setting. Details of the experimental setup are provided in the Appendix. Our code is available at https://ait.ethz.ch/projects/2019/stcn/.Handwritten text: The IAM-OnDB and Deepwriting datasets consist of digital handwriting sequences where each time-step contains real-valued (x, y) pen coordinates and a binary pen-up event. The IAM-OnDB data is split and pre-processed as done in BID7. BID1 extend this dataset with additional samples and better pre-processing. BID7 26643 7413 1358 528 * VRNN (Normal) BID7 ≈ 30235 ≈ 9516 ≈ 1354 ≥ 495 * VRNN (GMM) BID7 ≈ 29604 ≈ 9392 ≈ 1384 ≥ 673 * SRNN (Normal) BID12 ≥ 60550 ≥ 11991 n/a n/a Z-forcing (Normal) BID14 ≥ 70469 ≥ 15430 n/a n/a Var. Bi-LSTM (Normal) BID25 ≥ 73976 ≥ 17319 n/a n/a SWaveNet (Normal) BID21 ≥ 72463 ≥ 15708 DISPLAYFORM0 ≥ 77438 ≥ 17670 n/a n/a the STCN-dense version. The same relative ordering is maintained on the Deepwriting dataset, indicating that the proposed architecture is robust across datasets. Fig. 3 compares generated handwriting samples. While all models produce consistent style, our model generates more natural looking samples. Note that the spacing between words is clearly visible and most of the letters are distinguishable. Speech modeling: TIMIT and Blizzard are standard benchmark dataset in speech modeling. The models are trained and tested on 200 dimensional real-valued amplitudes. We apply the same pre-processing as BID7. For this task we introduce STCN-dense-large, with increased model capacity. Here we use 512 instead of 256 convolution filters. 
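The precision-weighted addition in the downward pass is the standard Ladder-VAE combination of a bottom-up Gaussian likelihood with a top-down Gaussian prior (the step-wise bound itself has the usual form L_t = E_q[log p_θ(x_t | z_t)] − KL(q_φ(z_t | ·) ‖ p_θ(z_t | ·))). A minimal sketch for diagonal covariances is given below; the function and argument names are illustrative, and the exact parameterization in the paper is defined by the omitted display equations.

```python
import numpy as np

def precision_weighted_merge(mu_like, sigma_like, mu_prior, sigma_prior):
    """Combine a bottom-up Gaussian likelihood estimate with a top-down Gaussian
    prior into the approximate posterior, as in Ladder VAEs (diagonal covariances).

    All arguments are arrays of per-dimension means / standard deviations.
    """
    prec_like = sigma_like ** -2
    prec_prior = sigma_prior ** -2
    var_q = 1.0 / (prec_like + prec_prior)
    mu_q = var_q * (mu_like * prec_like + mu_prior * prec_prior)
    return mu_q, np.sqrt(var_q)
```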
Note that the total number of model parameters is comparable to SWaveNet and other SOA models. On TIMIT, STCN-dense TAB0 significantly outperforms the vanilla TCN and RNN, and stochastic models. On the Blizzard dataset, our model is marginally better than the Variational Bi-LSTM. Note that the inference models of SRNN BID12, Z-forcing BID14, and Variational Bi-LSTM BID25 receive future information by using backward RNN cells. Similarly, SWaveNet BID21 applies causal convolutions in the backward direction. Hence, the latent variable can be expected to model future dynamics of the sequence. In contrast, our models have only access to information up to the current time-step. These indicate that the STCN variants perform very well on the speech modeling task. Latent Space Analysis: BID32 observe that in hierarchical latent variable models the upper layers have a tendency to become inactive, indicated by a low KL loss BID10. TAB2 shows the KL loss per latent variable and the corresponding log-likelihood measured by ELBO in our models. Across the datasets it can be observed that our models make use of many of the latent variables which may explain the strong performance across tasks in terms of log-likelihoods. Note that STCN uses a standard hierarchical structure. However, individual latent variables have different information context due to the corresponding TCN block's receptive field. This observation suggests that the proposed combination of TCNs and stochastic variables is indeed effective. Furthermore, in STCN we see a similar utilization pattern of the z variables across tasks, whereas STCN-dense may have more flexibility in modeling the temporal dependencies within the data due to its dense connections to the output layer. Replacing TCN with RNN: To better understand potential symergies between dilated CNNs and the proposed latent variable hierarchy, we perform an ablation study, isolating the effect of TCNs and the latent space. To this end the deterministic TCN blocks are replaced with LSTM cells by keeping the latent structure intact. We dub this condition LadderRNN. We use the TIMIT and IAM-OnDB datasets for evaluation. TAB3 summarizes performance measured by the ELBO.The most direct translation of the the STCN architecture into an RNN counterpart has 25 stacked LSTM cells with 256 units each. Similar to STCN, we use 5 stochastic layers (see Appendix 7.1). Note that stacking this many LSTM cells is unusual and ed in instabilities during training. Hence, the performance is similar to vanilla RNNs. The second LadderRNN configuration uses 5 stacked LSTM cells with 512 units and a one-to-one mapping with the stochastic layers. On the TIMIT dataset, all LadderRNN configurations show a significant improvement. We also observe a pattern of improvement with densely connected latent variables. This experiments shows that the proposed modular latent variable design does allow for the usage of different building blocks. Even when attached to LSTM cells, it boosts the log-likelihood performance (see 5x512-LadderRNN), in particular when used with dense connections. However, the empirical suggest that the densely connected latent hierarchy interacts particularly well with dilated CNNs. We suggest this is due to the hierarchical nature on both sides of the architecture. On both datasets STCN models achieved the best performance and significantly improve with dense connections. 
This supports our contribution of a latent variable hierarchy, which models different aspects of information from the input time-series. DISPLAYFORM1 5 RELATED WORK BID24 propose Deep Latent Gaussian Models (DLGM) and propose the Ladder Variational Autoencoder (LVAE). In both models the latent variables are hierarchically defined and conditioned on the preceding stochastic layer. LVAEs improve upon DLGMs via implementation of a top-down hierarchy both in the generative and inference model. The approximate posterior is computed via a precisionweighted update of the approximate likelihood (i.e., the inference model) and prior (i.e., the generative model). Similarly, the PixelVAE BID16 incorporates a hierarchical latent space decomposition and uses an autoregressive decoder. BID32 show under mild conditions that straightforward stacking of latent variables (as is done e.g. in LVAE and PixelVAE) can be ineffective, because the latent variables that are not directly conditioned on the observation variable become inactive. Due to the nature of the sequential problem domain, our approach differs in the crucial aspects that STCNs use dynamic, i.e., conditional, priors BID7 at every level. Moreover, the hierarchy is not only implicitly defined by the network architecture but also explicitly defined by the information content, i.e., receptive field size. Dieng et al. FORMULA0 both theoretically and empirically show that using skip connections from the latent variable to every layer of the decoder increases mutual information between the latent and observation variables. Similar to BID10 in STCN-dense, we introduce skip connections from all latent variables to the output. In STCN the model is expected to encode and propagate the information through its hierarchy. BID30 suggest using autoregressive TCN decoders to remedy the posterior collapse problem observed in language modeling with LSTM decoders BID4. BID29 and BID9 use TCN decoders conditioned on discrete latent variables to model audio signals. Stochastic RNN architectures mostly vary in the way they employ the latent variable and parametrize the approximate posterior for variational inference. BID7 and BID3 use the latent random variable to capture high-level information causing the variability observed in sequential data. Particularly BID7 shows that using a conditional prior rather than a standard Gaussian distribution is very effective in sequence modeling. In BID12 BID14 BID25 ), the inference model, i.e., the approximate posterior, receives both the past and future summaries of the sequence from the hidden states of forward and backward RNN cells. The KL-divergence term in the objective enforces the model to learn predictive latent variables in order to capture the future states of the sequence. BID21's SWaveNet is most closely related to ours. SWaveNet also introduces latent variables into TCNs. However, in SWaveNet the deterministic and stochastic units are coupled which may prevent stacking of larger numbers of TCN blocks. Since the number of stacked dilated convolutions determines the receptive field size, this directly correlates with the model capacity. For example, the performance of SWaveNet on the IAM-OnDB dataset degrades after stacking more than 3 stochastic layers BID21, limiting the model to a small receptive field. In contrast, we aim to preserve the flexibility of stacking dilated convolutions in the base TCN. 
In STCNs, the deterministic TCN units do not have any dependency on the stochastic variables (see FIG0) and the ratio of stochastic to deterministic units can be adjusted, depending on the task. In this paper we proposed STCNs, a novel auto-regressive model, combining the computational benefits of convolutional architectures and expressiveness of hierarchical stochastic latent spaces. We have shown the effectivness of the approach across several sequence modelling tasks and datasets. The proposed models are trained via optimization of the ELBO objective. Tighter lower bounds such as IWAE BID5 or FIVO may further improve modeling performance. We leave this for future work. The network architecture of the proposed model is illustrated in FIG6. We make only a small modification to the vanilla Wavenet architecture. Instead of using skip connections from Wavenet blocks, we only use the latent sample zt in order to make a prediction of xt. In STCN-dense configuration, zt is the concatenation of all latent variables in the hierarchy, i.e., zt = [z Output layer f (o): For the IAM-OnDB and Deepwriting datasets we use 1D convolutions with ReLU nonlinearity. We stack 5 of these layers with 256 filters and filter size 1. DISPLAYFORM0 For TIMIT and Blizzard datasets Wavenet blocks in the output layer perform significantly better. We stack 5 Wavenet blocks with dilation size 1. For each convolution operation in the block we use 256 filters. The filter size of the dilated convolution is set to 2. The STCN-dense-large model is constructed by using 512 filters instead of 256. l t: The number of Wavenet blocks is usually determined by the desired receptive field size.• For the handwriting datasets K = 6 and L = 5. In total we have 30 Wavenet blocks where each convolution operation has 256 filters with size 2.• For speech datasets K = 5 and L = 5. In total we have 25 Wavenet blocks where each convolution operation has 256 filters with size 2. The large model configuration uses 512 filters. p and f q: The number of stochastic layers per task is given by L. We used dimensional latent variables for the handwriting tasks. It is for speech datasets. Note that the first entry of the list corresponds to z 1.The mean and sigma parameters of the Normal distributions modeling the latent variables are calculated by the f (l) p and f (l) q networks. We stack 2 1D convolutions with ReLU nonlinearity and filter size 1. The number of filters are the same as the number of Wavenet block filters for the corresponding task. Finally, we clamped the latent sigma predictions between 0.001 and 5. In all STCN experiments we applied KL annealing. In all tasks, the weight of the KL term is initialized with 0 and increased by 1 × e −4 at every step until it reaches 1.The batch size was 20 for all datasets except for Blizzard where it was 128.We use the ADAM optimizer with its default parameters and exponentially decay the learning rate. For the handwriting datasets the learning rate was initialized with 5 × e −4 and followed a decay rate of 0.94 over 1000 decay steps. On the speech datasets it was initialized with 1 × e −3 and decayed with a rate of 0.98. We applied early stopping by measuring the ELBO performance on the validation splits. We implement STCN models in Tensorflow BID0. Our code and models achieving the SOA are available at https://ait.ethz.ch/projects/2019/stcn/. Here we provide the extended table with Normal observation model entries for available models. 
Table 4: Average log-likelihood per sequence on TIMIT, Blizzard, IAM-OnDB and Deepwriting datasets. (Normal) and (GMM) stand for unimodal Gaussian or multi-modal Gaussian Mixture Model (GMM) as the observation model BID15 BID7. Asterisks * indicate that we used our re-implementation only for the Deepwriting dataset.
We combine the computational advantages of temporal convolutional architectures with the expressiveness of stochastic latent variables.
1,750
scitldr
The weak contraction mapping is a self mapping that the range is always a subset of the domain, which admits a unique fixed-point. The iteration of weak contraction mapping is a Cauchy sequence that yields the unique fixed-point. A gradient-free optimization method as an application of weak contraction mapping is proposed to achieve global minimum convergence. The optimization method is robust to local minima and initial point position. Many gradient-based optimization methods, such as gradient descent method, Newton's method and so on, face great challenges in finding the global minimum point of a function. As is known, searching for the global minimum of a function with many local minima is difficult. In principle, the information from the derivative of a single point is not sufficient for us to know the global geometry property of the function. For a successful minimum point convergence, the initial point is required to be sufficiently good and the derivative calculation need to be accurate enough. In the gradientbased methods, the domain of searching area will be divided into several subsets with regards to local minima. And eventually it will converge to one local minimum depends on where the initial point locates at. Let (X,d) be a metric space and let T:X → X be a mapping. For the inequality that, d(T (x), T (y)) ≤ qd(x, y), ∀x, y ∈ X.if q ∈, T is called contractive; if q ∈, T is called nonexpansive; if q < ∞, T is called Lipschitz continuous(1; 2). The gradient-based methods are usually nonexpansive mapping the solution exists but is not unique for general situation. For instance, if the gradient descent method is written as a mapping T and the objective function has many local minima, then there are many fixed points accordingly. From the perspective of spectra of bounded operator, for a nonexpansive mapping any minima of the objective function is an eigenvector of eigenvalue equation T (x) = λx,in which λ = 1. In the optimization problem, nonexpansive mapping sometimes works but their disadvantages are obvious. Because both the existence and uniqueness of solution are important so that the contractive mapping is more favored than the nonexpansive mapping(3; 4).Banach fixed-point theorem is a very powerful method to solve linear or nonlinear system. But for optimization problems, the condition of contraction mapping T: X → X that d(T (x), T (y)) ≤ qd(x, y) is usually too strict and luxury. In the paper, we are trying to extend the Banach fixedpoint theorem to an applicable method for optimization problem, which is called weak contraction mapping. In short, weak contraction mapping is a self mapping that always map to the subset of its domain. It is proven that weak contraction mapping admits a fixed-point in the following section. How to apply the weak contraction mapping to solve an optimization problem? Geometrically, given a point, we calculate the height of this point and utilize a hyperplane at the same height to cut the objective function, where the intersection between the hyperplane and the objective function will form a contour or contours. And then map to a point insider a contour, which the range of this mapping is always the subset of its domain. The iteration of the weak contraction mapping yields a fixed-point, which coincides with the global minimum of the objective function. In this section, the concept of weak contraction mapping and its fixed-point will be discussed in detail. Definition 1. Let (X, d and D) be a metric space. 
Both metric measurements d and D are defined on the space. The measurement D(X) refers to the maximum distance between two points in the vector space X: DISPLAYFORM0 Definition 2. Let (X, d and D) be a complete metric space. Then a mapping T: X → X is called a weak contraction mapping on X if there exists DISPLAYFORM1 The weak contraction mapping is an extension of the contraction mapping with a looser requirement that DISPLAYFORM2 and D) be a non-empty complete metric space with weak contraction mapping T: X → X. Then T admits a unique fixed-point x * in X when X 0 is decided. Let x 0 ∈ X be arbitrary and define a sequence {x n} by setting: x n = T (x n−1). Theorem 1 is proven in the following lemmas. By definition, there exists q DISPLAYFORM3 Cauchy sequence in (X, d and D) and hence converges to a limit x * in X 0.Proof. Let m, n ∈ N such that m > n. DISPLAYFORM4 Let > 0 be arbitrary, since q ∈, we can find a large N ∈ N such that DISPLAYFORM5 Hence, by choosing m, n large enough: DISPLAYFORM6 Thus, {x n} is Cauchy and converges to a point x * ∈ X 0. DISPLAYFORM7 Proof. DISPLAYFORM8 ) with regards to a specific X 0.Proof. Suppose there exists another fixed-point y such that T (y) = y, then choose the subspace X i such that x * and y are the only elements in X i. By definition, X i+1 = R(T (X i)), so that both x * and y are elements in X i+1, namely, DISPLAYFORM9 Let a hyperplane L cut the objective function f(x); the intersection of L and f(x) forms a contour (or contours). Observe that the contour (or contours) divides X into two subspaces, the higher subspace X >:= {x | f (x) > h, ∀x ∈ X} and the lower subspace DISPLAYFORM10 Geometrically, the range of the weak contraction mapping shrinks over iterates, such that, DISPLAYFORM11. Based on Lemma 1.3, the D(X i) measurement converges to zero as i goes to infinity, namely, lim DISPLAYFORM12 And the sequence of iteration DISPLAYFORM13 is a Cauchy sequence that converges to the global minimum of the objective function f(x) if f(x) has a unique global minimum point. Lemma 1.5. Provided there is a unique global minimum point of an objective function, then x * is the global minimum point of the function. Proof. The global minimum point must be inside the lower space {X ≤ i, ∀i ∈ N 0}. Similar to the proof of uniqueness of the fixed-point, suppose the global minimum point x min of the objective function is different from x *. By measuring the distance between the fixed-point x * and the global minimum point DISPLAYFORM14 The inequality above indicates d(x *, x min) = 0, thus x * = x min.Compared with the contraction mapping, the weak contraction mapping is much easier to implement in the optimization problem, as the requirement DISPLAYFORM15 t require x i in the sequence {x n} to move closer to each other at every step, but confines the range of x i to be smaller and smaller. Therefore, the sequence {x n} can still be Cauchy and has the asymptotic behavior of converging to the fixed-point. Given that the objective function f (x) has a unique global minimum point, the task is to find a weak contraction mapping T: X → X such that the unique fixed-point of the mapping is the global minimum point of the function. The weak contraction mapping for the optimization problem can be implemented in the following way. First, provide one arbitrary initial point x 0 to the function and calculate the height L = f (x 0) of the point; this height is the corresponding contour level. Second, given the initial point, map to another point inside the contour.
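The display equations in the proofs above were lost in extraction. The core of the Cauchy argument, reconstructed from the stated definitions and consistent with the bound d(x_m, x_n) ≤ qⁿ D(X_0) quoted in the next paragraph, is:

```latex
% Sketch of the Cauchy argument behind Lemma 1.2 (reconstructed, not copied):
D(X_{i+1}) \le q\, D(X_i) \quad\Longrightarrow\quad D(X_n) \le q^{\,n} D(X_0).

% For m > n, both x_m and x_n lie in X_n, hence
d(x_m, x_n) \;\le\; D(X_n) \;\le\; q^{\,n} D(X_0).

% Given \varepsilon > 0, pick N with q^{\,N} D(X_0) < \varepsilon; then
% d(x_m, x_n) < \varepsilon for all m > n \ge N, so \{x_n\} is Cauchy.
```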
One practical way is to solve the equation f (x) = L and get n number of roots which locate on a contour(or contours) and then the average of these roots is the updated searching point. And then repeat these process until the iteration of searching point converge. This contour-based optimization algorithm utilizes the root-finding algorithm to solve the equation f (x) = L and get n number of roots. The starting point for the root-finding algorithm is generated by a random number generator. This stochastic process will help the roots to some extent widely distribute over the contour rather than concentrate on somewhere. The inequality d(x m, x n) ≤ q n D(X 0) indicates the rate of convergence, namely, the smaller q is the high rate of convergence will be achieved. Geometrically, the equation f (x) = L is the intersection of the objective function and a hyperplane whose height is L. We hope the hyperplane move downward in a big step during each iterate and the centroid of the contour refers to the most likely minimum position. Therefore, averaging the roots as an easy and effective way to map somewhere near the centroid. And there is a trade-off between the number of roots on the contour and the rate of convergence. The larger amount of roots on the contour, the more likely the average locates closer to the centroid of the contour, and then the less iterates are required for convergence. In another words, the more time spend on finding roots on a contour, the less time spend on the iteration, vice verse. The global minimum point x * is the fixed-point of the iteration x i+1 = T x i and solves the equation It is worth noting that the size of contour become smaller and smaller during the iterative process and eventually converge to a point, which is the minimum point of the function. DISPLAYFORM0 As shown in the previous examples, averaging the roots on the contour is an effective approach to map a point inside the interior of the lower space X ≤ when it is convex. However, in general situation, the space X ≤ is not guaranteed to be convex. In that case, it is important to decompose the lower space X ≤ into several convex subsets. In this study, the key intermediate step is to check whether two roots belong to the same convex subset and decompose all roots into several convex subsets accordingly. One practical way to achieve that is to pair each two roots and scan function's value along the segment between the two roots and check whether there exists a point higher than contour's level. Loosely speaking, if two roots belong to the same convex subset, the value of function along the segment is always lower than the contour's level. Otherwise, the value of function at somewhere along the segment will be higher than the contour's level. Traverse all the roots and apply this examination on them, then we can decompose the roots with regards to different convex subsets. This method is important to map a point insider interior of a contour and make hyperplane move downwards. To check whether two roots belong to the same convex subset, N number of random points along the segment between two roots are checked whether higher than the contour's level or not. When we want to check the function's value along the segment between r m and r n. The vector k = r m − r n is calculated so that the random point p i locate on the segment can be written as p i = r n + (r m − r n), ∈, where the is a uniform random number from 0 to 1. 
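A minimal sketch of this contour-based iteration, for a smooth objective whose lower set is convex, is given below: the contour level is the value of the current iterate, roots of f(x) = L are obtained by driving (f(x) − L)² to zero from random starting points, and the iterate jumps to their average. The choice of root finder and all function names are assumptions; the non-convex case (decomposing roots into convex subsets) is handled in the sketch after the next paragraph.

```python
import numpy as np
from scipy.optimize import minimize

def find_contour_roots(f, level, dim, n_roots, bounds, rng, tol=1e-6):
    """Collect points with f(x) ~= level by minimizing (f(x) - level)^2 from
    random starting points (one practical realization of the root-finding step;
    the paper does not commit to a specific solver)."""
    roots, attempts = [], 0
    lo, hi = bounds
    while len(roots) < n_roots and attempts < 50 * n_roots:
        attempts += 1
        x0 = rng.uniform(lo, hi, size=dim)
        res = minimize(lambda x: (f(x) - level) ** 2, x0, method="Nelder-Mead")
        if abs(f(res.x) - level) < tol:
            roots.append(res.x)
    return np.array(roots)

def contour_descent(f, x0, bounds, n_roots=10, max_iter=30, tol=1e-6, seed=0):
    """Weak-contraction-style iteration: cut f at the current level and jump to
    the average of the roots found on that contour (convex lower set assumed)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        level = f(x)
        roots = find_contour_roots(f, level, x.size, n_roots, bounds, rng)
        if len(roots) == 0:
            break
        x_new = roots.mean(axis=0)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage on a convex bowl: the iterates converge toward the origin.
f = lambda x: float(np.sum(np.asarray(x) ** 2))
print(contour_descent(f, x0=[2.0, -1.5], bounds=(-3.0, 3.0)))
```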
Then check whether the inequality holds for all random point such that f (p i) < f (r m), ∀i ≤ N. Obviously, the more random points on the segment are checked, the less likely the point higher than contour's level is missed(9; 10).After the set of roots are decomposed into several convex subsets, the averages of roots with regards to each subsets are calculated and the lowest one is returned as an update point from each iterate. Thereafter, the remaining calculation is repeat the iterate over and over until convergence and return the converged point as the global minimum. Nevertheless, the algorithm has been tested on Ackley function where the global minimum locates at that f = 0. And the first 6 iterates of roots and contours is shown FIG2 and the minimum point (-0.00000034,0.00000003) return by algorithm is shown in TABLE. 3. The test shows that the optimization algorithm is robust to local minima and able to achieve the global minimum convergence. The quest to find to the global minimum pays off handsomely. When the optimization method is tested on Ackley function, the average of roots and the level of contour for each iteration is shown above. In summary, the main procedure of the stochastic contour-based optimization method is decomposed into following steps: 1. Given the initial guess point x for the objective function and calculate the contour level L; 2. Solve the equation f (x) = L and get n number of roots. Decompose the set of roots into several convex subsets,return the lowest average of roots as an update point from each iterate; 3. Repeat the above iterate until convergence. The weak contraction mapping is a self mapping that always map to a subset of domain. Intriguingly, as an extension of Banach fixed-point theorem, the iteration of weak contraction mapping is a Cauchy and yields a unique fixed-point, which fit perfectly with the task of optimization. The global minimum convergence regardless of initial point position and local minima is very significant strength for optimization algorithm. We hope that the advanced optimization with the development of the weak contraction mapping can contribute to empower the modern calculation.
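The segment test described above can be written compactly. The greedy grouping strategy below is one way to "traverse all the roots"; both it and the function names are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def same_convex_subset(f, r_m, r_n, level, n_checks=100, rng=None):
    """Heuristic test for whether two contour roots lie in the same convex
    component of the lower set {x : f(x) <= level}: sample random points on
    the segment between them and reject if any sample rises above the level."""
    rng = rng or np.random.default_rng()
    for e in rng.uniform(0.0, 1.0, size=n_checks):
        p = r_n + e * (r_m - r_n)
        if f(p) > level:
            return False
    return True

def decompose_roots(f, roots, level, **kwargs):
    """Greedily group roots into (approximately) convex subsets via the segment test."""
    groups = []
    for r in roots:
        for g in groups:
            if all(same_convex_subset(f, r, other, level, **kwargs) for other in g):
                g.append(r)
                break
        else:
            groups.append([r])
    return groups
```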
A gradient-free method is proposed for non-convex optimization problems.
1,751
scitldr
Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy. Model-based control algorithms, such as model-predictive control (MPC) and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. However, like all gradient-based numerical optimization methods,model-based control methods are sensitive to intializations and are prone to becoming trapped in local minima. Deep reinforcement learning (DRL), on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling — at the expense of computational cost. In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL. We base our algorithm on the deep deterministic policy gradients (DDPG) algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints. Empirical show that our method boosts the performance of DDPGwithout sacrificing its robustness to local minima. In recent years, deep reinforcement learning (DRL) has emerged as a flexible and robust means of teaching simulated robots to complete complex tasks, from manipulation and locomotion (b), to navigating complex terrain . Compared with more direct optimization methods such as gradient descent or second-order optimization, DRL naturally incorporates exploration into its planning, allowing it to learn generalizable policies and robust state value estimations across simulated environments. Perhaps the most salient reason for DRL's surge in popularity is its ability to operate on black-box simulators where the underlying dynamics model is not available. DRL's model-free, Monte-Carlo-style methods have made it applicable to a wide range of physical (and non-physical) simulation environments, including those where a smooth, well-behaved dynamical model does not exist. This comes at two striking costs. First, such sampling procedures may be inefficient, requiring a large number of samples for adequate learning. Second, in order to be generally applicable to any model-free environment, underlying dynamical gradients are not used, even if they are available. In other words, valuable information that could greatly aid control tasks is not taken advantage of in these schemes. When an accurate model of robot dynamics is given, model-based methods such as model-predictive control (MPC) or trajectory optimization have historically been employed. These methods can solve tasks with higher sample efficiency than model-free DRL algorithms. Models provide access to ground-truth, analytical gradients of robot physics without the need for sample-based estimation. However, such methods don't incorporate exploration or learning into their procedures, and are especially prone to becoming trapped in poor local minima. While there has been a recent surge in fast and accurate differentiable simulators not previously available, most applications for control have relied on established local methods such as MPC (de), gradient descent , or trajectory optimization to solve control tasks. An ideal algorithm would exploit the efficiency of model-based methods while maintaining DRL's relative robustness to poor local minima. 
In this paper, we propose an actor-critic algorithm that leverages differentiable simulation and combines the benefits of model-based methods and DRL. We build our method upon standard actor-critic DRL algorithms and use true model gradients in order to improve the efficacy of learned critic models. Our main insights are twofold: First, gradients of critics play an important role in certain DRL algorithms, but optimization of these critics' gradients has not been explored by previous work. Second, the emergence of differentiable simulators enables computation of advantage estimation (AE) gradients with little additional computational overhead. Based on these observations, we present an algorithm that uses AE gradients in order to co-learn critic value and gradient estimation, demonstrably improving convergence of both actor and critic. In this paper, we contribute the following: 1) An efficient hybrid actor-critic method which builds upon deep deterministic policy gradients (DDPG, ), using gradient information in order to improve convergence in a simple way. 2) A principled mathematical framework for fitting critic gradients, providing a roadmap for applying our method to any deterministic policy gradient method, and 3) Demonstrations of our algorithm on seven control tasks, ranging from contact-free classic control problems to complex tasks with accurate, hard contact, such as the HalfCheetah, along with comparisons to both model-based control and DRL baselines. Model-based control methods peer into the underlying dynamics of a system in order to optimize control parameters. Such approaches have especially grown in popularity with the rise of differentiable simulators, whose application now spans rigid body simulation (de ;), soft body simulation , and fluid dynamics . Such simulators allow for backward propagating through an entire simulation, and even the simplest optimization algorithms (gradient descent) can be employed to a great effect. By treating robotic control tasks as a nonlinear numerical optimization problem, two more sophisticated strategies, trajectory optimization (; ; ; ;) and model-predictive control (MPC) , employ established numerical optimization techniques to efficiently solve complex, even high-dimensional control tasks. But, these methods are are prone to becoming trapped in poor local minima, especially in the presence of complex dynamics or terrain. Separate from model-based control methods, which assume the model of dynamics is known, modelbased learning methods attempt to learn a prior unknown model so as to directly optimize policies. In this paper, we do not focus on model-based learning, but rather the complementary problem, where we assume an accurate model of dynamics is given by a differentiable simulator and we use it to efficiently explore optimal control parameters. In contrast to the above local methods, stochastic global optimization methods, such as evolutionary algorithms (e.g., ) or simulated annealing, explore the solution space with a large number of samples in order to find global optima. Such methods, however, can be quite slow, due to the large number of samples they require. RL, while technically local, naturally incorporates sampling, exploration, and value function learning in a similar way that greatly mitigates optimization's propensity to becoming trapped in poor local minima. 
Our algorithm similarly incorporates exploration to mitigate the likelihood of poor local minima, while using gradient information to do so with high sample efficiency. Markov-Decision Problems (MDP) In this paper, we consider the rigid body simulation as a discrete-time Markov Decision Process (MDP) defined by (S, A, Γ, R), where S ⊆ R n is the state space, A ⊆ R m is the action space, Γ: S × A → S is the deterministic dynamic transition function, and R: S × A × S → R is the reward function. The optimal control policy π: S → A maximizes the γ-discounted sum of reward η(π) over a finite time horizon T, which is defined as: Here ρ 0 specifies the distribution of the initial state s 0 and r t = r(s t, a t, s t+1) represents the reward collected at time step t. We represent the policy function π as a neural network parameterized by θ and assume π θ is a deterministic policy that takes as input the current state s t and predicts the next action a t = π θ (s t). The action-value function Q π (s t, a t), which describes the expected return given the action a t and state s t at step t, is defined as in which a l ∼ π(s l) and s l ∼ Γ(s l−1, a l−1). The expectation can be removed in our case since both Γ and π are deterministic. This function is approximated by the critic network Q φ (s, a) parameterized with φ. Differentiable simulators In this paper, we focus on differentiable and deterministic rigid body simulators with fully observable states. At each time step t, the state s t fully characterizes the current status of the robot. The simulator takes current state s t and an action a t as input and returns the next state s t+1 by solving the governing physical equations. Additionally, we assume a differentiable simulator also returns a Jacobian matrix ∇ st,at s t+1. For all the differentiable rigid body simulators in this paper, the time and space complexities of computing the Jacobian matrix are the same as computing s t+1 in one forward step. Our hybrid algorithm takes inspiration from a subclass of modern actor-critic algorithms which seek to improve actor policies via critic gradients (such as DDPG). These algorithms iteratively improve the model of the critic, or estimated future return, using recorded data. Such algorithms implicitly assume that if the critic is well-fit to reality, then the gradients must also be well-fit. If we want to prioritize accurate critic gradients and better bootstrap our actor learning, it is best to ensure this directly. Specifically, we augment DDPG's critic gradient fitting step with a regularization term which prioritizes fitting accurate gradients in addition to accurate returns. Our choice of working in the domain of deterministic policy gradients is deliberate; as will be seen, this framework allows us to use our differentiable simulation engine to back propagate through the state-action pairs of our MDP, generating ground-truth reward gradients to fit. Algorithm 1 summarizes our method with our modifications to the original DDPG highlighted in color. Like actor-critic methods, our algorithm populates a replay buffer of state-action pairs from which to learn. At the same time, it records the dynamical Jacobian ∇ st,at s t+1 corresponding to those state-action pairs, without degrading its asymptotic time or space complexity. Like the state-action pairs, these Jacobians need only be computed during rollouts; they can be re-used during the embedded supervised training steps to efficiently improve the actor and critic networks. 
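To make the simulator interface described above concrete, the following is a minimal sketch (not from the paper) of a differentiable one-step dynamics function that returns both the next state and the Jacobian ∇ st,at s t+1 that is cached alongside each transition; the linear point-mass dynamics and all names are illustrative assumptions.

```python
import numpy as np

def step(s, a, dt=0.01):
    """Toy differentiable simulator step: s_{t+1} = s_t + a_t * dt.

    Returns the next state together with the Jacobians ds'/ds and ds'/da,
    mirroring the (state, Jacobian) interface assumed in the text. For this
    linear system the Jacobians are constant; a real simulator would return
    the linearization around (s, a)."""
    s_next = s + a * dt
    J_s = np.eye(len(s))        # d s_{t+1} / d s_t
    J_a = dt * np.eye(len(s))   # d s_{t+1} / d a_t (state and action dims match here)
    return s_next, J_s, J_a

# Example: cache the Jacobians together with the transition for the replay buffer.
s, a = np.array([-0.5]), np.array([1.0])
s_next, J_s, J_a = step(s, a)
```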
With additional Jacobian information about the dynamics at hand, we compute the gradients of the target value Q i. Since Q i defines the target value that the critic Q φ attempts to fit to, ∇Q i provides a good estimate of the desired ∇Q as well. This motivates us to use it as a regularizer in the critic loss (Line 10 in Algorithm 1) with two additional weights w 1 and w 2 corresponding to the weights on the two partial derivatives. The definition of ∇ si,ai Q i is provided by the following equation (for brevity, we use the network names to refer to their output values in the equation): ∇ si,ai Q i = ∇ si,ai r i + γ ∇ si+1 [Q′(s i+1, π′(s i+1))] ∇ si,ai s i+1, which can be evaluated in constant time (Figure 1). It is important to note that this computation is only possible when a differentiable simulator is available due to the need for ∇ si,ai s i+1. Time and space complexity Our modifications require extra storage space for two Jacobian matrices of the size dim(s) × dim(s) and dim(s) × dim(a) associated with each s i in the replay buffer, which is tolerable as long as the dimensions of s and a are small constants. This is the case, for example, in rigid robot control tasks. In terms of the time cost, computing ∇Q i makes efficient use of the cached data. As a result, its computation requires only a handful of matrix multiplications and back-propagation through Q and π. (Algorithm 1, fragment: Simulate the agent using a t = π(s t) + N t at each time step t until termination; update the critic parameter φ by minimizing the regularized loss; update the actor parameter θ using the deterministic policy gradient; update the target networks; end while.) We present a simple, analytical example with easily understandable dynamics in a low-dimensional space in order to illustrate the insights behind our algorithm. Consider a one dimensional kinematic mass point whose state s t and action a t are two scalars describing its position and velocity respectively. We define s 0 = −0.5 and a t ∈ [−1, 1]. The new state is computed by s t+1 = s t + a t ∆t where ∆t = 0.01 is a constant. The goal is to move the mass point to the origin s = 0 as quickly as possible. The simulation is terminated when |s t | < 0.01. The reward r(s, a, s') = 1/(s² + 1) − 1 encourages the mass point to converge to the origin in a minimal number of steps. To see how the regularizer affects the training of the critic network, we compare results from four settings of the (w 1, w 2) parameters: one corresponding to the original DDPG algorithm (no regularization); one which only regularizes ∇ siQi; one which only regularizes ∇ aiQi; and one which regularizes both ∇ siQi and ∇ aiQi. We plot the surfaces of the intermediate critic network Q, their ground truth, and the return-timestep curves in Figure 2. In particular, the surfaces of the ground truth Q for the optimal controller π are discounted returns directly measured from the forward simulation. Given these surfaces, the critic update steps in DDPG and the benefits of the regularizer can be explained visually: At each iteration, DDPG updates the critic network to fit the Q surface with points (s i, a i, Q i (s i, a i)), which are sampled from the replay buffer. Additionally, our hybrid method adds the regularizers to also fit the tangent directions of the Q surface at the sample points. The tangent direction of the curve Q(·, a i) at s = s i is equivalent to ∇ siQi. Since these tangent directions provide more information about the local shape of the Q surface, it is expected to boost the convergence of Q to its ground truth, which is reflected in Figure 2.
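As a rough illustration of the critic update described above, here is a sketch (in PyTorch, not the authors' implementation) of a gradient-regularized critic loss that fits both the TD target and its gradients, obtained by chaining the target network's gradient through the cached simulator Jacobians; the reward-gradient terms are omitted for brevity, and all names, shapes, and hyperparameters are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def regularized_critic_loss(Q, Q_tgt, pi_tgt, batch, w1=0.1, w2=0.1, gamma=0.99):
    # batch: s, a, r, s2 plus cached simulator Jacobians J_s = ds2/ds (B,n,n)
    # and J_a = ds2/da (B,n,m); all shapes and names are illustrative.
    s, a, r, s2, J_s, J_a = batch

    # TD target: Q_hat = r + gamma * Q'(s2, pi'(s2))
    s2 = s2.detach().requires_grad_(True)
    q2 = Q_tgt(s2, pi_tgt(s2)).squeeze(-1)
    q_hat = (r + gamma * q2).detach()

    # Gradient of the target w.r.t. s2 (includes the chain through pi'),
    # then chained through the simulator Jacobians to get d q_hat / d(s, a).
    dq2_ds2 = torch.autograd.grad(q2.sum(), s2)[0]              # (B, n)
    g_s = gamma * torch.einsum('bn,bnk->bk', dq2_ds2, J_s)      # reward grads omitted
    g_a = gamma * torch.einsum('bn,bnm->bm', dq2_ds2, J_a)

    # Critic prediction and its gradients w.r.t. (s, a).
    s_req = s.detach().requires_grad_(True)
    a_req = a.detach().requires_grad_(True)
    q = Q(s_req, a_req).squeeze(-1)
    dq_ds, dq_da = torch.autograd.grad(q.sum(), (s_req, a_req), create_graph=True)

    # Value-fitting term plus the two gradient regularizers
    # (w2 weights the state-gradient term, w1 the action-gradient term, per the text).
    return (F.mse_loss(q, q_hat)
            + w2 * F.mse_loss(dq_ds, g_s)
            + w1 * F.mse_loss(dq_da, g_a))
```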
We can also use this example to understand the effects of w 1 and w 2 on the convergence of Q: Figure 2 (c and d) demonstrate that w 2, or the weight for ∇ siQi, has a more evident effect on the convergence of Q. This can be partially explained by the fact that the ground truth Q surface has little variation in the a (action) direction, so knowing the partial derivative ∇ aiQi adds very little new information to the fitting problem. We test our hybrid algorithm on robot control tasks using a differentiable rigid body simulator, over three different types of settings related to the types of contact experienced: Contact-free simulation Many classical control examples are contact-free and useful for evaluating and understanding the performance of control algorithms (e.g. Pendulum, Acrobot, etc.). The equations of the motion of these problems can be compactly expressed in the form of the standard manipulator equations , allowing for the simple extraction of system gradients. Simulation with impulse-based collision responses Due to the significant changes in motion that occur during collisions, rigid body simulations with contacts possess much more complex dynamics than those which are contact-free. In this setting, we employ the Poisson collision model . During collision, an impulse is applied to the rigid body, instantaneously changing the velocity of the rigid body while conserving the momentum of the entire system. For simulating systems with such contacts, we follow (Popović et al., 2000). Simulation with constraint-based collision responses In order to simulate realistic motions for resting and collision contacts among rigid bodies, constraint-based methods formulate contacts as constrained systems, posing them as linear complementary problems (LCP) . Such simulation is accurate but much more complex than previous, simpler soft contact models. Inspired by the work of , we implemented a 2D LCP-based rigid body simulator and differentiate through contacts. Our hybrid control algorithm is implemented based on DDPG in OpenAI Baselines. Parameter space noise is applied to the actor network for exploration as proposed in. Slightly different from the mathematical definition, the critic network in this implementation takes normalized state and action as input, and outputs a normalized Q value in order to achieve task-irrelevant statistical stability. The mean value and standard deviation of state and Q value are empirically estimated from the replay buffer. Actions are scaled by a constant such that they fall in range [−1, 1]. Similarly, gradients in line 11 of Algorithm 1 are computed with normalized Q, states and actions. We present seven 2D control tasks implemented in our differentiable rigid body simulator. The tasks are of varying difficulty, including five classic control problems (CartPole, CartPoleSwingUp, Pendulum, MountainCar, and Acrobot), one problem with impulse-based collision response (RollingDie), and one complex control problem with LCP-based contact model (HalfCheetah). Classic control environments Our implementation of CartPole, CartPoleSwingUp, Pendulum, MountainCar, and Acrobot is based on the implementation of OpenAI Gym but with gradient information manually computed in the simulation. We ask readers to refer to for more details. In order to demonstrate applicability in the context of collisions, we designed a RollingDie task with impulse-based collision responses. The die is thrown from the air to the ground. 
Its initial state s 0 includes position in x and y, rotation angle, velocity in x and y, and angular velocity. The action a is the torque applied to the die, i.e., at each step the die can apply an internal torque so as to control its angular velocity (and thus its pose). The reward is defined based on the L2 norm of the distance between the die center and the target position. Though for most of its trajectory the die is free falling and thus its position cannot be impacted by the torque, the angular velocity and torque do impact the angle with which the die impacts the ground and the resulting bounce. This is a complex control task in which sparse, highly important moments (contact) can chaotically affect the trajectory of the die. Note that the die is allowed to bounce multiple times during the simulation, which increases the complexity of finding an optimized control. We designed a differentiable HalfCheetah example with LCP-based contacts. The HalfCheetah shares a similar structure to the one in MuJoCo but only has two links on each leg. The reward is defined as v x − 0.1 u T u, promoting forward velocity while penalizing highly actuated actions. Note that unlike MuJoCo's soft contact model, the hard contact model we implemented introduces additional complexity and higher-magnitude gradients, making it a more difficult control task. We compare our method with three baselines: DDPG, the RL algorithm that we base our method on; MPC with the iterative Linear Quadratic Regulator (iLQR), a state-of-the-art model-based control algorithm employed in previous work on differentiable simulators; and gradient descent (GD), which employs a neural network actor and Adam to directly optimize a final return using the analytical gradients (as in prior work). We repeat each algorithm 16 times on each task with different initializations. Initialization To ensure a fair comparison between all methods, we seed each algorithm with the same network models (if applicable). Since MPC does not require a neural network but directly solves for state and action variables, we initialize these variables by performing one rollout and collecting the state-action (s t, a t) pairs for initialization, using the same initial actor network. Metrics For methods using a neural network controller (ours, DDPG, and GD), we report the following metrics: return mean, which is the return from one episode generated by running the intermediate actor network every 800 timesteps, and return history, which is the average of the latest 100 return means. For MPC, as it solves a t sequentially, return mean is stored whenever a t is returned. All returns are rescaled linearly so that the average return of a random controller is mapped to 0 and max(r) × T is mapped to 1. Figure 4 and Table 1 summarize the performance and averaged rewards of each method on the seven environments. Overall, our algorithm improves the sampling efficiency of DDPG and often discovers strategies with higher return. Compared with MPC and GD, our method suffers less from local minima, especially for more complex tasks such as HalfCheetah. For all environments, the experiment is terminated when return mean plateaus in GD, when MPC converges, or when the number of timesteps reaches a pre-defined maximum number in DDPG and our algorithm. Unlike the other methods, MPC does not perform many sequential simulations, but rather performs one simulation optimized online.
In order to perform as fair a comparison as possible, we plot the predicted cumulative return (based on the previously simulated steps, current planned horizon, and remaining initialized action vector) as it evolves throughout the online planning procedure. As a result, MPC may terminate without using the same number of function evaluations as the RL algorithms or showing a sign of plateauing return-timestep curves. We do not compare MPC for the simpler control tasks (Pendulum, MountainCar, CartPole) that are trivial for the algorithm to solve. We also do not compare MPC for the Acrobot, to which the algorithm cannot be applied (discussed later). We discuss the performances of these methods in each environment below. Immediately obvious from our results is the fact that DDPG and our algorithm are both competitive on all problems presented, regardless of problem difficulty. While MPC dominates on the simplest control tasks, it struggles on the more complicated tasks with hard contacts, and DRL approaches dominate. This underscores our thesis: that DRL's exploration properties make it better suited than model-based approaches for problems with a myriad of poor local minima. More naïve model-based approaches, such as GD, can succeed when they begin very close to a local minimum, as is the case with CartPole, but show slow or no improvement in dynamical environments with nontrivial control schemes. This is especially apparent in problems where the optimal strategy requires robots to make locally suboptimal motions in order to build up momentum to be used to escape local minima later. Examples include Pendulum, CartPoleSwingUp, and MountainCar, where the robot must learn to build up momentum through local oscillations before attempting to reach a goal. GD further fails on complex physical control tasks like HalfCheetah, where certain configurations, such as toppling, can be unrecoverable. Finally, we note that although MPC is able to tractably find a good solution for the RollingDie problem, the complex nonlinearities in the contact-heavy dynamics require long planning horizons (100 steps, chosen by running hyperparameter search) in order to find a good trajectory. Thus, although MPC eventually converges to a control sequence with very high reward, it requires abundant computation to converge. DRL-based control approaches are able to find success on all problems, and are especially competitive on those with contact. Compared with DDPG, our hybrid algorithm universally converges faster or to higher returns. The rolling die example presents a particularly interesting contrast. As the die is randomly initialized, it is more valuable to aim for higher return history rather than return mean due to the large variance in the initial state distribution. It can be seen from Figure 4 that our method managed to reach a higher average return history over 16 runs. Manually visualizing the controller from the best run of our method revealed that it discovered a novel two-bounce strategy for challenging initial poses (Figure 3), while most of the strategies in DDPG typically leveraged one bounce only. There are a few other reasons why our algorithm may be considered superior to MPC. First, our algorithm is applicable to a wider range of reward structures. While we had planned to demonstrate MPC on another classic control problem, namely the Acrobot, MPC is inapplicable to this robot's reward structure. The Acrobot's rewards penalize it with −1 point for every second it has not reached its target pose.
MPC requires a differentiable reward, and this reward structure is not differentiable. Thus, our Hybrid DDPG algorithm applies to a wider range of problems than MPC. Second, closed-loop network controllers are naturally more robust than MPC. Even as noise is added or initial conditions and tasks change, learned controllers can generalize. While MPC can recover from these scenarios, it requires expensive replanning. In these scenarios, MPC becomes especially unattractive to deploy on physical hardware, where power and computational resource constraints can render MPC inapplicable to real-time applications. Figure 3: Visualization of the two-bounce strategy discovered by our algorithm. Solid red box: initial die. Dashed cyan curve: trajectory of the die. Blue box: the target zone. Light red boxes: states of the die at collisions and about to enter the target. In this paper, we have presented an actor-critic algorithm that uses AE gradients to co-learn critic value and gradient estimation and improve convergence of both actor and critic. Our algorithm leverages differentiable simulation and combines the benefits of model-based methods and DRL. We designed seven 2D control tasks with three different contact scenarios and compared our method with several state-of-the-art baseline algorithms. We demonstrated that our method boosts the performance of DDPG and is much less sensitive to local minima than model-based approaches. In the future, it would be interesting to see if our mathematical framework can be applied to improve the effectiveness of value functions used in other DRL algorithms. A APPENDIX In gradient-based optimization algorithms, the direction of the gradient often plays a more crucial role than its magnitude, especially when adaptive learning rates are applied. Here we experiment with the idea of penalizing the cosine angle between ∇Q φ and ∇Q instead of their L 2 distance in Algorithm 1. In particular, we replace the L 2 terms in line 10 with a cosine-distance penalty between the corresponding gradients. In all examples, our hybrid algorithm achieves similar or better performance than pure DDPG. Figure 5 shows the experimental results in our differentiable environments with the new loss function. For each environment, we started with the same parameters as in Figure 4 and decreased (w 1, w 2) ∈ {1, 0.1, 0.01} only if necessary. We ended up using the same parameters as before in MountainCar and Acrobot and switched weights in the other environments. Comparing Figure 4 with Figure 5, we see the cosine angle regularizer is more effective in some examples (particularly MountainCar and Acrobot) than the original L2 norm. The L 2 regularizer outperforms the cosine angle for CartPoleSwingUp. The two regularizers perform similarly on the remaining examples. Apart from DDPG, we have also implemented our method with the original L 2 -norm loss in the Soft Actor-Critic (SAC) algorithm. Figure 6 reports our experimental results in five examples, among which our method improves the performance of the original SAC in three of them (CartPoleSwingUp, Pendulum, and MountainCar) and performs similarly in the other two. These results demonstrate that our proposed modification generalizes to actor-critic methods other than DDPG. In all examples, our hybrid algorithm performs similarly to or better than pure SAC.
We propose a novel method that leverages the gradients from differentiable simulators to improve the performance of RL for robotics control
1,752
scitldr
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(e) = N(0,I), to a distribution q(t):= q(h(e)) over the parameters t of another neural network (the ``primary network). We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(t | D) via sampling. In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(t). In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection. Simple and powerful techniques for Bayesian inference of deep neural networks' (DNNs) parameters have the potential to dramatically increase the scope of applications for deep learning techniques. In real-world applications, unanticipated mistakes may be costly and dangerous, whereas anticipating mistakes allows an agent to seek human guidance (as in active learning), engage safe default behavior (such as shutting down), or use a "reject option" in a classification context. DNNs are typically trained to find the single most likely value of the parameters (the "MAP estimate"), but this approach neglects uncertainty about which parameters are the best ("parameter uncertainty"), which may translate into higher predictive uncertainty when likely parameter values yield highly confident but contradictory predictions. Conversely, Bayesian DNNs model the full posterior distribution of a model's parameters given the data, and thus provides better calibrated confidence estimates, with corresponding safety benefits BID9 BID0. 1 Maintaining a distribution over parameters is also one of the most effective defenses against adversarial attacks BID4.Techniques for Bayesian DNNs are an active research topic. The most popular approach is variational inference BID2 BID8, which typically restricts the variational posterior to a simple family of distributions, for instance a factorial Gaussian BID2 BID16. Unfortunately, from a safety perspective, variational approximations tend to underestimate uncertainty, by heavily penalizing approximate distributions which place mass in regions where the true posterior has low density. This problem can be exacerbated by using a restricted family of posterior distribution; for instance a unimodal approximate posterior will generally only capture a single mode of the true posterior. With this in mind, we propose learning an extremely flexible and powerful posterior, parametrized by a DNN h, which we refer to as a Bayesian hypernetwork in reference to BID17.A Bayesian hypernetwork (BHN) takes random noise ∼ N (0, I) as input and outputs a sample from the approximate posterior q(θ) for another DNN of interest (the "primary network"). The key insight for building such a model is the use of an invertible hypernet, which enables Monte Carlo estimation of the entropy term − logq(θ) in the variational inference training objective. We begin the paper by reviewing previous work on Bayesian DNNs, and explaining the necessary components of our approach (Section 2). 
Then we explain how to compose these techniques to yield Bayesian hypernets, as well as design choices which make training BHNs efficient, stable and robust (Section 3). Finally, we present experiments which validate the expressivity of BHNs, and demonstrate their competitive performance across several tasks (Section 4). We begin with an overview of prior work on Bayesian neural networks in Section 2.1 before discussing the specific components of our technique in Sections 2.2 and 2.3. Bayesian DNNs have been studied since the 1990s BID30 BID29. For a thorough review, see BID8. Broadly speaking, existing methods either 1) use Markov chain Monte Carlo BID39 BID30, or 2) directly learn an approximate posterior distribution using (stochastic) variational inference BID16 BID9 BID2, expectation propagation BID19 BID36, or α-divergences BID27. We focus here on the most popular approach: variational inference. Notable recent work in this area includes BID9, who interprets the popular dropout BID37 algorithm as a variational inference method ("MC dropout"). This has the advantages of being simple to implement and allowing cheap samples from q(θ). A related approach emulates Gaussian dropout, but yields a unimodal approximate posterior, and does not allow arbitrary dependencies between the parameters. The other important points of reference for our work are Bayes by Backprop (BbB) BID2, and multiplicative normalizing flows BID28. Bayes by Backprop can be viewed as a special instance of a Bayesian hypernet, where the hypernetwork only performs an element-wise scale and shift of the input noise (yielding a factorial Gaussian distribution). More similar is the work of BID28, who propose and dismiss BHNs due to the issues of scaling BHNs to large primary networks, which we address in Section 3.3. Instead, in their work, they use a hypernet to generate scaling factors z on the means µ of a factorial Gaussian distribution. Because z follows a complicated distribution, this forms a highly flexible approximate posterior: q(θ) = ∫ q(θ|z)q(z)dz. However, this approach also requires them to introduce an auxiliary inference network to approximate q(z|θ) in order to estimate the entropy term of the variational lower bound, resulting in a lower bound on the variational lower bound. Finally, the variational autoencoder (VAE) BID21 BID23 family of generative models is likely the best known application of variational inference in DNNs, but note that the VAE is not a Bayesian DNN in our sense. VAEs approximate the posterior over latent variables, given a datapoint; Bayesian DNNs approximate the posterior over model parameters, given a dataset. A hypernetwork BID17 BID3 BID1 is a neural net that outputs parameters of another neural net (the "primary network"). The hypernet and primary net together form a single model which is trained by backpropagation. The number of parameters of a DNN scales quadratically in the number of units per layer, meaning naively parametrizing a large primary net requires an impractically large hypernet. One method of addressing this challenge is Conditional Batch Norm (CBN) BID7, and the closely related Conditional Instance Normalization (CIN) BID20 BID38, and Feature-wise Linear Modulation (FiLM) BID31 BID26, which can be viewed as specific forms of a hypernet. In these works, the weights of the primary net are parametrized directly, and the hypernet only outputs scale (γ) and shift (β) parameters for every neuron; this can be viewed as selecting which features are significant (scaling) or present (shifting).
In our work, we employ the related technique of weight normalization, which normalizes the input weights for every neuron and introduces a separate parameter g for their scale. Our proposed Bayesian hypernetworks employ a differentiable directed generator network (DDGN) BID15 as a generative model of the primary net parameters. DDGNs use a neural net to transform simple noise (most commonly isotropic Gaussian) into samples from a complex distribution, and are a common component of modern deep generative models such as variational autoencoders (VAEs) BID23 BID21 and generative adversarial networks (GANs) BID13 BID12. We take advantage of techniques for invertible DDGNs developed in several recent works on generative modeling BID5 and variational inference of latent variables BID32. Training these models uses the change of variables formula, which involves computing the log-determinant of the inverse Jacobian of the generator network. This computation involves a potentially costly matrix determinant, and these works propose innovative architectures which reduce the cost of this operation but can still express complicated deformations, which are referred to as "normalizing flows". We now describe how variational inference is applied to Bayesian deep nets (Section 3.1), and how we compose the methods described in Sections 2.2 and 2.3 to produce Bayesian hypernets (Section 3.2). In variational inference, the goal is to maximize a lower bound on the marginal log-likelihood of the data, log p(D), under some statistical model. This involves both estimating parameters of the model, and approximating the posterior distribution over unobserved random variables (which may themselves also be parameters, e.g., as in the case of Bayesian DNNs). Let θ ∈ R D be parameters given the Bayesian treatment as random variables, D a training set of observed data, and q(θ) a learned approximation to the true posterior p(θ|D). Since the KL divergence is always non-negative, we have, for any q(θ): log p(D) ≥ E q(θ) [log p(D|θ) + log p(θ) − log q(θ)]. The right hand side of this inequality is the evidence lower bound, or "ELBO". The above derivation applies to any statistical model and any dataset. In our experiments, we focus on modeling conditional likelihoods p(D) = p(Y|X). Using the conditional independence assumption, we further decompose log p(D|θ) := log p(Y|X, θ) as Σ n i=1 log p(y i |x i, θ), and apply stochastic gradient methods for optimization. Computing the expectation in the ELBO is generally intractable for deep nets, but can be estimated by Monte Carlo sampling. For a given value of θ, log p(D|θ) and log p(θ) can be computed and differentiated exactly as in a non-Bayesian DNN, allowing training by backpropagation. The entropy term E q [− log q(θ)] is also straightforward to evaluate for simple families of approximate posteriors such as Gaussians. Similarly, the likelihood of a test data-point under the predictive posterior using S samples can be estimated using Monte Carlo: p(y|x, D) ≈ (1/S) Σ S s=1 p(y|x, θ (s)), with θ (s) ∼ q(θ). Bayesian hypernets (BHNs) express a flexible q(θ) by using a DDGN (Section 2.3), h : R D → R D, to transform random noise ε ∼ N (0, I D) into independent samples from q(θ). This makes it cheap to compute Monte Carlo estimates of expectations with respect to q; these include the ELBO, and its derivatives, which can be backpropagated to train the hypernet h. Since BHNs are both trained and evaluated via samples of q(θ), expressing q(θ) as a generative model is a natural strategy.
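As a concrete illustration of the Monte Carlo estimate of the predictive posterior described above, the following sketch (assumed interfaces, not the authors' code) draws S weight samples from the hypernet and averages the resulting likelihoods for a test point:

```python
import torch

def predictive_log_likelihood(hypernet, primary_net, x, y, n_samples=20):
    """Sketch: p(y|x, D) ≈ (1/S) Σ_s p(y|x, θ_s), with θ_s = h(ε_s), ε_s ~ N(0, I).

    `hypernet(eps)` returning a parameter sample, `hypernet.noise_dim`, and
    `primary_net(x, theta)` returning class logits are assumed, illustrative
    interfaces."""
    log_probs = []
    for _ in range(n_samples):
        eps = torch.randn(hypernet.noise_dim)          # ε ~ N(0, I)
        theta = hypernet(eps)                          # θ = h(ε), one sample from q(θ)
        logits = primary_net(x, theta)                 # primary net run with sampled weights
        log_probs.append(torch.distributions.Categorical(logits=logits).log_prob(y))
    stacked = torch.stack(log_probs, dim=0)            # (S, batch)
    # log of the sample-averaged likelihood: logsumexp over S minus log S
    return torch.logsumexp(stacked, dim=0) - torch.log(torch.tensor(float(n_samples)))
```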
However, while DDGNs are convenient to sample from, computing the entropy term (E q [− log q(θ)]) of the ELBO additionally requires evaluating the likelihood of generated samples, and most popular DDGNs (such as VAEs and GANs) do not provide a convenient way of doing so. In general, these models can be many-to-one mappings, and computing the likelihood of a given parameter value requires integrating over the latent noise variables ε ∈ R D: q(θ) = ∫ q(θ|ε) q(ε) dε. To avoid this issue, we use an invertible h, allowing us to compute q(θ) simply by using the change of variables formula: q(θ) = q(ε) |det(∂h(ε)/∂ε)| −1, where q(ε) is the distribution of ε and θ = h(ε). As discussed in Section 2.3, a number of techniques have been developed for efficiently training such invertible DDGNs. In this work, we employ both RealNVP (RNVP) BID6 and Inverse Autoregressive Flows (IAF). Note that the latter can be efficiently applied, since we only require the ability to evaluate the likelihood of generated samples (not arbitrary points in the range of h, as in generative modeling applications, e.g., BID6); and this also means that we can use a lower-dimensional ε to generate samples along a submanifold of the entire parameter space, as detailed below. In order to scale BHNs to large primary networks, we use the weight normalization reparametrization: θ j = g j v j / ||v j ||, where θ j are the input weights associated with a single unit j in the primary network. We only output the scaling factors g from the hypernet, and learn a maximum likelihood estimate of v. This allows us to overcome the computational limitations of naively-parametrized BHNs noted by BID28, since computation now scales linearly, instead of quadratically, in the number of primary net units. [Figure caption (fragment): the first subplot corresponds to BID2; in the second subplot, we place a prior on the scaling factor g and infer the posterior distribution using a BHN, while in the third subplot the hypernet is used to generate the whole weight matrices of the primary net. Each shaded region represents half a standard deviation in the posterior on the predictive mean. The red crosses are 50 examples from the training dataset.] Using this parametrization restricts the family of approximate posteriors, but still allows for a high degree of multimodality and dependence between the parameters. We also employ weight normalization within the hypernet, and found this stabilizes training dramatically. Initialization plays an important role as well; we recommend initializing the hypernet weights to small values to limit the impact of noise at the beginning of training. We also find clipping the outputs of the softmax to be within (0.001, 0.999) critical for numerical stability. We perform experiments on MNIST, CIFAR10, and a 1D regression task. There is no single metric for how well a model captures uncertainty; to evaluate our model, we perform experiments on regularization (Section 4.2), active learning (Section 4.3), anomaly detection (Section 4.4), and detection of adversarial examples (Section 4.5). Active learning and anomaly detection problems make natural use of uncertainty estimates: In anomaly detection, higher uncertainty indicates a likely anomaly. In active learning, higher uncertainty indicates a greater opportunity for learning. Parameter uncertainty also has regularization benefits: integrating over the posterior creates an implicit ensemble. Intuitively, when the most likely hypothesis predicts "A", but the posterior places more total mass on hypotheses predicting "B", we prefer predicting "B".
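To make the change-of-variables sampling and the weight-norm parametrization described earlier in this section concrete, here is a small sketch (assumed interfaces, not the authors' implementation) of drawing the scaling factors g from an invertible hypernet while tracking log q(g), and expanding them into primary-net weights:

```python
import torch

def sample_scales_and_logq(flow, eps_dim):
    """Sample g = h(ε) and compute log q(g) = log q(ε) - log|det ∂h(ε)/∂ε|.

    `flow` is an assumed invertible module (e.g. a RealNVP/IAF-style stack)
    that returns both its output and the log-determinant of its Jacobian."""
    eps = torch.randn(eps_dim)
    g, log_det = flow(eps)
    log_q_eps = torch.distributions.Normal(0.0, 1.0).log_prob(eps).sum()
    return g, log_q_eps - log_det

def weight_norm_weights(g, v):
    """Weight-norm parametrization from the text: θ_j = g_j * v_j / ||v_j||.

    g: (units,) scales sampled by the hypernet; v: (units, fan_in) direction
    parameters learned by maximum likelihood."""
    return g.unsqueeze(1) * v / v.norm(dim=1, keepdim=True)
```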
By improving our estimate of the posterior, we more accurately weigh the evidence for different hypotheses. Adversarial examples are an especially difficult kind of anomaly designed to fool a classifier, and finding effective defenses against adversarial attacks remains an open challenge in deep learning. For the hypernet architecture, we try both RealNVP BID6 and IAF with MADE BID11, with 1-layer ReLU-MLP coupling functions with 200 hidden units (each). In general, we find that IAF performs better. We use an isotropic standard normal prior on the scaling factors (g) of the weights of the network. We also use Adam with default hyper-parameter settings BID22 and gradient clipping in all of our experiments. Our mini-batch size is 128, and to reduce computation, we use the same noise-sample (and thus the same primary net parameters) for all examples in a mini-batch. We experimented with independent noise, but did not notice any benefit. Our baselines for comparison are Bayes by Backprop (BbB) BID2, MC dropout (MCdropout) BID9, and non-Bayesian DNN baselines (with and without dropout). We first demonstrate the behavior of the network on the toy 1D-regression problem from BID2 in FIG0. As expected, the uncertainty of the network increases away from the observed data. We also use this experiment to evaluate the effects of our proposal for scaling BHNs via the weight norm parametrization (Section 3.3) by comparing with a model which generates the full set of parameters, and find that the two models produce very similar results, suggesting that our proposed method strikes a good balance between scalability and expressiveness. Next, we demonstrate the distinctive ability of Bayesian hypernets to learn multi-modal, dependent distributions. FIG4 (appendix) shows that BHNs do learn approximate posteriors with dependence between different parameters, as measured by the Pearson correlation coefficient. Meanwhile, FIG1 shows that BHNs are capable of learning multimodal posteriors. For this experiment, we trained an over-parametrized linear (primary) network: ŷ = a · b · x on a dataset generated as y = x + ε, and the BHN learns to capture both the modes a = b = 1 and a = b = −1. We now show that BHNs act as a regularizer, outperforming dropout and traditional mean field (BbB). Results are presented in TAB0. In our experiments, we find that BHNs perform on par with dropout on full datasets of MNIST and CIFAR10; furthermore, increasing the flexibility of the posterior by adding more coupling layers improves performance, especially compared with models with 0 coupling layers, which cannot model dependencies between the parameters. We also evaluate on a subset of MNIST (the first 5,000 examples); results are presented in the last two columns of TAB0. Replicating these experiments (with 8 coupling layers) for 10 trials yields Figure 3. In these MNIST experiments, we use MLPs with 2 hidden layers of 800 or 1200 hidden units each. For CIFAR10, we train a convolutional neural net (CNN) with 4 hidden layers of channels, 2 × 2 max pooling after the second and the fourth layers, filter size of 3, and a single fully connected layer of 512 units. Figure 4: Active learning: Bayesian hypernets outperform other approaches after sufficient acquisitions when warm-starting (left), for both the random acquisition function (top) and the BALD acquisition function (bottom). Warm-starting improves stability for all methods, but hurts performance for the other approaches, compared with randomly re-initializing parameters as in prior work (right).
We also note that the baseline model (no dropout) is competitive with MCdropout, and outperforms the Dropout baseline reported in prior work. These curves are the average of three experiments. We now turn to active learning, where we compare to the MNIST experiments of prior work, replicating their architecture and training procedure. Briefly, they use an initial dataset of 20 examples (2 from each class), and acquire 10 new examples at a time, training for 50 epochs between each acquisition. While the original experiments re-initialize the network after every acquisition, we found that "warm-starting" from the current learned parameters was essential for good performance with BHNs, although it is likely that longer training or better initialization schemes could perform the same role. Overall, warm-started BHNs suffered at the beginning of training, but outperformed all other methods for moderate to large numbers of acquisitions. For anomaly detection, we take BID18 as a starting point, and perform the same suite of MNIST experiments, evaluating the ability of networks to determine whether an input came from their training distribution ("Out of distribution detection"). BID18 found that the confidence expressed in the softmax probabilities of a (non-Bayesian) DNN trained on a single dataset provides a good signal for both of these detection problems. We demonstrate that Bayesian DNNs outperform their non-Bayesian counterparts. Just as in active learning, in anomaly detection, we use MC to estimate the predictive posterior, and use this to score datapoints. For active learning, we would generally like to acquire points where there is higher uncertainty. In a well-calibrated model, these points are also likely to be challenging or anomalous examples, and thus acquisition functions from the active learning literature are good candidates for scoring anomalies. We consider all of the acquisition functions listed in prior work as possible scores for the Area Under the Curve (AUC) of Precision-Recall (PR) and Receiver Operating Characteristic (ROC) metrics, but found that the maximum confidence of the softmax probabilities (i.e., "variation ratio") acquisition function used by BID18 gave the best performance. Both BHN and MCdropout achieve significant performance gains over the non-Bayesian baseline, and MCdropout performs significantly better than BHN in this task. Results are presented in TAB1. Second, we follow the same experimental setup, using all the acquisition functions, and exclude one class in the training set of MNIST at a time. We take the excluded class of the training data as out-of-distribution samples. The results are presented in TAB2 (Appendix). This experiment shows the benefit of using scores that reflect dispersion in the posterior samples (such as mean standard deviation and BALD value) in Bayesian DNNs. Finally, we consider this same anomaly detection procedure as a novel tool for detecting adversarial examples. Our setup is similar to BID27 and BID28, where it is shown that when more perturbation is added to the data, model uncertainty increases and then drops. We use the Fast Gradient Sign method (FGS) BID14 for the adversarial attack, and use one sample of our model to estimate the gradient. We find that, compared with dropout, BHNs are less confident on data points which are far from the data manifold. In particular, BHNs constructed with IAF consistently outperform RealNVP-BHNs and dropout in detecting adversarial examples and errors. Results are shown in FIG3.
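For concreteness, the following sketch (not the authors' code) shows how acquisition-style scores such as the variation ratio, BALD, and mean standard deviation mentioned above can be computed from Monte Carlo softmax samples and used to rank inputs as anomalous; the exact definitions used in the experiments may differ slightly.

```python
import torch

def uncertainty_scores(prob_samples):
    """`prob_samples`: tensor of shape (S, N, C) holding softmax probabilities for
    N inputs under S sampled weight configurations (or dropout masks)."""
    mean_p = prob_samples.mean(dim=0)                                # (N, C)
    variation_ratio = 1.0 - mean_p.max(dim=1).values                 # low max confidence = anomalous
    pred_entropy = -(mean_p * mean_p.clamp_min(1e-8).log()).sum(1)   # entropy of the mean prediction
    expected_entropy = -(prob_samples * prob_samples.clamp_min(1e-8).log()).sum(2).mean(0)
    bald = pred_entropy - expected_entropy                           # mutual information (epistemic)
    mean_std = prob_samples.std(dim=0).mean(dim=1)                   # "Mean STD" score
    return variation_ratio, bald, mean_std
```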
We introduce Bayesian hypernets (BHNs), a new method for variational Bayesian deep learning which uses an invertible hypernetwork as a generative model of parameters. BHNs feature efficient training and sampling, and can express complicated multimodal distributions, thereby addressing issues of overconfidence present in simpler variational approximations. We present a method of parametrizing BHNs which allows them to scale successfully to real world tasks, and show that BHNs can offer significant benefits over simpler methods for Bayesian deep learning. Future work could explore other methods of parametrizing BHNs, for instance using the same hypernet to output different subsets of the primary net parameters. [Footnote: we found the BALD values our implementation computes provide a better-than-random acquisition function (compare the blue line in the top and bottom plots).] [Footnote: BID27 and BID28 used 10 and 1 model samples, respectively, to estimate the gradient. We report the results with 1 sample; results with more samples are given in the appendix.] [Figure caption: when more perturbation is added to the data (left), uncertainty measures also increase (first row). In particular, the BALD and Mean STD scores, which measure epistemic uncertainty, are strongly increasing for BHNs, but not for dropout. The second and third row plots show results for adversary detection and error detection (respectively) in terms of the AUC of ROC (y-axis) with increasing perturbation along the x-axis. Gradient direction is estimated with one Monte Carlo sample of the weights/dropout mask.] Appendix A. We replicate the anomaly detection experiments with unseen classes of MNIST. Here we use 32 samples to estimate the gradient direction with respect to the input. A better estimate of the gradient amounts to a stronger attack, so accuracy drops lower for a given step size, while an adversarial example can be more easily detected with a more informative uncertainty measure. In this paper, we employ weight normalization in the primary network, treating (only) the scaling factors g as random variables. We choose an isotropic Gaussian prior for g: p(g) = N (g; 0, λI), which results in an L 2 weight-decay penalty on g or, equivalently, on the norm of w = g v/||v||. The resulting training objective is the variational bound log p(D) ≥ E ε∼q(ε), g=h φ (ε) [log p(D|g; v, b) + log p(g) − log q(ε) + log |det ∂h φ (ε)/∂ε|], where v and b are the direction and bias parameters of the primary net, and φ denotes the parameters of the hypernetwork. We optimize this bound with respect to {v, b, φ} during training.
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.
1,753
scitldr
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss. The ability of the brain to model the world arose from the process of evolution. It evolved because it helped organisms to survive and thrive in their particular environments and not because such forward prediction was explicitly optimized for. In contrast to the emergent neural representations in nature, current world model approaches are often directly rewarded for their ability to predict future states of the environment. While it is undoubtedly useful to be able to explicitly encourage a model to predict what will happen next, in this paper we are interested in what type of representations can emerge from the less directed process of artificial evolution and what ingredients might be necessary to encourage the emergence of such predictive abilities. In particular, we build on the recently introduced world model architecture. This agent model contains three different components: a visual module, mapping high-dimensional inputs to a lower-dimensional representative code, an LSTM-based memory component, and a controller component that takes input from the visual and memory module to determine the agent's next action. In the original approach, each component of the world model was trained separately, each to perform a different and specialised function, such as predicting the future. While later work demonstrated that these models can also be trained end-to-end through a population-based genetic algorithm (GA) that exclusively optimizes for final performance, the approach was only applied to the simpler 2D car racing domain, and it is an open question how such an approach will scale to the more complex 3D VizDoom task that first validated the effectiveness of the world model approach. Here we show that a simple genetic algorithm fails to find a solution to the VizDoom task and ask the question: what are the missing ingredients necessary to encourage the evolution of more powerful world models? The main insight in this paper is that we can view the optimization of a heterogeneous neural network (such as world models) as a co-evolving system of multiple different sub-systems. The other important insight is that representational innovations discovered in one subsystem (e.g. the visual system learns to track moving objects) require the other sub-systems to adapt. In fact, if the other systems are not given time to adapt, such an innovation will likely initially have an adverse effect on overall performance! In order to optimize such co-evolving heterogeneous neural systems, we propose to reduce the selection pressure on individuals whose visual or memory system was recently changed, giving the controller component time to readapt.
This Deep Innovation Protection (DIP) approach is inspired by the recently introduced morphological innovation protection method, which allows for the scalable co-optimization of controllers and robot body plans. Our approach is able to find a solution to the VizDoom: Take Cover task, which was first solved by the original world model approach. More interestingly, the emergent world models learned to predict events important for the survival of the agent, even though they were not explicitly trained to predict the future. Additionally, our investigations into the training process show that DIP allows evolution to carefully orchestrate the training of the components in these heterogeneous architectures. We hope this work inspires more research that focuses on investigating representations emerging from approaches that do not necessarily only rely on gradient-based optimization. The hypothesis in this paper is that optimizing complex world models end-to-end for more complex tasks requires each of their components to be carefully tuned to work well together. For example, an innovation in the visual or memory component could adversely impact the controller component, leading to reduced performance. However, in the long run such an innovation could allow an individual to outperform its predecessors. The agent model is based on the world model approach. The network includes a sensory component, implemented as a variational autoencoder (VAE) that compresses the high-dimensional sensory information into a smaller 32-dimensional representative code (Fig. 1). This code is fed into a memory component based on a recurrent LSTM, which should predict future representative codes based on previous information. Both the output from the sensory component and the memory component are then fed into a controller that decides on the action the agent should take at each time step. Following earlier work, we train these world models end-to-end with a genetic algorithm, in which mutations add Gaussian noise to the parameter vectors of the networks: θ' = θ + σε, where ε ∼ N (0, I). The approach introduced in this paper aims to train heterogeneous neural systems end-to-end by temporally reducing the selection pressure on individuals with recently changed modules, allowing other components to adapt. For example, in the case of the world model, in which a mutation can either affect the VAE, MDN-RNN or controller, selection pressure should be reduced if a mutation affects the VAE or MDN-RNN, giving the controller time to readapt to the changes in the learned representation. Inspired by the multi-objective morphological innovation protection mentioned above, we employ the well-known multiobjective optimization approach NSGA-II, in which a second "age" objective keeps track of when a mutation changes either the VAE or the MDN-RNN. Every generation an individual's age is increased by 1; however, if a mutation changes the VAE or MDN-RNN, this age objective is set to zero (lower is better). Therefore, if two neural networks reach the same performance (i.e. the same final reward), the one that had less time to adapt (i.e. whose age is lower) would have a higher chance of being selected for the next generation. The second objective is the accumulated reward received during an episode. Pseudocode of the approach applied to world models is shown in Algorithm 1.
It is important to note that this approach is different from the traditional usage of "age" in multiobjective optimization, in which age is used to increase diversity and keeps track of how long individuals have been in the population. In the approach in this paper, age counts how many generations the controller component of an individual had time to adapt to an unchanged visual and memory system. In the original world model approach the visual and memory component were trained separately and through unsupervised learning based on data from random rollouts. In this paper they are optimized through a genetic algorithm without evaluating each component individually. In other words, the VAE is not directly optimized to reconstruct the original input data and neither is the memory component optimized to predict the next time step; the whole network is trained in an end-to-end fashion. Here we are interested in what type of neural representations emerge by themselves that allow the agent to solve the given task. Following the original world model approach, in the experiments presented here an agent is trained to solve the car racing task and the more challenging VizDoom task from 64×64 RGB pixel inputs (Fig. 2). (Algorithm 1, fragment: Assign ranks based on Pareto fronts; 9: Generate set of non-dominated solutions; 10: Add solutions, starting from the first front, until the number of solutions = N; 11: Generate child population through binary tournament selection and mutations; 12: Reset age to 0 for all individuals whose VAE or MDN-RNN was mutated; 13: end for.) (Figure caption: A visual component (the encoder of a variational autoencoder) produces a latent code z t at each time step t, which is concatenated with the hidden state h t of the LSTM-based memory component that takes z t and the previously performed action a t−1 as input. The combined vector (z t, h t) is input into the controller component to determine the next action of the agent. In this paper, the agent model is trained end-to-end with a multiobjective genetic algorithm.) In the continuous control task CarRacing-v0 the agent is presented with a new procedurally generated track every episode, receiving a reward of -0.1 every frame and a reward of +100/N for each visited track tile, where N is the total number of tiles in the track. The network controlling the agent (Fig. 1) has three outputs to control left/right steering, acceleration and braking. Further details on the network model, which is the same for both domains, can be found in the Appendix.
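A compact sketch (illustrative only, with a simplified selection rule standing in for full NSGA-II non-dominated sorting) of the DIP mutation and age bookkeeping described above might look as follows:

```python
import numpy as np

def mutate(genome, sigma=0.03, rng=np.random):
    """One DIP-style mutation: perturb a single randomly chosen component with
    Gaussian noise and reset the age objective if the VAE or MDN-RNN changed.
    `genome` is an assumed dict of flat parameter vectors plus an 'age' counter."""
    child = {k: (v.copy() if isinstance(v, np.ndarray) else v) for k, v in genome.items()}
    part = rng.choice(['vae', 'rnn', 'controller'])
    child[part] = child[part] + sigma * rng.standard_normal(child[part].shape)
    # Simplification: in the full algorithm every individual's age grows by 1 per
    # generation and is reset only when the VAE or MDN-RNN is mutated.
    child['age'] = 0 if part in ('vae', 'rnn') else genome['age'] + 1
    return child

def tournament_select(population, rng=np.random):
    """2-way tournament preferring higher reward and lower age. The paper ranks
    individuals with NSGA-II over (reward, age); this lexicographic rule is
    only a stand-in for the sketch."""
    i, j = rng.choice(len(population), size=2, replace=False)
    return max(population[i], population[j],
               key=lambda ind: (ind['reward'], -ind['age']))
```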
The idea behind this approach is that the controller should get time to readapt if one of the components that precede it in the network change. Here the age objective is set to zero if the controller changes. This setting tests if protecting components upstream can be effective in optimizing heterogeneous neural models. Figure 2: In the CarRacing-v0 task the agent has to learn to drive across many procedurally generated tracks as fast as possible from 64 ×64 RGB color images. In the VizDoom: Take Cover domain the agent has to learn to avoid fireballs and to stay alive as long as possible. This setup is the same as the controller protection approach but we additionally reset age if the MDN-RNN changes. On average, this treatment will reset the age objective as often as DIP. 4. Random age objective: In this setup the age objective is assigned a random number between at each evaluation. This treatment tests if better performance can be reached just through introducing more diversity in the population. 5. Standard GA -no innovation protection: In this non-multi-objective setup, which is the same one as introduced in 1, only the accumulated reward is taken into account when evaluating individuals. For all treatments, a mutation has an equal probability to either mutate the visual, memory, or controller component of the network. Interestingly, while reported that this approach performs similarly well to an approach that always mutates all components, we noticed that it performs significantly worse in the more complicated VizDoom domain. This suggests that the more complex the tasks, the more important it is to be able to fine-tune individual components in the overall world model architecture. All are averaged over ten independent evolutionary runs. In the car racing domain we find that there is no noticeable difference between an approach with and without innovation protection Figure 4: Still frames of a learned policy. The evolved champion learned to primarily pay attention to the walls and fireballs, while ignoring the floor and ceiling. Interestingly the agent also seems to pay attention to the health and ammo indicator. and both can solve the domain. However, in the more complex VizDoom task, the DIP approach that protects innovations in both VAE and MDN-RNN, outperforms all other approaches (Fig. 3a). The approach is able to find a solution to the task, effectively avoiding fireballs and reaching an average score of 824.33 (sd=491.59). To better understand the decision-making process of the agent, we calculate perturbation-based saliency maps (see Appendix for details) to determine the parts of the environment the agent is paying most attention to (Fig. 4). The idea behind perturbation-based saliency maps is to measure to what extent the output of the model changes if parts of the input image are altered . Not surprisingly, the agent learned to pay particular attention to the walls, fireballs, and the position of the monsters. The better performance of the random age objective compared to no innovation protection suggests that increasing diversity in the population improves performance but less effectively than selectivity resetting the age objective as in DIP. Interestingly, the controller and the MDN-RNN&Controller protection approach perform less well, confirming our hypothesis that it is important to protect innovations upstream in the network for downstream components. 
We further investigate what type of world model can emerge from an evolutionary process that does not directly optimize for forward prediction or reconstruction loss. To gain insights into the learned representations we employ the t-SNE dimensionality reduction technique, which has proven valuable for visualizing the inner workings of deep neural networks. We are particularly interested in the information contained in the compressed 32-dimensional vector of the VAE and the information stored in the hidden states of the MDN-RNN (which are both fed into the controller that decides on the agent's action). Different combinations of sequences of these latent vectors collected during one rollout are visualized in two dimensions in Fig. 5. Interestingly, while the 32-dimensional z vector from the VAE does not contain enough information to infer the correct action, either the hidden state alone or in combination with z results in grouping the states into two distinct classes (one for moving left and one for moving right). The temporal dimension captured by the recurrent network proves invaluable in deciding what action is best. For example, not getting stuck in a position that makes avoiding incoming fireballs impossible seems to require a level of forward prediction by the agent. To gain a deeper understanding of this issue we look more closely into the learned temporal representation next. In order to analyze the learned temporal dynamics of the forward model, we take a closer look at the average activation x̄_t of all 256 hidden nodes at time step t and how much it differs from the overall average across all time steps, X̄ = (1/N) Σ_{t=1}^{N} x̄_t. The variance of x̄_t is thus calculated as σ_t = (X̄ − x̄_t)², and normalized before plotting. The hypothesis is that activation levels far from the mean might indicate a higher importance and should have a greater impact on the agent's controller component. In other words, they likely indicate critical situations in which the agent needs to pay particular attention to the predictions of the MDN-RNN. Fig. 6 depicts frames from the learned policies in two different situations, which shows that the magnitude of the LSTM activations is closely tied to specific situations. The forward model does not seem to react to fireballs by themselves but instead depends on the agent being in the line of impact of an approaching fireball, which is critical information for the agent to stay alive. Evolutionary innovations: In addition to analyzing the learned representations of the final networks, it is interesting to study the different stepping stones evolution discovered to solve the VizDoom task. We show one particular evolutionary run in Fig. 7, with other runs following similar progressions. In the first 30 generations the agent starts to learn to pay attention to fireballs but only tries avoiding them by either standing still or moving to the right. A jump in performance happens around generation 34 when the agent starts to discover moving to either the left or right; however, the learned representation between moving left or right is not well defined yet. This changes around generation 56, leading to another jump in fitness, and some generations of quick fine-tuning later the agent is able to differentiate well between situations requiring different actions, managing to survive for the whole length of the episode.
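The per-timestep deviation measure σ_t introduced above is straightforward to reproduce from recorded hidden states; a minimal sketch, assuming the MDN-RNN hidden states of one rollout are stacked into a T×256 array and using min-max normalization for plotting:

```python
import numpy as np

def hidden_state_saliency(h):
    """Per-timestep deviation of the mean LSTM activation from the rollout mean.

    h: array of shape (T, 256) with the MDN-RNN hidden state at each time step.
    Returns sigma_t scaled to [0, 1]; large values flag unusual activations.
    """
    x_bar_t = h.mean(axis=1)                     # average activation per time step
    x_bar = x_bar_t.mean()                       # overall average across all time steps
    sigma_t = (x_bar - x_bar_t) ** 2             # squared deviation per step
    sigma_t = (sigma_t - sigma_t.min()) / (sigma_t.max() - sigma_t.min() + 1e-8)
    return sigma_t
```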
Motivated by the approach of to analyse the gradient descent-based training of neural networks, we investigate the weight distances of the world model components of the best-performing networks found during training to the final solution representation (Fig. 3b). The VAE is the component with the steepest decrease in distance with a noticeable jump around generation 60 due to another lineage taking over. The MDN-RNN is optimized slowest, which is likely due to the fact that the correct forward model dynamics are more complicated to discover than the visual component. These suggest that DIP is able to orchestrate the training of these heterogeneous world model architectures in an automated way. Generation 34 Generation 56 Generation 145 Figure 7: Development of the evolved representation over generations. Shown are t-SNE mappings of the 288-dimensional vectors (32-dimensional latent vectors + 256-dimensional hidden state vector) together with saliency maps of specific game situations. Early on in evolution the agent starts paying attention to the fireballs (generation 24) but only moves to the right (blue) or stands still (black). Starting around generation 34 the agent starts to move to the left and right, with the saliency maps becoming more pronounced. From generation 56 on the compressed learned representation (latent vector+hidden state vector) allows the agent to infer the correct action almost all the time. The champion discovered in generation 145 discovered a visual encoder and LSTM mapping that shows a clear division for left and right strafing actions. A variety of different RL algorithms have recently been shown to work well on a diverse set of problems when combined with the representative power of deep neural networks (; ; . While most approaches are based on variations of or policy gradient methods (;, recently evolutionary-based methods have emerged as a promising alternative for some domains). showed that a type of evolution strategy (ES) can reach competitive performance in the Atari benchmark and at controlling robots in MuJoCo. Additionally, Such et al. demonstrated that a simple genetic algorithm is in fact able to reach similar performance to deep RL methods such as DQN or A3C. Earlier approaches that evolved neural networks for RL tasks worked well in complex RL tasks with lower-dimensional input spaces (; ;). However, when trained end-to-end these networks are often still orders of magnitude simpler than networks employed for supervised learning problems or depend on additional losses that are responsible for training certain parts of the network . For complex agent models, different network components can be trained separately (Wahlström et al., 2015;). For example, in the world model approach , the authors first train a variational autoencoder (VAE) on 10,000 rollouts from a random policy to compress the high-dimensional sensory data and then train a recurrent network to predict the next latent code. Only after this process is a smaller controller network trained to perform the actual task, taking information from both the VAE and recurrent network as input to determine the action the agent should perform. Evolutionary approaches solving 3D tasks directly from pixels has so far proven difficult although a few notable approaches exist. Koutník et al. evolved an indirectly encoded and recurrent controller for car driving in TORCS, which learned to drive based on a raw 64×64 pixel image. 
The approach was based on an indirect encoding of the network's weights analogous to the JPEG compression in images. To scale to 3D FPS tasks, first trained an autoencoder in an unsupervised way and then evolved the controller giving the compressed representation as input. In another approach, trained an object recognizer in a supervised way and then in a separate step evolved a controller module. More recently, introduced an approach called safe mutations, in which the magnitude of mutations to weight connections is scaled based on the sensitivity of the network's output to that weight. The safe mutations approach allowed the evolution of large-scale deep networks for a simple 3D maze task and is a complementary approach that could be combined with DIP in the future. The approach introduced in this paper can be viewed as a form of diversity maintenance, in which selection pressure on certain mutated neural networks is reduced. Many other methods for encouraging diversity were invented by the evolutionary computation community, such as novelty search , quality diversity , or speciation . Using the concept of age to maintain diversity has a long history in evolutionary algorithms. first introduced the idea of using age in a genetic algorithm in which individuals are removed from the population if they reach a specific age limit. In the Age-Layered Population Structure approach introduced by , the population is segregated into different age layers and newly generated individuals are introduced into the youngest age layer to increase diversity. combine the idea of age with a multi-objective approach, in which individuals are rewarded for performance and low age. Inspired by the morphological innovation approach and in contrast to previous approaches , DIP does not introduce new random individuals into the generation but instead resets the age of individuals whose sensory or memory system have been mutated. That is, it is not a measure of how long the individual has been in the population, as in the traditional usage of age in multi-objective optimization. Approaches to learning dynamical models have mainly focused on gradient descent-based methods, with early work on RNNs in the 1990s . More recent work includes PILCO , which is a probabilistic model-based policy search method and Black-DROPS , which employs CMA-ES for data-efficient optimization of complex control problems. Additionally, interest has increased in learning dynamical models directly from high-dimensional pixel images for robotic tasks and also video games . Work on evolving forward models has mainly focused on neural networks that contain orders of magnitude fewer connections and lower-dimensional feature vectors than the models in this paper. The paper demonstrated that a world model representation for a 3D task can emerge under the right circumstances without being explicitly rewarded for it. To encourage this emergence, we introduced deep innovation protection, an approach that can dynamically reduce the selection pressure for different components in a heterogeneous neural architecture. The main insight is that when components upstream in the neural network change, such as the visual or memory system in a world model, components downstream need time to adapt to changes in those learned representations. The neural model learned to represent situations that require similar actions with similar latent and hidden codes (Fig. 5 and 7). 
Additionally, without a specific forward-prediction loss, the agent learned to predict "useful" events that are necessary for its survival (e.g. predicting when the agent is in the line-of-fire of a fireball). In the future it will be interesting to compare the differences and similarities of emergent representations and learning dynamics resulting from evolutionary and gradient descent-based optimization approaches. Interestingly, without the need for a variety of specialized learning methods employed in the original world model paper, a simple genetic algorithm augmented with DIP can not only solve the simpler 2D car racing domain, but also more complex 3D domains such as VizDoom. That the average score across 100 random rollouts is lower when compared to the one reported in the original world model paper (824 compared to 1092) is maybe not surprising; if random rollouts are available, training each component separately can result in a higher performance. However, in more complicated domains, in which random rollouts might not be able to provide all relevant experiences (e.g. a random policy might never reach a certain level), the proposed DIP approach could become increasingly relevant. An exciting future direction is to combine the end-to-end training regimen of DIP with the ability of training inside the world model itself. However, because the evolved representation is not directly optimized to predict the next time step and only learns to predict future events that are useful for the agent's survival, it is an interesting open question how such a different version of a hallucinated environment could be used for training. A natural extension to this work is to evolve the neural architectures in addition to the weights of the network. Searching for neural architectures in RL has previously only been applied to smaller networks but could potentially now be scaled to more complex tasks. While our innovation protection approach is based on evolution, ideas presented here could also be incorporated in gradient descent-based approaches that optimize neural systems with multiple interacting components end-to-end. The size of each population was 200 and evolutionary runs had a termination criterion of 200 generations. The genetic algorithm σ was determined empirically and set to 0.03 for the experiments in this paper. The code for the DIP approach can be found at: [Removed for anonymous review]. An overview of the agent model is shown in Fig. 1, which employs the same architecture as the original world model approach. The sensory model is implemented as a variational autoencoder that compresses the high-dimensional input to a latent vector z. The VAE takes as input an RGB image of size 64 × 64 × 3, which is passed through four convolutional layers, all with stride 2. Details on the encoder are depicted in the visual component shown in Fig. 1, where layer details are shown as: activation type (e.g. ReLU), number of output channels × filter size. The decoder, which is in effect only used to analyze the evolved visual representation, takes as input a tensor of size 1 × 1 × 104 and processes it through four deconvolutional layers, each with stride 2 and sizes of 128 × 5, 64 × 5, 32 × 6, and 32 × 6. The network's weights are set using the default initialization. The memory model combines a recurrent LSTM network with a mixture density Gaussian model as network outputs, known as an MDN-RNN.
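For concreteness, a small PyTorch sketch of a visual component matching the description above (64×64×3 input, four stride-2 convolutions, 32-dimensional latent code). The encoder channel widths and the reparameterized sampling follow the common world-model VAE layout and are assumptions where the text does not list them explicitly.

```python
import torch
import torch.nn as nn

class WorldModelVAE(nn.Module):
    """Sketch of the sensory model: 64x64x3 image -> 32-dim latent code z."""

    def __init__(self, z_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(2 * 2 * 256, z_dim)       # 64x64 input shrinks to 2x2x256
        self.fc_logvar = nn.Linear(2 * 2 * 256, z_dim)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return z, mu, logvar
```

Note that in DIP this module is never trained with a reconstruction loss; its weights are only changed by the genetic algorithm's mutations.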
The network has 256 hidden nodes and models P(z_{t+1} | a_t, z_t, h_t), where a_t is the action taken by the agent at time t and h_t is the hidden state of the recurrent network. Similar models have previously been used for generating sequences of sketches and handwriting. We performed an analysis of the cumulative reward per age and the number of individuals with a certain age averaged across all ten runs and all generations (Fig. 8). While the average reward increases with age, there are fewer and fewer individuals at higher age levels. This suggests that the two objectives (minimising age and increasing cumulative reward) are in competition with each other, motivating the choice for a multi-objective optimization approach. These results indicate that the multi-objective optimization is working as intended; staying alive for longer becomes increasingly difficult and a high age needs to be compensated for by a high task reward. Similarly to previous perturbation-based approaches, we calculate saliency maps by applying a Gaussian blur of 5 × 5 pixels to the coordinates (i, j) of an image I from the game. The Gaussian blur can be interpreted as adding uncertainty to a particular location of the screen. For example, if a fireball is at location (i, j) then adding noise to that location makes the agent less certain about the fireball's location. The saliency map intensity S(i, j) is calculated as the difference between the policy output π given the original image I and the modified image I′ with added Gaussian blur at location (i, j): S(i, j) = |π(I) − π(I′)|.
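The saliency computation just described can be written compactly. The sketch below assumes `policy(image)` returns the agent's action output as a numpy array and approximates the localized 5×5 blur with a Gaussian mask blended over a globally blurred copy; these are implementation assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(policy, image, stride=5, sigma=5.0):
    """Perturbation-based saliency: S(i, j) = |pi(I) - pi(I')|.

    image: HxWxC float array; policy: assumed callable returning an action vector.
    """
    base = np.asarray(policy(image))
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))    # blur spatial dims only
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    smap = np.zeros((h, w))
    for i in range(0, h, stride):
        for j in range(0, w, stride):
            mask = np.exp(-((ys - i) ** 2 + (xs - j) ** 2) / (2.0 * sigma ** 2))
            perturbed = image * (1 - mask[..., None]) + blurred * mask[..., None]
            smap[i, j] = np.abs(np.asarray(policy(perturbed)) - base).sum()
    return smap
```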
Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks.
1,754
scitldr
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius. Modern neural network classifiers are able to achieve very high accuracy on image classification tasks but are sensitive to small, adversarially chosen perturbations to the inputs . Given an image x that is correctly classified by a neural network, a malicious attacker may find a small adversarial perturbation δ such that the perturbed image x + δ, though visually indistinguishable from the original image, is assigned to a wrong class with high confidence by the network. Such vulnerability creates security concerns in many real-world applications. Researchers have proposed a variety of defense methods to improve the robustness of neural networks. Most of the existing defenses are based on adversarial training (; ; ; ;). During training, these methods first learn on-the-fly adversarial examples of the inputs with multiple attack iterations and then update model parameters using these perturbed samples together with the original labels. However, such approaches depend on a particular (class of) attack method. It cannot be formally guaranteed whether the ing model is also robust against other attacks. Moreover, attack iterations are usually quite expensive. As a , adversarial training runs very slowly. Another line of algorithms trains robust models by maximizing the certified radius provided by robust certification methods (; ; c). Using linear or convex relaxations of fully connected ReLU networks, a robust certification method computes a "safe radius" r for a classifier at a given input such that at any point within the neighboring radius-r ball of the input, the classifier is guaranteed to have unchanged predictions. However, the certification methods are usually computationally expensive and can only handle shallow neural networks with ReLU activations, so these training algorithms have troubles in scaling to modern networks. In this work, we propose an attack-free and scalable method to train robust deep neural networks. We mainly leverage the recent randomized smoothing technique . A randomized smoothed classifier g for an arbitrary classifier f is defined as g(x) = E η f (x + η), in which η ∼ N (0, σ 2 I). derived how to analytically compute the certified radius of the randomly smoothed classifier g, they did not show how to maximize that radius to make the classifier g robust. proposed SmoothAdv to improve the robustness of g, but it still relies on the expensive attack iterations. Instead of adversarial training, we propose to learn robust models by directly taking the certified radius into the objective. 
We outline a few challenging desiderata any practical instantiation of this idea would however have to satisfy, and provide approaches to address each of these in turn. A discussion of these desiderata, as well as a detailed implementation of our approach is provided in Section 4. And as we show both theoretically and empirically, our method is numerically stable and accounts for both classification accuracy and robustness. Our contributions are summarized as follows: • We propose an attack-free and scalable robust training algorithm by MAximizing the CErtified Radius (MACER). MACER has the following advantages compared to previous works: -Different from adversarial training, we train robust models by directly maximizing the certified radius without specifying any attack strategies, and the learned model can achieve provable robustness against any possible attack in the certified region. Additionally, by avoiding time-consuming attack iterations, our proposed algorithm runs much faster than adversarial training. -Different from other methods that maximize the certified radius but are not scalable to deep neural networks, our method can be applied to architectures of any size. This makes our algorithm more practical in real scenarios. • We empirically evaluate our proposed method through extensive experiments on Cifar-10, ImageNet, MNIST, and SVHN. On all tasks, MACER achieves better performance than state-of-the-art algorithms. MACER is also exceptionally fast. For example, on ImageNet, MACER uses 39% less training time than adversarial training but still performs better. Neural networks trained by standard SGD are not robust -a small and human imperceptible perturbation can easily change the prediction of a network. In the white-box setting, methods have been proposed to construct adversarial examples with small ∞ or 2 perturbations (; ; ;). Furthermore, even in the black-box setting where the adversary does not have access to the model structure and parameters, adversarial examples can be found by either transfer attack or optimization-based approaches (; ;). It is thus important to study how to improve the robustness of neural networks against adversarial examples. Adversarial training So far, adversarial training has been the most successful robust training method according to many recent studies. Adversarial training was first proposed in and , where they showed that adding adversarial examples to the training set can improve the robustness against such attacks. More recently, showed that adversarial training can be formulated as a min-max optimization problem and demonstrated that adversarial training with PGD attack can lead to very robust models empirically. Zhang et al. (2019b) further proposed to decompose robust error as the sum of natural error and boundary error to achieve better performance. Although models obtained by adversarial training empirically achieve good performance, they do not have certified error guarantees. Despite the popularity of PGD-based adversarial training, one major issue is that its speed is too slow. Some recent papers propose methods to accelerate adversarial training. For example, Freem replays an adversarial example several times in one iteration, YOPO-m-n (a) restricts back propagation in PGD within the first layer, and estimates the adversary with local linearization. 
Robustness certification and provable defense Many defense algorithms proposed in the past few years were claimed to be effective, but showed that most of them are based on "gradient masking" and can be bypassed by more carefully designed attacks. It is thus important to study how to measure the provable robustness of a network. A robustness certification algorithm takes a classifier f and an input point x as inputs, and outputs a "safe radius" r such that for any δ subject to δ ≤ r, f (x) = f (x + δ). Several algorithms have been proposed recently, including the convex polytope technique, abstract interpretation methods (; and the recursive propagation algrithms (; . These methods can provide attack-agnostic robust error lower bounds. Moreover, to achieve networks with nontrivial certified robust error, one can train a network by minimizing the certified robust error computed by the above-mentioned methods, and several algorithms have been proposed in the past year ; c; . Unfortunately, they can only be applied to shallow networks with limited activation and run very slowly. More recently, researchers found a new class of certification methods called randomized smoothing. The idea of randomization has been used for defense in several previous works but without any certification. Later on, first showed that if a Gaussian random noise is added to the input or any intermediate layer. A certified guarantee on small 2 perturbation can be computed via differential privacy. and then provided improved ways to compute the 2 certified robust error for Gaussian smoothed models. In this paper, we propose a new algorithm to train on these 2 certified error bounds to significantly reduce the certified error and achieve better provable adversarial robustness. It is thus quite natural to improve model robustness via maximizing the robust radius. Unfortunately, computing the robust radius of a classifier induced by a deep neural network is very difficult. showed that computing the l 1 robust radius of a deep neural network is NP-hard. Although there is no for the l 2 radius yet, it is very likely that computing the l 2 robust radius is also NP-hard. Certified radius Many previous works proposed certification methods that seek to derive a tight lower bound of R(f θ ; x, y) for neural networks (see Section 2 for related work). We call this lower bound certified radius and denote it by CR(f θ ; x, y). The certified radius satisfies 0 ≤ CR(f θ ; x, y) ≤ R(f θ ; x, y) for any f θ, x, y. The certified radius leads to a guaranteed upper bound of the 0/1 robust classification error, which is called 0/1 certified robust error. The 0/1 certified robust error of classifier f θ on sample (x, y) is defined as l e. a sample is counted as correct only if the certified radius reaches. The expectation of certified robust error over (x, y) ∼ p data serves as a performance metric of the provable robustness: Recall that CR(f θ ; x, y) is a lower bound of the true robust radius, which immediately implies that L 0/1 Therefore, a small 0/1 certified robust error leads to a small 0/1 robust classification error. Randomized smoothing In this work, we use the recent randomized smoothing technique , which is scalable to any architectures, to obtain the certified radius of smoothed deep neural networks. The key part of randomized smoothing is to use the smoothed version of f θ, which is denoted by g θ, to make predictions. The formulation of g θ is defined as follows. Definition 1. 
For an arbitrary classifier f_θ ∈ F and σ > 0, the smoothed classifier g_θ of f_θ is defined as g_θ(x) = arg max_c P_{η∼N(0,σ²I)}(f_θ(x + η) = c). In short, the smoothed classifier g_θ(x) returns the label most likely to be returned by f_θ when its input is sampled from a Gaussian distribution N(x, σ²I) centered at x. Prior work on randomized smoothing proves the following theorem, which provides an analytic form of the certified radius: Theorem 1. Let f_θ ∈ F, and η ∼ N(0, σ²I). Let the smoothed classifier g_θ be defined as above. Let the ground truth of an input x be y. If g_θ classifies x correctly, i.e. E_η[1{f_θ(x+η) = y}] > max_{y'≠y} E_η[1{f_θ(x+η) = y'}], then g_θ is provably robust at x, with the certified radius given by CR(g_θ; x, y) = (σ/2)[Φ⁻¹(E_η 1{f_θ(x+η) = y}) − Φ⁻¹(max_{y'≠y} E_η 1{f_θ(x+η) = y'})], where Φ is the c.d.f. of the standard Gaussian distribution. As we can see from Theorem 1, the value of the certified radius can be estimated by repeatedly sampling Gaussian noise. More importantly, it can be computed for any deep neural network. This motivates us to design a training method to maximize the certified radius and learn robust models. To minimize the 0/1 robust classification error or the 0/1 certified robust error, many previous works proposed to first decompose the error. Note that a classifier g_θ has a positive 0/1 certified robust error on sample (x, y) if and only if exactly one of the following two cases happens: • g_θ(x) ≠ y, i.e. the classifier misclassifies x. • g_θ(x) = y, but CR(g_θ; x, y) < ε, i.e. the classifier is correct but not robust enough. Thus, the 0/1 certified robust error can be decomposed as the sum of two error terms, a 0/1 classification error and a 0/1 robustness error: 1{g_θ(x) ≠ y} + 1{g_θ(x) = y, CR(g_θ; x, y) < ε}. Minimizing the 0/1 error directly is intractable. A classic method is to minimize a surrogate loss instead. The surrogate loss for the 0/1 classification error is called the classification loss and denoted by l_C(g_θ; x, y). The surrogate loss for the 0/1 robustness error is called the robustness loss and denoted by l_R(g_θ; x, y). Our final objective function is l(g_θ; x, y) = l_C(g_θ; x, y) + l_R(g_θ; x, y). We would like our loss functions l_C(g_θ; x, y) and l_R(g_θ; x, y) to satisfy some favorable conditions. These conditions are summarized below as (C1)-(C3): • (C1) (Surrogate condition): Each surrogate loss should be an upper bound of the original error function, i.e. l_C(g_θ; x, y) and l_R(g_θ; x, y) should be upper bounds of 1{g_θ(x) ≠ y} and 1{g_θ(x) = y, CR(g_θ; x, y) < ε}, respectively. • (C2) (Differentiability): l_C(g_θ; x, y) and l_R(g_θ; x, y) should be (sub-)differentiable with respect to θ. • (C3) (Numerical stability): The computation of l_C(g_θ; x, y) and l_R(g_θ; x, y) and their (sub-)gradients with respect to θ should be numerically stable. The surrogate condition (C1) ensures that l(g_θ; x, y) itself is an upper bound of the 0/1 certified robust error. Conditions (C2) and (C3) ensure that the objective can be stably minimized with first-order methods. We next discuss choices of the surrogate losses that ensure we satisfy condition (C1). The classification surrogate loss is relatively easy to design. There are many widely used loss functions from which we can choose, and in this work we choose the cross-entropy loss between the smoothed prediction and the true label as the classification loss. For the robustness surrogate loss, we choose a hinge loss on the certified radius, with margin parameter ε̃ > 0 and weight λ ≥ 1. We use the hinge loss because not only does it satisfy the surrogate condition, but it is also numerically stable, which we will discuss in Section 4.4. The classification surrogate loss is differentiable with respect to θ, but the differentiability of the robustness surrogate loss requires differentiability of CR(g_θ; x, y).
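As a rough illustration of Theorem 1, the certified radius of the smoothed classifier can be estimated by Monte Carlo sampling. The sketch below omits the confidence-interval corrections (e.g. Clopper-Pearson bounds) that a rigorous certification procedure would add, and assumes `f(x)` returns the base classifier's predicted label. Note also that the indicator-based estimate is not differentiable, which is exactly the issue addressed next.

```python
import numpy as np
from scipy.stats import norm

def estimate_certified_radius(f, x, y, sigma, k=1000):
    """Monte Carlo estimate of the l2 certified radius of the smoothed classifier at (x, y)."""
    votes = {}
    for _ in range(k):
        label = f(x + sigma * np.random.randn(*x.shape))
        votes[label] = votes.get(label, 0) + 1
    p = {c: v / k for c, v in votes.items()}
    p_y = p.get(y, 0.0)
    p_other = max([v for c, v in p.items() if c != y], default=0.0)
    if p_y <= p_other:
        return 0.0                                   # smoothed prediction is not y
    p_y, p_other = min(p_y, 1 - 1e-6), max(p_other, 1e-6)  # keep Phi^-1 finite in this sketch
    return 0.5 * sigma * (norm.ppf(p_y) - norm.ppf(p_other))
```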
In this section we will show that the randomized smoothing certified radius in does not meet condition (C2), and accordingly, we will introduce soft randomized smoothing to solve this problem. Whether the certified radius is sub-differentiable with respect to θ boils down to the differentiablity of E η 1 {f θ (x+η)=y}. Theoretically, the expectation is indeed differentiable. However, from a practical point of view, the expectation needs to be estimated by Monte Carlo sampling Gaussian noise and k is the number of samples. This estimation, which is a sum of indicator functions, is not differentiable. Hence, condition (C2) is still not met from the algorithmic perspective. To tackle this problem, we leverage soft randomized smoothing (Soft-RS). In contrast to the original version of randomized smoothing (Hard-RS), Soft-RS is applied to a neural network z θ (x) whose last layer is softmax. The soft smoothed classifierg θ is defined as follows. Definition 2. For a neural network z θ: X → P(K) whose last layer is softmax and σ > 0, the soft smoothed classifierg θ of z θ is defined as Using Lemma 2 in , we prove the following theorem in Appendix A: Theorem 2. Let the ground truth of an input x be y. Ifg θ classifies x correctly, i.e. Theng θ is provably robust at x, with the certified radius given by where Φ is the c.d.f. of the standard Gaussian distribution. We notice that in (see its Appendix B), a similar technique was introduced to overcome the non-differentiability in creating adversarial examples to a smoothed classifier. Different from their work, our method uses Soft-RS to obtain a certified radius that is differentiable in practice. The certified radius given by soft randomized smoothing meets condition (C2) in the algorithmic design. Even if we use Monte Carlo sampling to estimate the expectation, is still sub-differentiable with respect to θ as long as z θ is sub-differentiable with respect to θ. Connection between Soft-RS and Hard-RS We highlight two main properties of Soft-RS. Firstly, it is a differentiable approximation of the original Hard-RS. To see this, note that when a.e. − − → 1 {y=arg max c u c θ (x)}, sog θ converges to g θ almost everywhere. Consequently, the Soft-RS certified radius converges to the Hard-RS certified radius almost everywhere as β goes to infinity. Secondly, Soft-RS itself provides an alternative way to get a provable robustness guarantee. In Appendix A, we will provide Soft-RS certification procedures that certifỹ g θ with the Hoeffding bound or the empirical Bernstein bound. In this section, we will address the numerical stability condition (C3). While Soft-RS does provide us with a differentiable certified radius which we could maximize with first-order optimization methods, directly optimizing suffers from exploding gradients. The problem stems from the inverse cumulative density function Φ −1 (x), whose derivative is huge when x is close to 0 or 1. Fortunately, by minimizing the robustness loss instead, we can maximize the robust radius free from exploding gradients. The hinge loss restricts that samples with non-zero robustness loss must satisfy 0 < CR(g θ ; x, y) < +˜, which is equivalent to 0 < ξ θ (x, y) < γ where. Under this restriction, the derivative of Φ −1 is always bounded as shown in the following proposition. The proof can be found in Appendix B., γ} with respect to p 1 and p 2 is bounded. We are now ready to present the complete MACER algorithm. Expectations over Gaussian samples are approximated with Monte Carlo sampling. Let ). 
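To illustrate how the soft smoothed classifier and the hinge-on-radius term fit together in one differentiable loss, here is a simplified PyTorch sketch. The exact scaling constants, the masking of "correct but not yet robust" samples, and the way the inverse temperature β is applied only to the robustness term are our own reading of the method, so treat this as a sketch under those assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal

def macer_style_loss(model, x, y, sigma=0.25, k=16, beta=16.0, lam=12.0, gamma=8.0):
    """Classification loss on the soft smoothed classifier plus a hinge on the soft certified radius.

    model(x): pre-softmax logits. k noise samples per input approximate the expectations.
    """
    b = x.size(0)
    noisy = x.unsqueeze(1) + sigma * torch.randn(b, k, *x.shape[1:], device=x.device)
    logits = model(noisy.flatten(0, 1)).view(b, k, -1)

    # Classification term: cross-entropy on the averaged softmax outputs.
    p_hat = F.softmax(logits, dim=2).mean(dim=1)
    cls_loss = F.nll_loss(torch.log(p_hat + 1e-12), y)

    # Robustness term: hinge on Phi^-1(p_y) - Phi^-1(max other class prob).
    p_sharp = F.softmax(beta * logits, dim=2).mean(dim=1)       # sharper estimate via beta
    p_y = p_sharp.gather(1, y.unsqueeze(1)).squeeze(1)
    p_other = p_sharp.scatter(1, y.unsqueeze(1), 0.0).max(dim=1).values
    icdf = Normal(0.0, 1.0).icdf
    xi = icdf(p_y.clamp(1e-4, 1 - 1e-4)) - icdf(p_other.clamp(1e-4, 1 - 1e-4))
    mask = (xi > 0) & (xi < gamma)                               # correct but not robust enough
    rob_loss = (sigma / 2) * F.relu(gamma - xi)[mask].sum() / b
    return cls_loss + lam * rob_loss
```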
During training we minimize E (x,y)∼p data l(g θ ; x, y). Detailed implementation is described in Algorithm 1. To simplify the implementation, we choose γ to be a hyperparameter instead of˜. The inverse temperature of softmax β is also a hyperparameter. For each Compute the empirical expectations: Update θ with one step of any first-order optimization method to minimize 10: end for Compare to adversarial training Adversarial training defines the problem as a mini-max game and solves it by optimizing the inner loop (attack generation) and the outer loop (model update) iteratively. In our method, we only have a single loop (model update). As a , our proposed algorithm can run much faster than adversarial training because it does not require additional back propagations to generate adversarial examples. Compare to previous work The overall objective function of our method, a linear combination of a classification loss and a robustness loss, is similar to those of adversarial logit pairing (ALP) and TRADES (b). In MACER, the λ in the objective function can also be viewed as a trade-off factor between accuracy and robustness. However, the robustness term of MACER does not depend on a particular adversarial example x, which makes it substantially different from ALP and TRADES. In this section, we empirically evaluate our proposed MACER algorithm on a wide range of tasks. We also study the influence of different hyperparameters in MACER on the final model performance. To fairly compare with previous works, we follow and to use LeNet for MNIST, ResNet-110 for Cifar-10 and SVHN, and ResNet-50 for ImageNet. MACER Training For Cifar-10, MNIST and SVHN, we train the models for 440 epochs using our proposed algorithm. The learning rate is initialized to be 0.01, and is decayed by 0.1 at the 200 th /400 th epoch. For all the models, we use k = 16, γ = 8.0 and β = 16.0. The value of λ trades off the accuracy and robustness and we find that different λ leads to different robust accuracy when the model is injected by different levels (σ) of noise. We find setting λ = 12.0 for σ = 0.25 and λ = 4.0 for σ = 0.50 works best. For ImageNet, we train the models for 120 epochs. The initial learning rate is set to be 0.1 and is decayed by 0.1 at the 30 th /60 th /90 th epoch. For all models on ImageNet, we use k = 2, γ = 8.0 and β = 16.0. More details can be found in Appendix C. Baselines We compare the performance of MACER with two previous works. The first work trains smoothed networks by simply minimizing cross-entropy loss. The second one uses adversarial training on smoothed networks to improve the robustness. For both baselines, we use checkpoints provided by the authors and report their original numbers whenever available. In addition, we run's method on all tasks as it is a speical case of MACER by setting k = 1 and λ = 0. Certification Following previous works, we report the approximated certified test set accuracy, which is the fraction of the test set that can be certified to be robust at radius r. However, the approximated certified test set accuracy is a function of the radius r. It is hard to compare two models unless one is uniformly better than the other for all r. Hence, we also use the average certified radius (ACR) as a metric: for each test data (x, y) and model g, we can estimate the certified radius CR(g; x, y). The average certified radius is defined as 1 |Stest| (x,y)∈Stest CR(g; x, y) where S test is the test set. 
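The average certified radius used as the summary metric above is simply the mean of the per-example certified radii over the test set, counting non-certified examples as radius 0; for instance:

```python
def average_certified_radius(certify, test_set):
    """ACR = mean certified radius over the test set.

    certify(x, y) is assumed to return the certified radius at (x, y),
    and 0 when the smoothed classifier is wrong or abstains.
    """
    radii = [certify(x, y) for x, y in test_set]
    return sum(radii) / len(radii)
```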
To estimate the certified radius for data points, we use the source code provided by. We report the on Cifar-10 and ImageNet in the main body of the paper. Results on MNIST and SVHN can be found in Appendix C.2. Performance The performance of different models on Cifar-10 are reported in Table 1, and in Figure 1 we display the radius-accuracy curves. Note that the area under a radius-accuracy curve is equal to the ACR of the model. First, the plots show that our proposed method consistently achieves significantly higher approximated certified test set accuracy than. This shows that robust training via maximizing the certified radius is more effective than simply minimizing the cross entropy classification loss. Second, the performance of our model is different from that of Salman et al. The gain of our model is relatively smaller when σ = 1.0. This is because σ = 1.0 is a very large noise level and both models perform poorly. The ImageNet are displayed in Table 2 and Figure 2, and the observation is similar. All experimental show that our proposed algorithm is more effective than previous ones. Training speed Since MACER does not require adversarial attack during training, it runs much faster to learn a robust model. Empirically, we compare MACER with on the average training time per epoch and the total training hours, and list the statistics in Table 3. For a fair comparison, we use the codes 34 provided by the original authors and run all algorithms on the same machine. For Cifar-10 we use one NVIDIA P100 GPU and for ImageNet we use four NVIDIA P100 GPUs. According to our experiments, on ImageNet, MACER achieves ACR=0.544 in 117.90 hours. On the contrary, only achieves ACR=0.528 but uses 193.10 hours, which clearly shows that our method is much more efficient. One might question whether the higher performance of MACER comes from the fact that we train for more epochs than previous methods. In Section C.3 we also run MACER for 150 epochs and compare it with the models in Table 3. The show that when run for only 150 epochs, MACER still achieves a performance comparable with SmoothAdv, and is 4 times faster at the same time. In this section, we carefully examine the effect of different hyperparameters in MACER. All experiments are run on Cifar-10 with σ = 0.25 or 0.50. The for σ = 0.25 are shown in Figure 3. All details can be found in Appendix C.4. Effect of k We sample k Gaussian samples for each input to estimate the expectation in. We can see from Figure 3 (a) that using more Gaussian samples usually leads to better performance. For example, the radius-accuracy curve of k = 16 is uniformly above that of k = 1. Effect of λ The radius-accuracy curves in Figure 3 (b) demonstrate the trade-off effect of λ. From the figure, we can see that as λ increases, the clean accuracy drops while the certified accuracy at large radii increases. Effect of γ γ is defined as the hyperparameter in the hinge loss. From Figure 3 (c) we can see that when γ is small, the approximated certified test set accuracy at large radii is small since γ "truncates" the large radii. As γ increases, the robust accuracy improves. It appears that γ also acts as a trade-off between accuracy and robustness, but the effect is not as significant as the effect of λ. Effect of β Similar to's finding (see its Appendix B), we also observe that using a larger β produces better . 
pointed out that a large β may make training unstable, we find that if we only apply a large β to the robustness loss, we can maintain training stability and achieve a larger average certified radius as well. In this work we propose MACER, an attack-free and scalable robust training method via directly maximizing the certified radius of a smoothed classifier. We discuss the desiderata such an algorithm would have to satisfy, and provide an approach to each of them. According to our extensive experiments, MACER performs better than previous provable l 2 -defenses and trains faster. Our strong empirical suggest that adversarial training is not a must for robust training, and defense based on certification is a promising direction for future research. Moreover, several recent papers (; ; suggest that using unlabeled data helps improve adversarially robust generalization. We will also extend MACER to the semisupervised setting. In this section we provide theoretical analysis and certification procedures for Soft-RS. Our proof is based on the following lemma: Proof of Theorem 2. Let y Because z c θ : Meanwhile, z B ≤ 1 − z A, so we can take z B = 1 − z A, and It reduces to find a confidence lower bound of z A . Here we provide two bounds: Hoeffding Bound The random variable z . By Hoeffding's inequality we have Hence, a 1 − α confidence lower bound z A of z A is where S 2 is the sample variance of X 1, · · ·, X k, i.e. Consequently, a 1 − α confidence lower bound z A of z A is The full certification procedure with the above two bounds is described in Algorithm 2. Algorithm 2 Soft randomized smoothing certification 1: # Certify the robustness ofg around an input x with Hoeffding bound 2: function CERTIFYHOEFFDING(z, σ 2, x, n 0, n, α) if z A > A ← SAMPLEUNDERNOISE(z, x, n, σ 2) 23: for j = 1 to num do 24: Sample noise η j ∼ N (0, σ 2 I) Compute: z j = z(x + η j) Figure 4. The show that Hard-RS consistently gives a larger lower bound of robust radius than Soft-RS. We also observe that there is a gap between Soft-RS and Hard-RS when β → ∞, which implies that the empirical Bernstein bound, though tighter than the Hoeffding bound, is still looser than the Clopper-Pearson bound. Proof of Proposition 1. We only need to consider the case when is a strictly increasing function of p, p * is unique, and Since p 1 is the largest value and p 1 + p 2 +... is continuous in any closed interval of, the derivative of Φ −1 (p 1) − Φ −1 (p 2) with respect to p 1 is bounded. Similarly, p 2 is the largest among p 2,..., and the derivative of Φ −1 (p 1) − Φ −1 (p 2) with respect to p 2 is bounded. In this section we list all compared models in the main body of this paper. Cifar-10 models are listed in Table 4, and ImageNet models are listed in Table 5. 1.00 Salman-1.00 2-sample 10-step SmoothAdv P GD with = 2.00 MACER-1.00 MACER with k = 16, dynamic λ 5, β = 16.0 and γ = 8.0 The are reported in Table 6. For all σ, we use k = 16, λ = 16.0, γ = 8.0 and β = 16.0. Table 7. We use k = 16, λ = 12.0, γ = 8.0 and β = 16.0. Table 8 we report the performance and training time of MACER on Cifar-10 when it is only run for 150 epochs, and compare with SmoothAdv and MACER (440 epochs). The learning rate is decayed by 0.1 at epochs 60 and 120. All other hyperparameters are kept the same as in Table 4. Table 9 for detailed experimental settings. Results are reported in Tables 10-13.
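As a small supplement to the Soft-RS certification procedure sketched in the appendix above, the Hoeffding variant of the confidence lower bound is easy to state: for n independent softmax scores in [0, 1], a one-sided (1 − α) lower bound on their expectation is the empirical mean minus sqrt(log(1/α)/(2n)). The empirical Bernstein bound used above is tighter but variance-dependent; the sketch below shows only the Hoeffding version, with names of our own choosing.

```python
import math

def hoeffding_lower_bound(scores, alpha=0.001):
    """One-sided (1 - alpha) lower confidence bound on E[z^y(x + eta)].

    scores: true-class softmax outputs z^y(x + eta_j) in [0, 1] from n noise samples,
    used as the lower bound z_A in the Soft-RS certification described above.
    """
    n = len(scores)
    mean = sum(scores) / n
    return max(0.0, mean - math.sqrt(math.log(1.0 / alpha) / (2.0 * n)))
```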
We propose MACER: a provable defense algorithm that trains robust models by maximizing the certified radius. It does not use adversarial training but performs better than all existing provable l2-defenses.
1,755
scitldr
Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic. However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter. In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC). GAC firstly learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning. Our main theoretical contributions are two folds. First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic. Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored. Through experiments, we show that our method is a promising reinforcement learning method for continuous controls. The goal of reinforcement learning (RL) is to learn an optimal policy that lets an agent achieve the maximum cumulative rewards known as the return BID31. Reinforcement learning has been shown to be effective in solving challenging artificial intelligence tasks such as playing games BID20 and controlling robots BID6.Reinforcement learning methods can be classified into three categories: value-based, policy-based, and actor-critic methods. Value-based methods learn an optimal policy by firstly learning a value function that estimates the expected return. Then, they infer an optimal policy by choosing an action that maximizes the learned value function. Choosing an action in this way requires solving a maximization problem which is not trivial for continuous controls. While extensions to continuous controls were considered recently, they are restrictive since specific structures of the value function are assumed BID10 BID3.On the other hand, policy-based methods, also called policy search methods BID6, learn a parameterized policy maximizing a sample approximation of the expected return without learning the value function. For instance, policy gradient methods such as REIN-FORCE BID34 use gradient ascent to update the policy parameter so that the probability of observing high sample returns increases. Compared with value-based methods, policy search methods are simpler and naturally applicable to continuous problems. Moreover, the sample return is an unbiased estimator of the expected return and methods such as policy gradients are guaranteed to converge to a locally optimal policy under standard regularity conditions BID32. However, sample returns usually have high variance and this makes such policy search methods converge too slowly. Actor-critic methods combine the advantages of value-based and policy search methods. In these methods, the parameterized policy is called an actor and the learned value-function is called a critic. The goal of these methods is to learn an actor that maximizes the critic. Since the critic is a low variance estimator of the expected return, these methods often converge much faster than policy search methods. Prominent examples of these methods are actor-critic BID32 BID15, natural actor-critic BID25, trust-region policy optimization BID27, and asynchronous advantage actor-critic BID21. 
While their approaches to learn the actor are different, they share a common property that they only use the value of the critic, i.e., the zero-th order information, and ignore higher-order ones such as gradients and Hessians w.r.t. actions of the critic 1. To the best of our knowledge, the only actor-critic methods that use gradients of the critic to update the actor are deterministic policy gradients (DPG) BID29 and stochastic value gradients. However, these two methods do not utilize the second-order information of the critic. In this paper, we argue that the second-order information of the critic is useful and should not be ignored. A motivating example can be seen by comparing gradient ascent to the Newton method: the Newton method which also uses the Hessian converges to a local optimum in a fewer iterations when compared to gradient ascent which only uses the gradient BID24. This suggests that the Hessian of the critic can accelerate actor learning which leads to higher data efficiency. However, the computational complexity of second-order methods is at least quadratic in terms of the number of optimization variables. For this reason, applying second-order methods to optimize the parameterized actor directly is prohibitively expensive and impractical for deep reinforcement learning which represents the actor by deep neural networks. Our contribution in this paper is a novel actor-critic method for continuous controls which we call guide actor-critic (GAC). Unlike existing methods, the actor update of GAC utilizes the secondorder information of the critic in a computationally efficient manner. This is achieved by separating actor learning into two steps. In the first step, we learn a non-parameterized Gaussian actor that locally maximizes the critic under a Kullback-Leibler (KL) divergence constraint. Then, the Gaussian actor is used as a guide for learning a parameterized actor by supervised learning. Our analysis shows that learning the mean of the Gaussian actor is equivalent to performing a second-order update in the action space where the curvature matrix is given by Hessians of the critic and the step-size is controlled by the KL constraint. Furthermore, we establish a connection between GAC and DPG where we show that DPG is a special case of GAC when the Hessians and KL constraint are ignored. In this section, we firstly give a of reinforcement learning. Then, we discuss existing second-order methods for policy learning and their issue in deep reinforcement learning. We consider discrete-time Markov decision processes (MDPs) with continuous state space S Ď R ds and continuous action space A Ď R da. We denote the state and action at time step t P N by s t and a t, respectively. The initial state s 1 is determined by the initial state density s 1 " ppsq. At time step t, the agent in state s t takes an action a t according to a policy a t " πpa|s t q and obtains a reward r t " rps t, a t q. Then, the next state s t`1 is determined by the transition function s t`1 " pps 1 |s t, a t q. A trajectory τ " ps 1, a 1, r 1, s 2,... q gives us the cumulative rewards or return defined as ř 8 t"1 γ t´1 rps t, a t q, where the discount factor 0 ă γ ă 1 assigns different weights to rewards given at different time steps. 
The expected return of π after executing an action a in a state s can be expressed through the action-value function which is defined as DISPLAYFORM0 where E p r¨s denotes the expectation over the density p and the subscript t ě 1 indicates that the expectation is taken over the densities at time steps t ě 1. We can define the expected return as DISPLAYFORM1 where G P R d θˆdθ is a curvature matrix. The behavior of second-order methods depend on the definition of a curvature matrix. The most well-known second-order method is the Newton method where its curvature matrix is the Hessian of the objective function w.r.t. the optimization variables: DISPLAYFORM2 The natural gradient method is another well-known second-order method which uses the Fisher information matrix (FIM) as the curvature matrix : DISPLAYFORM3 Unlike the Hessian matrix, FIM provides information about changes of the policy measured by an approximated KL divergence: E ppsq rKLpπ θ pa|sq||π θ 1 pa|sqqs « pθ´θ 1 q J G FIM pθ´θ 1 q BID13. We can see that G Hessian and G FIM are very similar but the former also contains the critic and the Hessian of the actor while the latter does not. This suggests that the Hessian provides more information than that in FIM. However, FIM is always positive semi-definite while the Hessian may be indefinite. Please see BID7 for detailed comparisons between the two curvature matrices in policy search 4. Nonetheless, actor-critic methods based on natural gradient were shown to be very efficient BID25 BID27 ).We are not aware of existing work that considers second-order updates for DPG or SVG. However, their second-order updates can be trivially derived. For example, a Newton update for DPG is DISPLAYFORM4 where the pi, jq-th entry of the Hessian matrix DISPLAYFORM5 Note that Bπ θ psq{Bθ and B 2 π θ psq{BθBθ 1 are vectors since π θ psq is a vector-valued function. Interestingly, the Hessian of DPG contains the Hessians of the actor and the critic. In contrast, the Hessian of the actor-critic method contains the Hessian of the actor and the value of the critic. Second-order methods are appealing in reinforcement learning because they have high data efficiency. However, inverting the curvature matrix (or solving a linear system) requires cubic computational complexity in terms of the number of optimization variables. For this reason, the secondorder updates in Eq. and Eq. are impractical in deep reinforcement learning due to a large number of weight parameters in deep neural networks. In such a scenario, an approximation of the curvature matrix is required to reduce the computational burden. For instance, BID7 proposed to use only diagonal entries of an approximated Hessian matrix. However, this approximation clearly leads to a loss of useful curvature information since the gradient is scaled but not rotated. More recently, BID35 proposed a natural actor-critic method that approximates block-diagonal entries of FIM. However, this approximation corresponds to ignoring useful correlations between weight parameters in different layers of neural networks. In this section, we propose the guide actor-critic (GAC) method that performs second-order updates without the previously discussed computational issue. Unlike existing methods that directly learn the parameterized actor from the critic, GAC separates the problem of learning the parameterized actor into problems of 1) learning a guide actor that locally maximizes the critic, and 2) learning a parameterized actor based on the guide actor. 
This separation allows us to perform a second-order update for the guide actor where the dimensionality of the curvature matrix is independent of the parameterization of the actor. We formulate an optimization problem for learning the guide actor in Section 3.1 and present its solution in Section 3.2. Then in Section 3.3 and Section 3.4, we show that the solution corresponds to performing second-order updates. Finally, Section 3.5 presents the learning step for the parameterized actor using supervised learning. The pseudo-code of our method is provided in Appendix B and the source code is available at https://github.com/voot-t/guide-actor-critic. Our first goal is to learn a guide actor that maximizes the critic. However, greedy maximization should be avoided since the critic is a noisy estimate of the expected return and a greedy actor may change too abruptly across learning iterations. Such a behavior is undesirable in real-world problems, especially in robotics BID6. Instead, we maximize the critic with additional constraints:

max_{π̃} E_{p_β(s)} E_{π̃(a|s)}[ Q̂(s, a) ],
subject to E_{p_β(s)}[ KL(π̃(a|s) || π_θ(a|s)) ] ≤ ε,
E_{p_β(s)}[ H(π̃(a|s)) ] ≥ κ, and ∫ π̃(a|s) da = 1 for all s,

where π̃(a|s) is the guide actor to be learned, π_θ(a|s) is the current parameterized actor that we want to improve upon, and p_β(s) is the state distribution induced by past trajectories. The objective function differs from the expected return objective above in two important aspects. First, we maximize for a policy function π̃ and not for the policy parameter. This is more advantageous than optimizing for a policy parameter since the policy function can be obtained in a closed form, as will be shown in the next subsection. Second, the expectation is defined over a state distribution from past trajectories and this gives us off-policy methods with higher data efficiency. The first constraint is the Kullback-Leibler (KL) divergence constraint, where KL(p(x) || q(x)) = E_{p(x)}[log p(x) − log q(x)]. The second constraint is the Shannon entropy constraint, where H(p(x)) = −E_{p(x)}[log p(x)]. The KL constraint is commonly used in reinforcement learning to prevent unstable behavior due to excessively greedy updates BID25 BID26 BID16 BID27. The entropy constraint is crucial for maintaining stochastic behavior and preventing premature convergence BID36 BID0 BID21 BID11. The final constraint ensures that the guide actor is a proper probability density. The KL bound ε > 0 and the entropy bound −∞ < κ < ∞ are hyper-parameters which control the exploration-exploitation trade-off of the method. In practice, we fix the value of ε and adaptively reduce the value of κ based on the current actor's entropy, as suggested by BID0. More details of these tuning parameters are given in Appendix C. This optimization problem can be solved by the method of Lagrange multipliers. The solution is

π̃(a|s) ∝ π_θ(a|s)^{η*/(η*+ω*)} exp( Q̂(s, a)/(η* + ω*) ),

where η* > 0 and ω* > 0 are dual variables corresponding to the KL and entropy constraints, respectively. The dual variable corresponding to the probability density constraint is contained in the normalization term and is determined by η* and ω*. These dual variables are obtained by minimizing the dual function

g(η, ω) = ηε − ωκ + (η + ω) E_{p_β(s)}[ log ∫ π_θ(a|s)^{η/(η+ω)} exp( Q̂(s, a)/(η + ω) ) da ].

All derivations and proofs are given in Appendix A. The solution above tells us that the guide actor is obtained by weighting the current actor with Q̂(s, a). If we set ε → 0 then we have π̃ ≈ π_θ and the actor is not updated. On the other hand, if we set ε → ∞ then we have π̃(a|s) ∝ exp( Q̂(s, a)/ω* ), which is a softmax policy where ω* is the temperature parameter.
Computingπpa|sq and evaluating gpη, ωq are intractable for an arbitrary π θ pa|sq. We overcome this issue by imposing two assumptions. First, we assume that the actor is the Gaussian distribution: DISPLAYFORM0 where the mean φ θ psq and covariance Σ θ psq are functions parameterized by a policy parameter θ. Second, we assume that Taylor's approximation of p Qps, aq is locally accurate up to the secondorder. More concretely, the second-order Taylor's approximation using an arbitrary action a 0 is given by DISPLAYFORM1 where g 0 psq " ∇ a p Qps, aq| a"a0 and H 0 psq " ∇ 2 a p Qps, aq| a"a0 are the gradient and Hessian of the critic w.r.t. a evaluated at a 0, respectively. By assuming that the higher order term Op}a} 3 q is sufficiently small, we can rewrite Taylor's approximation at a 0 as DISPLAYFORM2 where ψ 0 psq " g 0 psq´H 0 psqa 0 and ξ 0 psq " DISPLAYFORM3 Note that H 0 psq, ψ 0 psq, and ξ 0 psq depend on the value of a 0 and do not depend on the value of a. This dependency is explicitly denoted by the subscript. The choice of a 0 will be discussed in Section 3.3.Substituting the Gaussian distribution and Taylor's approximation into Eq. FORMULA0 yields another Gaussian distributionπpa|sq " N pa|φ`psq, Σ`psqq, where the mean and covariance are given by DISPLAYFORM4 The matrix F psq P R daˆda and vector Lpsq P R da are defined as DISPLAYFORM5 The dual variables η ‹ and ω ‹ are obtained by minimizing the following dual function: DISPLAYFORM6 where F η psq and L η psq are defined similarly to F psq and Lpsq but with η instead of η ‹.The practical advantage of using the Gaussian distribution and Taylor's approximation is that the guide actor can be obtained in a closed form and the dual function can be evaluated through matrixvector products. The expectation over p β psq can be approximated by e.g., samples drawn from a replay buffer BID20. We require inverting F η psq to evaluate the dual function. However, these matrices are computationally cheap to invert when the dimension of actions is not large. As shown in Eq., the mean and covariance of the guide actor is computed using both the gradient and Hessian of the critic. Yet, these computations do not resemble second-order updates discussed previously in Section 2.2. Below, we show that for a particular choice of a 0, the mean computation corresponds to a second-order update that rotates gradients by a curvature matrix. For now we assume that the critic is an accurate estimator of the true action-value function. In this case, the quality of the guide actor depends on the accuracy of sample approximation in p gpη, ωq and the accuracy of Taylor's approximation. To obtain an accurate Taylor's approximation of p Qps, aq using an action a 0, the action a 0 should be in the vicinity of a. However, we did not directly use any individual a to compute the guide actor, but we weight π θ pa|sq by expp p Qps, aqq (see Eq. FORMULA0). Thus, to obtain an accurate Taylor's approximation of the critic, the action a 0 needs to be similar to actions sampled from π θ pa|sq. Based on this observation, we propose two approaches to perform Taylor's approximation. Taylor's approximation around the mean. In this approach, we perform Taylor's approximation using the mean of π θ pa|sq. More specifically, we use a 0 " E π θ pa|sq ras " φ θ psq for Eq.. In this case, we can show that the mean update in Eq. corresponds to performing a second-order update in the action space to maximize p Qps, aq: DISPLAYFORM0 where DISPLAYFORM1 Qps, aq| a"φ θ psq. 
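To make the closed-form guide actor concrete, the sketch below computes the guide mean and covariance for a single state from the current Gaussian actor and a local quadratic model of the critic, using the damped curvature matrix F(s) = η*Σ_θ⁻¹(s) − H(s) and the Newton-like mean step described above. It assumes the dual variables and the critic's gradient and (negative-definite) Hessian at a₀ = φ_θ(s) are already available; the helper name is ours.

```python
import numpy as np

def guide_actor(phi, Sigma, g, H, eta, omega):
    """Closed-form guide actor N(phi_plus, Sigma_plus) for a single state.

    phi, Sigma : mean and covariance of the current Gaussian actor pi_theta(a|s).
    g, H       : gradient and Hessian of the critic w.r.t. the action at a0 = phi.
    eta, omega : dual variables of the KL and entropy constraints (assumed given).
    """
    Sigma_inv = np.linalg.inv(Sigma)
    F = eta * Sigma_inv - H                # damped curvature matrix F(s)
    F_inv = np.linalg.inv(F)
    phi_plus = phi + F_inv @ g             # Newton-like step in the action space
    Sigma_plus = (eta + omega) * F_inv     # guide covariance
    return phi_plus, Sigma_plus

# Toy usage on a 2-D action space with a negative-definite Hessian estimate.
phi, Sigma = np.zeros(2), np.eye(2)
g, H = np.array([0.4, -0.2]), -np.eye(2)
phi_plus, Sigma_plus = guide_actor(phi, Sigma, g, H, eta=0.5, omega=0.1)
```

Since the matrices involved have the dimensionality of the action space, these inversions stay cheap even when the policy network itself is large.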
This equivalence can be shown by substitution and the proof is given in Appendix A.2. This update equation reveals that the guide actor maximizes the critic by taking a step in the action space similarly to the Newton method. However, the main difference lies in the curvature matrix where the Newton method uses Hessians H φ θ psq but we use a damped Hessian F φ θ psq. The damping term η ‹ Σ´1 θ psq corresponds to the effect of the KL constraint and can be viewed as a trust-region that controls the step-size. This damping term is particularly important since Taylor's approximation is accurate only locally and we should not take a large step in each update BID24.Expectation of Taylor's approximations. Instead of using Taylor's approximation around the mean, we may use an expectation of Taylor's approximation over the distribution. More concretely, we define r Qps, aq to be an expectation of p Q 0 ps, aq over π θ pa 0 |sq: DISPLAYFORM2 Note that E π θ pa0|sq rH 0 psqs " E π θ pa0|sq r∇ 2 a p Qps, aq| a"a0 s and the expectation is computed w.r.t. the distribution π θ of a 0. We use this notation to avoid confusion even though π θ pa 0 |sq and π θ pa|sq are the same distribution. When Eq. FORMULA1 is used, the mean update does not directly correspond to any second-order optimization step. However, under an (unrealistic) independence assumption E π θ pa0|sq rH 0 psqa 0 s " E π θ pa0|sq rH 0 psqsE π θ pa0|sq ra 0 s, we can show that the mean update corresponds to the following second-order optimization step: DISPLAYFORM3 where E π θ pa0|sq rF 0 psqs " η ‹ Σ´1 θ psq´E π θ pa0|sq rH 0 psqs. Interestingly, the mean is updated by rotating an expected gradient using an expected Hessians. In practice, the expectations can be approximated using sampled actions ta 0,i u S i"1 " π θ pa|sq. We believe that this sampling can be advantageous for avoiding local optima. Note that when the expectation is approximated by a single sample a 0 " π θ pa|sq, we obtain the update in Eq. regardless of the independence assumption. In the remainder, we use F psq to denote both of F φ θ psq and E π θ pa0|sq rF 0 psqs, and use Hpsq to denote both of H φ θ psq and E π θ pa0|sq rH 0 psqs. In the experiments, we use GAC-0 to refer to GAC with Taylor's approximation around the mean, and we use GAC-1 to refer to GAC with Taylor's approximation by a single sample a 0 " π θ pa|sq. The covariance update in Eq. indicates that F psq " η ‹ Σ´1 θ psq´Hpsq needs to be positive definite. The matrix F psq is guaranteed to be positive definite if the Hessian matrix Hpsq is negative semi-definite. However, this is not guaranteed in practice unless p Qps, aq is a concave function in terms of a. To overcome this issue, we firstly consider the following identity: DISPLAYFORM0 The proof is given in Appendix A.3. The first term is always negative semi-definite while the second term is indefinite. Therefore, a negative semi-definite approximation of the Hessian can be obtained as DISPLAYFORM1 The second term in Eq. FORMULA1 is proportional to expp´p Qps, aqq and it will be small for high values of p Qps, aq. This implies that the approximation should gets more accurate as the policy approach a local maxima of p Qps, aq. We call this approximation Gauss-Newton approximation since it is similar to the Gauss-Newton approximation for the Newton method BID24. The second step of GAC is to learn a parameterized actor that well represents the guide actor. Below, we discuss two supervised learning approaches for learning a parameterized actor. 
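Before turning to the actor-fitting step, note that the Gauss-Newton approximation above only needs first derivatives of the critic. A minimal PyTorch sketch is given below; the wrapper and names are ours, and any differentiable critic mapping a state-action pair to a scalar would do.

```python
import torch

def gauss_newton_hessian(critic, state, action):
    """H(s) ≈ -grad_a Q(s,a) grad_a Q(s,a)^T, the negative semi-definite approximation."""
    action = action.clone().detach().requires_grad_(True)
    q = critic(state, action)
    (grad_a,) = torch.autograd.grad(q, action)
    return -torch.outer(grad_a, grad_a), grad_a.detach()

# Toy critic over a 3-D state and 2-D action.
net = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
critic = lambda s, a: net(torch.cat([s, a], dim=-1)).squeeze(-1)
H, g = gauss_newton_hessian(critic, torch.randn(3), torch.randn(2))
```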
Since the guide actor is a Gaussian distribution with a state-dependent mean and covariance, a natural choice for the parameterized actor is again a parameterized Gaussian distribution with a state-dependent mean and covariance: π θ pa|sq " N pa|φ θ psq, Σ θ psqq. The parameter θ can be learned by minimizing the expected KL divergence to the guide actor: DISPLAYFORM0 where DISPLAYFORM1 ı is the weighted-mean-squared-error (WMSE) which only depends on θ of the mean function. The const term does not depend on θ. Minimizing the KL divergence reveals connections between GAC and deterministic policy gradients (DPG) BID29. By computing the gradient of the WMSE, it can be shown that DISPLAYFORM2 The proof is given in Appendix A.4. The negative of the first term is precisely equivalent to DPG. Thus, updating the mean parameter by minimizing the KL loss with gradient descent can be regarded as updating the mean parameter with biased DPG where the bias terms depend on η ‹. We can verify that ∇ a p Qps, aq| a"φ`psq " 0 when η ‹ " 0 and this is the case of Ñ 8. Thus, all bias terms vanish when the KL constraint is ignored and the mean update of GAC coincides with DPG. However, unlike DPG which learns a deterministic policy, we can learn both the mean and covariance in GAC. While a state-dependent parameterized covariance function is flexible, we observe that learning performance is sensitive to the initial parameter of the covariance function. For practical purposes, we propose using a parametrized Gaussian distribution with state-independent covariance: π θ pa|sq " N pa|φ θ psq, Σq. This class of policies subsumes deterministic policies with additive independent Gaussian noise for exploration. To learn θ, we minimize the mean-squared-error (MSE): DISPLAYFORM0 For the covariance, we use the average of the guide covariances: Σ " pη ‹`ω‹ qE p β psq " F´1psq ‰. For computational efficiency, we execute a single gradient update in each learning iteration instead of optimizing this loss function until convergence. Similarly to the above analysis, the gradient of the MSE w.r.t. θ can be expanded and rewritten into DISPLAYFORM1 Again, the mean update of GAC coincides with DPG when we minimize the MSE and set η ‹ " 0 and Hpsq "´I where I is the identity matrix. We can also substitute these values back into Eq.. By doing so, we can interpret DPG as a method that performs first-order optimization in the action space: DISPLAYFORM2 and then uses the gradient in Eq. to update the policy parameter. This interpretation shows that DPG is a first-order method that only uses the first-order information of the critic for actor learning. Therefore in principle, GAC, which uses the second-order information of the critic, should learn faster than DPG. Beside actor learning, the performance of actor-critic methods also depends on the accuracy of the critic. We assume that the critic p Q ν ps, aq is represented by neural networks with a parameter ν. We adopt the approach proposed by with some adjustment to learn ν. More concretely, we use gradient descent to minimize the squared Bellman error with a slowly moving target critic: DISPLAYFORM0 where α ą 0 is the step-size. The target value y " rps, aq`γE πpa 1 |s 1 q r p Qν ps 1, a 1 qs is computed by the target critic p Qν ps 1, a 1 q whose parameterν is updated byν Ð τ ν`p1´τ qν for 0 ă τ ă 1. As suggested by, the target critic improves the learning stability and we set τ " 0.001 in experiments. 
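A sketch of this critic update and the moving-average target is given below, assuming an off-policy mini-batch drawn from a replay buffer and an actor callable that returns a torch.distributions object for a batch of states; the function and dictionary keys are ours, not the authors' code.

```python
import torch

def critic_update(critic, target_critic, optimizer, batch, actor,
                  gamma=0.99, tau=0.001, n_action_samples=10):
    """One step on the squared Bellman error with a slowly-moving target critic.

    batch : dict with tensors "s", "a", "r", "s_next" sampled from a replay buffer.
    actor : callable returning a torch.distributions object for a batch of states.
    """
    with torch.no_grad():
        next_dist = actor(batch["s_next"])
        # Monte-Carlo estimate of E_{pi(a'|s')}[Q_target(s', a')].
        next_q = torch.stack([target_critic(batch["s_next"], next_dist.sample())
                              for _ in range(n_action_samples)]).mean(dim=0)
        y = batch["r"] + gamma * next_q

    loss = torch.mean((critic(batch["s"], batch["a"]) - y) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Moving-average update of the target critic: nu_bar <- tau * nu + (1 - tau) * nu_bar.
    with torch.no_grad():
        for p, p_targ in zip(critic.parameters(), target_critic.parameters()):
            p_targ.mul_(1.0 - tau).add_(tau * p)
    return loss.item()
```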
The expectation for the squared error is approximated using mini-batch samples tps n, a n, r n, s 1 n qu N n"1 drawn from a replay buffer. The expectation over the current actor πpa 1 |s 1 q is approximated using samples ta DISPLAYFORM1 We do not use a target actor to compute y since the KL upper-bound already constrains the actor update and a target actor will further slow it down. Note that we are not restricted to this evaluation method and more efficient methods such as Retrace BID22 can also be used. Our method requires computing ∇ a p Q ν ps, aq and its outer product for the Gauss-Newton approximation. The computational complexity of the outer product operation is Opd 2 a q and is inexpensive when compared to the dimension of ν. For a linear-in-parameter model p Q ν ps, aq " ν J µps, aq, the gradient can be efficiently computed for common choices of the basis function µ such as the Gaussian function. For deep neural network models, the gradient can be computed by the automaticdifferentiation BID9 where its cost depends on the network architecture. Besides the connections to DPG, our method is also related to existing methods as follows. A similar optimization problem to Eq. was considered by the model-free trajectory optimization (MOTO) method BID1. Our method can be viewed as a non-trivial extension of MOTO with two significant novelties. First, MOTO learns a sequence of time-dependent log-linear Gaussian policies π t pa|sq " N pa|B t s`b t, Σ t q, while our method learns a log-nonlinear Gaussian policy. Second, MOTO learns a time-dependent critic given by p Q t ps, aq " 1 2 a J C t a`a J D t sà J c t`ξt psq and performs policy update with these functions. In contrast, our method learns a more complex critic and performs Taylor's approximation in each training step. Besides MOTO, the optimization problem also resembles that of trust region policy optimization (TRPO) BID27. TRPO solves the following optimization problem: DISPLAYFORM0 where p Qps, aq may be replaced by an estimate of the advantage function BID28 ). There are two major differences between the two problems. First, TRPO optimizes the policy parameter while we optimize the guide actor. Second, TRPO solves the optimization problem by conjugate gradient where the KL divergence is approximated by the Fisher information matrix, while we solve the optimization problem in a closed form with a quadratic approximation of the critic. Our method is also related to maximum-entropy RL BID36 BID4 BID11 BID23, which maximizes the expected cumulative reward with an additional entropy bonus: ř 8 t"1 E p π psq rrps t, a t q`αHpπpa t |s t qqs, where α ą 0 is a trade-off parameter. The optimal policy in maximum-entropy RL is the softmax policy given by DISPLAYFORM1 where Q ‹ soft ps, aq and V ‹ soft psq are the optimal soft action-value and state-value functions, respectively BID11 BID23. For a policy π, these are defined as Q π soft ps, aq " rps, aq`γE pps 1 |s,aq DISPLAYFORM2 DISPLAYFORM3 The softmax policy and the soft state-value function in maximum-entropy RL closely resemble the guide actor in Eq. when η ‹ " 0 and the log-integral term in Eq. when η " 0, respectively, except for the definition of action-value functions. To learn the optimal policy of maximum-entropy RL, BID11 proposed soft Q-learning which uses importance sampling to compute the soft value functions and approximates the intractable policy using a separate policy function. 
Our method largely differs from soft Q-learning since we use Taylor's approximation to convert the intractable integral into more convenient matrix-vector products. The idea of firstly learning a non-parameterized policy and then later learning a parameterized policy by supervised learning was considered previously in guided policy search (GPS) BID16. However, GPS learns the guide policy by trajectory optimization methods such as an iterative linear-quadratic Gaussian regulator BID18, which requires a model of the transition function. In contrast, we learn the guide policy via the critic without learning the transition function. We evaluate GAC on the OpenAI gym platform BID5 with the Mujoco Physics simulator BID33. The actor and critic are neural networks with two hidden layers of 400 and 300 units, as described in Appendix C. We compare GAC-0 and GAC-1 against deep DPG (DDPG), Q-learning with a normalized advantage function (Q-NAF) BID10, and TRPO BID27 BID35. FIG0 shows the learning performance on 9 continuous control tasks. Overall, both GAC-0 and GAC-1 perform comparably with existing methods and they clearly outperform the other methods in Half-Cheetah. The performance of GAC-0 and GAC-1 is comparable on these tasks, except on Humanoid where GAC-1 learns faster. We expect GAC-0 to be more stable and reliable but easier to get stuck at local optima. On the other hand, the randomness introduced by GAC-1 leads to high variance approximation but this could help escape poor local optima. We conjecture GAC-S that uses S ą 1 samples for the averaged Taylor's approximation should outperform both GAC-0 and GAC-1. While this is computationally expensive, we can use parallel computation to reduce the computation time. The expected returns of both GAC-0 and GAC-1 have high fluctuations on the Hopper and Walker2D tasks when compared to TRPO as can be seen in FIG0 and FIG0. We observe that they can learn good policies for these tasks in the middle of learning. However, the policies quickly diverge to poor ones and then they are quickly improved to be good policies again. We believe that this happens because the step-size F´1psq "`η ‹ Σ´1´Hpsq˘´1 of the guide actor in Eq. FORMULA1 can be very large near local optima for Gauss-Newton approximation. That is, the gradients near local optima have small magnitude and this makes the approximation Hpsq " ∇ a p Qps, aq∇ a p Qps, aq J small as well. If η ‹ Σ´1 is also relatively small then the matrix F´1psq can be very large. Thus, under these conditions, GAC may use too large step sizes to compute the guide actor and this in high fluctuations in performance. We expect that this scenario can be avoided by reducing the KL bound or adding a regularization constant to the Gauss-Newton approximation. TAB1 in Appendix C shows the wall-clock computation time. DDPG is computationally the most efficient method on all tasks. GAC has low computation costs on tasks with low dimensional actions and its cost increases as the dimensionality of action increases. This high computation cost is due to the dual optimization for finding the step-size parameters η and ω. We believe that the computation cost of GAC can be significantly reduced by letting η and ω be external tuning parameters. Actor-critic methods are appealing for real-world problems due to their good data efficiency and learning speed. However, existing actor-critic methods do not use second-order information of the critic. 
In this paper, we established a novel framework that distinguishes itself from existing work by utilizing Hessians of the critic for actor learning. Within this framework, we proposed a practical method that uses Gauss-Newton approximation instead of the Hessians. We showed through experiments that our method is promising and thus the framework should be further investigated. Our analysis showed that the proposed method is closely related to deterministic policy gradients (DPG). However, DPG was also shown to be a limiting case of the stochastic policy gradients when the policy variance approaches zero BID29. It is currently unknown whether our framework has a connection to the stochastic policy gradients as well, and finding such a connection is our future work. Our main goal in this paper was to provide a new actor-critic framework and we do not claim that our method achieves the state-of-the-art performance. However, its performance can still be improved in many directions. For instance, we may impose a KL constraint for a parameterized actor to improve its stability, similarly to TRPO BID27. We can also apply more efficient policy evaluation methods such as Retrace BID22 ) to achieve better critic learning. The solution of the optimization problem: DISPLAYFORM0 subject to E p β psq rKLpπpa|sq||π θ pa|sqqs ď, E p β psq rHpπpa|sqqs ě κ, DISPLAYFORM1 can be obtained by the method of Lagrange multipliers. The derivation here follows the derivation of similar optimization problems by BID26 and BID0. The Lagrangian of this optimization problem is DISPLAYFORM2 where η, ω, and ν are the dual variables. Then, by taking derivative of L w.r.t. r π we obtain DISPLAYFORM3 We set this derivation to zero in order to obtain 0 " E p β psq "ż´p Qps, aq´pη`ωq log r πpa|sq`η log π θ pa|sq¯da ´p η`ω´νq " p Qps, aq´pη`ωq log r πpa|sq`η log π θ pa|sq´pη`ω´νq. Then the solution is given by DISPLAYFORM4 To obtain the dual function gpη, ωq, we substitute the solution to the constraint terms of the Lagrangian and this gives us DISPLAYFORM5 After some calculation, we obtain Lpη, ω, νq " η ´ωκ`E p β psq rη`ω´νs DISPLAYFORM6 where in the second line we use the fact that expp´η`ω´ν η`ω q is the normalization term of r πpa|sq. Firstly, we show that GAC performs second-order optimization in the action space when Taylor's approximation is performed with a 0 " E πpa|sq ras " φ θ psq. Recall that Taylor's approximation with φ θ is given by DISPLAYFORM0 where ψ φ θ psq " ∇ a p Qps, aq| a"φ θ psq´H φ θ psqφ θ psq. By substituting ψ φ θ psq into Lpsq " η ‹ Σ´1 θ psqφ θ psq´ψ θ psq, we obtain DISPLAYFORM1 Therefore, the mean update is equivalent to φ`psq " F´1psqLpsq DISPLAYFORM2 which is a second-order optimization step with a curvature matrix F psq " η ‹ Σ´1 θ psq´H φ θ psq. Similarly, for the case where a set of samples ta 0 u " π θ pa 0 |sq " N pa 0 |φ θ psq, Σpsqq is used to compute the averaged Taylor's approximation, we obtain DISPLAYFORM3 Then, by assuming that E π θ pa0|sq rH 0 psqa 0 psqs " E π θ pa0|sq rH 0 s E π θ pa0|sq ra 0 s, we obtain DISPLAYFORM4 Therefore, we have a second-order optimization step DISPLAYFORM5 where F´1psq " η ‹ Σ´1 θ psq´E π θ pa0|sq rH 0 psqs is a curvature matrix. As described in the main paper, this interpretation is only valid when the equality E π θ pa0|sq rH 0 psqa 0 psqs " E π θ pa0|sq rH 0 s E π θ pa0|sq ra 0 s holds. While this equality does not hold in general, it holds when only one sample a 0 " π θ pa 0 |sq is used. 
Nonetheless, we can still use the expectation of Taylor's approximation to perform policy update regardless of this assumption. Let f ps, aq " expp p Qps, aqq, then the Hessian Hpsq " ∇ 2 a p Qps, aq can be expressed as Hpsq " ∇ a r∇ a log f ps, aqs DISPLAYFORM0 a f ps, aqf ps, aq´1 " ∇ a f ps, aq∇ f f ps, aq´1 p∇ a f ps, aqq J`∇2 a f ps, aqf ps, aq´1 "´∇ a f ps, aqf ps, aq´2 p∇ a f ps, aqq J`∇2 a f ps, aqf ps, aq´1 "´`∇ a f ps, aqf ps, aq´1˘`∇ a f ps, aqf ps, aq´1˘J`∇ 2 a f ps, aqf ps, aq´1 "´∇ a log f ps, aq∇ a log f ps, aq J`∇2 a f ps, aqf ps, aq´1 "´∇ a p Qps, aq∇ a p Qps, aq DISPLAYFORM1 which concludes the proof. Beside Gauss-Newton approximation, an alternative approach is to impose a special structure on p Qps, aq so that Hessians are always negative semi-definite. In literature, there exists two special structures that satisfies this requirement. Normalized advantage function (NAF) BID10: NAF represents the critic by a quadratic function with a negative curvature: DISPLAYFORM2 where a negative-definite matrix-valued function W psq, a vector-valued function bpsq and a scalarvalued function V psq are parameterized functions whose their parameters are learned by policy evaluation methods such as Q-learning BID31. With NAF, negative definite Hessians can be simply obtained as Hpsq " W psq. However, a significant disadvantage of NAF is that it assumes the action-value function is quadratic regardless of states and this is generally not true for most reward functions. Moreover, the Hessians become action-independent even though the critic is a function of actions. Input convex neural networks (ICNNs) BID3: ICNNs are neural networks with special structures which make them convex w.r.t. their inputs. Since Hessians of concave functions are always negative semi-definite, we may use ICNNs to represent a negative critic and directly use its Hessians. However, similarly to NAF, ICNNs implicitly assume that the action-value function is concave w.r.t. actions regardless of states and this is generally not true for most reward functions. We first consider the weight mean-squared-error loss function where the guide actor is N pa|φ`psq, Σ`psqq and the current actor is N pa|φ θ psq, Σ θ psqq. Taylor's approximation of DISPLAYFORM0 By assuming that H φ θ psq is strictly negative definite 5, we can take a derivative of this approximation w.r.t. a and set it to zero to obtain a " H´1 φ θ psq∇ a p Qps, aq´H´1 φ θ psqψ φ θ psq. Replacing a by φ θ psq and φ`psq yields DISPLAYFORM1 φ`psq " H´1 φ θ psq∇ a p Qps, aq| a"φ`psq´H´1 φ θ psqψ φ θ psq. Recall that the weight mean-squared-error is defined as DISPLAYFORM2 Published as a conference paper at ICLR 2018Firstly, we expand the quadratic term of NAF as follows: DISPLAYFORM3 where ψpsq "´W psqbpsq and ξpsq " 1 2 bpsq J W psqbpsq`V psq. By substituting the quadratic model obtained by NAF into the GAC framework, the guide actor is now given byπpa|sq " N pa|φ`psq, Σ`psqqq with DISPLAYFORM4 To obtain Q-learning with NAF, we set η ‹ " 0, i.e., we perform a greedy maximization where the KL upper-bound approaches infinity, and this yields φ`psq "´W psq´1p´W psqbpsqq DISPLAYFORM5 which is the policy obtained by performing Q-learning with NAF. Thus, NAF with Q-learning is a special case of GAC if Q-learning is also used in GAC to learn the critic. The pseudo-code of GAC is given in Algorithm 1. The source code is available at https:// github.com/voot-t/guide-actor-critic. 
We try to follow the network architecture proposed by the authors of each baseline method as close as possible. For GAC and DDPG, we use neural networks with two hidden layers for the actor network and the critic network. For both networks the first layer has 400 hidden units and the second layer has 300 units. For NAF, we use neural networks with two hidden layers to represent each of the functions bpsq, W psq and V psq where each layer has 200 hidden units. All hidden units use the relu activation function except for the output of the actor network where we use the tanh activation function to bound actions. We use the Adam optimizer BID14 with learning rate 0.001 and 0.0001 for the critic network and the actor network, respectively. The moving average step for target networks is set to τ " 0.001. The maximum size of the replay buffer is set to 1000000. The mini-batches size is set to N " 256. The weights of the actor and critic networks are initialized as described by BID8, except for the output layers where the initial weights are drawn uniformly from Up´0.003, 0.003q, as described by. The initial covariance Σ in GAC is set to be an identity matrix. DDPG and QNAF use the OU-process with noise parameters θ " 0.15 and σ " 0.2 for exploration.For TRPO, we use the implementation publicly available at https://github.com/openai/ baselines. We also use the provided network architecture and hyper-parameters except the batch size where we use 1000 instead of 1024 since this is more suitable in our test setup. For GAC, the KL upper-bound is fixed to " 0.0001. The entropy lower-bound κ is adjusted heuristically by κ " maxp0.99pE´E 0 q`E 0, E 0 q,where E « E p β psq rHpπ θ pa|sqqs denotes the expected entropy of the current policy and E 0 denotes the entropy of a base policy N pa|0, 0.01Iq. This heuristic ensures that the lower-bound gradually decreases but the lower-bound cannot be too small. We apply this heuristic update once every 5000 training steps. The dual function is minimize by the sequential least-squares quadratic programming (SLSQP) method with an initial values η " 0.05 and ω " 0.05. The number of samples for computing the target critic value is M " 10. Sample N mini-batch samples tps n, a n, r n, s Compute y n, update ν by, e.g., Adam, and updateν by moving average: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 13:end procedure procedure LEARN GUIDE ACTOR 15:Compute a n,0 for each s n by a n,0 " φ θ ps n q or a n,0 " N pa|φ θ ps n q, Σq. Compute g 0 psq " ∇ a p Qps n, aq| a"an,0 and H 0 ps s q "´g 0 ps n qg 0 ps n q J. Solve for pη ‹, ω ‹ q " argmin ηą0,ωą0 p gpη, ωq by a non-linear optimization method. Compute the guide actor r πpa|s n q " N pa|φ`ps n q, Σ`ps n qq for each s n. end procedure procedure UPDATE PARAMETERIZED ACTOR Update policy parameter by, e.g., Adam, to minimize the MSE: DISPLAYFORM0 ∇ θ }φ θ ps n q´φ`ps n q} 2 2.22:Update policy covariance by averaging the guide covariances: DISPLAYFORM1 23:end procedure end procedure 25: end for 26: Output: Learned actor π θ pa|sq. We perform experiments on the OpenAI gym platform BID5 with Mujoco Physics simulator BID33 where all environments are v1. We use the state space, action space and the reward function as provided and did not perform any normalization or gradient clipping. The maximum time horizon in each episode is set to 1000. The discount factor γ " 0.99 is only used for learning and the test returns are computed without it. Experiments are repeated for 10 times with different random seeds. 
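As a small illustration of the hyper-parameter machinery described above, the sketch below wraps the SLSQP dual minimization (initialized at η = ω = 0.05, with positivity enforced through bounds) and the entropy lower-bound heuristic κ = max(0.99(E − E₀) + E₀, E₀). The dual function itself is passed in as a callable since its exact sample-based form follows the dual given earlier in the paper; the function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def solve_dual(dual_fn, eta0=0.05, omega0=0.05):
    """Minimize the dual g(eta, omega) over eta > 0, omega > 0 with SLSQP.

    dual_fn is a callable returning the sample-approximated dual value for the
    current mini-batch; its exact form follows the dual function in the paper.
    """
    objective = lambda x: dual_fn(x[0], x[1])
    result = minimize(objective, x0=np.array([eta0, omega0]), method="SLSQP",
                      bounds=[(1e-6, None), (1e-6, None)])
    return result.x[0], result.x[1]          # eta*, omega*

def entropy_lower_bound(current_entropy, base_entropy):
    """kappa = max(0.99 * (E - E0) + E0, E0), applied once every 5000 training steps."""
    return max(0.99 * (current_entropy - base_entropy) + base_entropy, base_entropy)
```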
The total computation time are reported in TAB1. The figures below show the averaged over 10 trials. The y-axis indicates the averaged test returns where the test returns in each trial are computed once every 5000 training time steps by executing 10 test episodes without exploration. The error bar indicates standard error. The total computation time for training the policy for 1 million steps (0.1 million steps for the Invert-Pendulum task). The mean and standard error are computed over 10 trials with the unit in hours. TRPO is not included since it performs a lesser amount of update using batch data samples.
This paper proposes a novel actor-critic method that uses Hessians of a critic to update an actor.
1,756
scitldr
Deep Infomax~(DIM) is an unsupervised representation learning framework by maximizing the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs. In this paper, we propose Supervised Deep InfoMax~(SDIM), which introduces supervised probabilistic constraints to the encoder outputs. The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated. Unlike other works building generative classifiers with conditional generative models, SDIMs scale on complex datasets, and can achieve comparable performance with discriminative counterparts. With SDIM, we could perform \emph{classification with rejection}. Instead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds, otherwise they will be deemed as out of the data distributions, and be rejected. Our experiments show that SDIM with rejection policy can effectively reject illegal inputs including out-of-distribution samples and adversarial examples. Non-robustness of neural network models emerges as a pressing concern since they are observed to be vulnerable to adversarial examples . Many attack methods have been developed to find imperceptible perturbations to fool the target classifiers (; ;). Meanwhile, many defense schemes have also been proposed to improve the robustnesses of the target models (; Tramèr et al., 2017; ;). An important fact about these works is that they focus on discriminative classifiers, which directly model the conditional probabilities of labels given samples. Another promising direction, which is almost neglected so far, is to explore robustness of generative classifiers . A generative classifier explicitly model conditional distributions of inputs given the class labels. During inference, it evaluates all the class conditional likelihoods of the test input, and outputs the class label corresponding to the maximum. Conditional generative models are powerful and natural choices to model the class conditional distributions, but they suffer from two big problems: it is hard to scale generative classifiers on high-dimensional tasks, like natural images classification, with comparable performance to the discriminative counterparts. Though generative classifiers have shown promising of adversarial robustness, they hardly achieve acceptable classification performance even on CIFAR10; ). The behaviors of likelihood-based generative models can be counter-intuitive and brittle. They may assign surprisingly higher likelihoods to out-of-distribution (OoD) samples . discuss the issues of likelihood as a metric for density modeling, which may be the reason of non-robust classification, e.g. OoD samples detection. In this paper, we propose supervised deep infomax (SDIM) by introducing supervised statistical constraints into deep infomax , an unsupervised learning framework by maximizing the mutual information between representations and data. SDIM is trained by optimizing two objectives: maximizing the mutual information (MI) between the inputs and the high-level data representations from encoder; ensuring that the representations satisfy the supervised statistical constraints. The supervised statistical constraints can be interpreted as a generative classifier on high-level data representations giving up the full generative process. 
Unlike full generative models making implicit manifold assumptions, the supervised statistical constraints of SDIM serve as explicit enforcement of manifold assumption: data representations (low-dimensional) are trained to form clusters corresponding to their class labels. With SDIM, we could perform classification with rejection . SDIMs reject illegal inputs based on off-manifold conjecture , where illegal inputs, e.g. adversarial examples, lie far away from the data manifold. Samples whose class conditionals are smaller than the pre-chosen thresholds will be deemed as off-manifold, and prediction requests on them will be rejected. The contributions of this paper are: • We propose Supervised Deep Infomax (SDIM), an end-to-end framework whose probabilistic constraints are equivalent to a generative classifier. SDIMs can achieve comparable classification performance with similar discrinimative counterparts at the cost of small over-parameterization. • We propose a simple but novel rejection policy based on off-manifold conjecture: SDIM outputs a class label only if the test sample's largest class conditional surpasses the prechosen class threshold, otherwise outputs rejection. The choice of thresholds relies only on training set, and takes no additional computations. • Experiments show that SDIM with rejection policy can effectively reject illegal inputs, including OoD samples and adversarial examples generated by a comprehensive group of adversarial attacks. Deep InfoMax is an unsupervised representation learning framework by maximizing the mutual information (MI) of the inputs and outputs of an encoder. The computation of MI takes only input-output pairs with the deep neural networks based esimator MINE . Let E φ be an encoder parameterized by φ, working on the training set, and generating. DIM is trained to find the set of parameters φ such that: the mutual information I(X, Y) is maximized over sample sets X and Y. the representations, depending on the potential downstream tasks, match some prior distribution. Denote J and M the joint and product of marginals of random variables X, Y respectively. MINE estimates a lower-bound of MI with Donsker-Varadhan representation of KL-divergence: where T ω (x, y) ∈ R is a family of functions with parameters ω represented by a neural network. Since in representation learning we are more interested in maximizing MI, than its exact value, non-KL divergences are also favorable candidates. We can get a family of variational lower-bounds using f -divergence representations : where f * is the Fenchel conjugate of a specific divergence f. For KL-divergence, f * (t) = e (t−1). A full f * list is provided in Tab. 6 of. Noise-Contrastive Estimation (Gutmann & Hyvärinen, 2010) can also be used as lower-bound of MI in "infoNCE" . All the components of SDIM framework are summurized in Fig. 1. The focus of Supervised Deep InfoMax (SDIM) is on introducing supervision to probabilistic constraints of DIM for (generative) classification. We choose to maximize the local MI, which has shown to be more effective in classification tasks than maximizing global MI. Equivalently, we minimize J MI: where L φ (x) is a local M × M feature map of x extracted from some intermediate layer of encoder E, andĨ can be any possible MI lower-bounds. By adopting a generative approach p(x, y) = p(y)p(x|y), we assume that the data follows the manifold assumption: the (high-dimensional) data lies on low-dimensional manifolds corresponding to their class labels. 
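For reference, a minimal sketch of the Donsker-Varadhan bound that the MI term above can instantiate is given below, with the product of marginals approximated by shuffling one view within the mini-batch. The statistics-network architecture and names are illustrative only, not the exact local-MI networks used in the paper.

```python
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """T_omega(x, y): scores joint pairs against shuffled (product-of-marginals) pairs."""
    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(T, x, y):
    """E_J[T(x, y)] - log E_M[exp(T(x, y'))], with the product of marginals
    approximated by shuffling y within the mini-batch."""
    joint = T(x, y).mean()
    y_shuffled = y[torch.randperm(y.shape[0])]
    log_mean_exp = torch.logsumexp(T(x, y_shuffled), dim=0) - math.log(y.shape[0])
    return joint - log_mean_exp
```

Maximizing this bound with respect to both the statistics network and the encoder gives the MI-maximization part of the training objective; any of the non-KL variants mentioned above could be dropped in instead.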
Denotex the compact representation generated with encoder E φ (x). In order to explicitly enforce the manifold assumption, we admit the existence of data manifold in the representation space. Assume that y is a discrete random variable representing class labels, and p(x|y) is the real class conditional distribution of the data manifold given y. Let p θ (x|y) be the class conditionals we model parameterized with θ. We approximate p(x|y) by minimizing the KL-divergence between p(x|y) and our model p θ (x|y), which is given by: where the first item on RHS is a constant independent of the model parameters θ. Eq. 4 equals to maximize the expectation Ex,y∼p(x,y) [log p θ (x|y)]. In practice, we minimize the following loss J NLL, equivalent to empicically maximize the above expectation over: Besides the introduction of supervision, SDIM differs from DIM in its way of enforcing the statistical constraints: DIM use adversarial learning to push the representations to the desired priors, while SDIM directly maximizes the parameterized class conditional probability. Maximize Likelihood Margins Since a generative classifier, at inference, decides which class a test input x belongs to according to its class conditional probability. On one hand, we maximize samples' true class conditional probabilities (classes they belong to) using J NLL; On the other hand, we also hope that samples' false class conditional probabilities (classes they do not belong to) can be minimized. This is assured by the following likelihood margin loss J LM: where K is a positive constant to control the margin. For each encoder outputx i, the C − 1 truefalse class conditional gaps are squared 1, which quadratically increases the penalties when the gap becomes large, then are averaged. Putting all these together, the complete loss function we minimize is: Parameterization of Class Conditional Probability Each of the class conditional distribution is represented as an isotropic Gaussian. So the generative classifier is simply a embedding layer with C entries, and each entry contains the trainable mean and variance of a Gaussian. This minimized parameterization encourages the encoder to learn simple and stable low-dimensional representations that can be easily explained by even unimodal distributions. Considering that we maximize the true class conditional probability, and minimize the false class conditional probability at the same time, we do not choose conditional normalizing flows, since the parameters are shared across class labels, and the training can be very difficult. , each class conditional probability is represented with a VAE, thus scaling to complex datasets with huge number of classes, e.g. ImageNet, is almost impossible. A generative approach models the class-conditional distributions p(x|y), as well as the class priors p(y). For classification, we compute the posterior probabilities p(y|x) through Bayes' rule: The prior p(y) can be computed from the training set, or we simply use uniform class prior for all class labels by default. Then the prediction of test sample x * from posteriors is: The drawback of the above decision function is that it always gives a prediction even for illegal inputs. Instead of simply outputting the class label that maximizes class conditional probability of x *, we set a threshold for each class conditional probability, and define our decision function with rejection to be: The model gives a rejection when log p(x * |y *) is smaller than the threshold δ y *. 
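The pieces above can be put together compactly. The sketch below implements the isotropic-Gaussian class conditionals as a C-entry table of trainable means and log-variances, the negative log-likelihood term, one plausible squared-hinge reading of the likelihood-margin loss J_LM (the exact functional form is not fully specified here, so treat that part as an assumption), and the decision function with rejection; all names are ours.

```python
import math
import torch
import torch.nn as nn

class GaussianClassConditionals(nn.Module):
    """C isotropic Gaussians over d-dimensional representations: a C-entry table of
    trainable means and log-variances, as in the embedding-layer parameterization."""
    def __init__(self, n_classes, d):
        super().__init__()
        self.means = nn.Parameter(torch.zeros(n_classes, d))
        self.log_vars = nn.Parameter(torch.zeros(n_classes, d))

    def log_prob(self, z):
        # (batch, C) matrix of class-conditional log-likelihoods log p(z | y).
        diff = z.unsqueeze(1) - self.means.unsqueeze(0)              # (batch, C, d)
        log_p = -0.5 * (diff ** 2 / self.log_vars.exp() + self.log_vars
                        + math.log(2 * math.pi))
        return log_p.sum(dim=-1)

def sdim_losses(log_probs, targets, margin_k=10.0):
    """J_NLL plus a squared-hinge reading of the likelihood-margin loss J_LM."""
    true_lp = log_probs.gather(1, targets[:, None])                  # (batch, 1)
    nll = -true_lp.mean()
    gaps = true_lp - log_probs                                       # true minus each class
    mask = torch.ones_like(log_probs).scatter_(1, targets[:, None], 0.0)
    hinge_sq = torch.clamp(margin_k - gaps, min=0.0) ** 2
    margin = (hinge_sq * mask).sum(dim=1).mean() / (log_probs.shape[1] - 1)
    return nll, margin

def predict_with_rejection(log_probs, thresholds):
    """Predict argmax class conditional, or -1 (reject) if it falls below its threshold."""
    best_lp, pred = log_probs.max(dim=1)
    rejected = best_lp < thresholds[pred]
    return torch.where(rejected, torch.full_like(pred, -1), pred)
```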
Note that here we can use p(x * |y *) and p(x * |y *) interchangeably. This is also known as selective classification or classification with reject option (See Supp. A) 4 RELATED WORKS Robustness of Likelihood-based Generative Models Though likelihood-based generative models have achieved great success in samples synthesis, the behaviors of their likelihoods can be counter-intuitive. Flow-based models and as well as VAEs , surprisingly assign even higher likelihoods to out-of-distribution samples than the samples in the training set . Pixel-level statistical analyses in show that OoD dataset may "sit inside of" the in-distribution dataset (i.e. with roughly the same mean but smaller variance). observe that adversarial examples are outside the training distribution via statistical testing. DefenseGAN models real data distribution with the generator G of GAN. At inference, instead of feeding the test input x to the target classifier directly, it searches for the "closest" sample G(z *) from generator distribution to x as the final input to the classifier. It ensures that the classifier only make predictions on the data manifold represented by the generator, ruling out the potential adversarial perturbations in x. PixelDefend takes a similar approach which uses likelihood-based generative model -PixelCNN to model the data distribution. Both DefenseGAN and PixelDefend are additionally trained as peripheral defense schemes agnostic to the target classifiers. Training generative models on complex datasets notoriously takes huge amount of computational resources . In contrast, the training of SDIM is computationally similar to its discriminative counterpart. The verification of whether inputs are offmanifold is a built-in property of the SDIM generative classifier. The class conditionals of SDIM are modeled on low-dimensional data representations with simple Gaussians, which is much easier, and incurs very small computations. Datasets We evaluate the effectiveness of the rejection policy of SDIM on four image datasets: MNIST, FashionMNIST (both resized to 32×32 from 28×28); and CIFAR10, SVHN. See App. B.1 for details of data processing. For out-of-distribution samples detection, we use the dataset pairs on which likelihood-based generative models fail : FashionMNIST (in)-MNIST (out) and CIFAR10 (in)-SVHN (out). Adversarial examples detection are evaluated on MNIST and CIFAR10. Choice of thresholds It is natural that choosing thresholds based on what the model knows, i.e. training set, and can reject what the model does not know, i.e. possible illegal inputs. We set one threshold for each class conditional. For each class conditional probability, we choose to evaluate on two different thresholds: 1st and 2nd percentiles of class conditional log-likelihoods of the correctly classified training samples. Compared to the detection methods proposed in, our choice of thresholds is much simpler, and takes no additional computations. Models A typical SDIM instance consists of three networks: an encoder, parameterized by φ, which outputs a d-dimensional representation; mutual information evaluation networks, i.e. T ω in Eqn. and Eqn.; and C-way class conditional embedding layer, parameterized by θ, with each entry a 2d-dimensional vector. We set d = 64 in all our experiments. For encoder of SDIM, we use ResNet on 32 × 32 with a stack of 8n + 2 layers, and 4 filter sizes {32, 64, 128, 256}. The architecture is summarized as: The last layer of encoder is a d-way fully-connected layer. 
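Concretely, the threshold choice above depends only on the training set. A small helper consistent with that description (the function name is ours):

```python
import numpy as np

def class_thresholds(log_probs, targets, percentile=1.0):
    """Per-class rejection thresholds: the chosen percentile of the class-conditional
    log-likelihoods of correctly classified training samples.

    log_probs : (N, C) array of log p(x_i | y) over the training set.
    targets   : (N,) array of ground-truth labels.
    """
    preds = log_probs.argmax(axis=1)
    thresholds = np.full(log_probs.shape[1], -np.inf)
    for c in range(log_probs.shape[1]):
        correct_c = (targets == c) & (preds == c)
        if correct_c.any():
            thresholds[c] = np.percentile(log_probs[correct_c, c], percentile)
    return thresholds
```

Passing percentile=1.0 or 2.0 gives the two threshold settings evaluated in the experiments.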
To construct a discriminative counterpart, we simply set the output size of the encoder's last layer to C for classification. We use ResNet10 (n = 1) on MNIST, FashionMNIST, and ResNet26 (n = 3) on CIFAR10, SVHN. We report the classification accuracies (see Tab. 1) of SDIMs and the discriminative counterparts on clean test sets. Results show that SDIMs achieve the same level of accuracy as the discriminative counterparts with slightly increased number of parameters (17% increase for ResNet10, and 5% increase for ResNet26). We are aware of the existence of better reported on these datasets using more complex models or automatically designed architectures , but pushing the state-of-the-art is not the focus of this paper. , both model class conditional probability with VAE , and achieve acceptable accuracies (> 98%) on MNIST. However, it is hard for fully conditional generative models to achieve satisfactory classification accuracies even on CIFAR10. On CIFAR10, methods in achieve only < 50% accuracy. They also point out that the classification accuracy of a conditional PixelCNN++ is only 72.4%. The test accuracy of ABS in is only 54%. In contrast, SDIM could achieve almost the same performance with similar discriminative classifier by giving up the full generative process, and building generative classifier on high-level representations. improves the accuracy to 92% by feeding the features learned by powerful discriminative classifier-VGG16 to their generative classifiers, which also suggests that modeling likelihood on high-level representation (features) is more favorable for generative classification than pixel-level likelihood of fully generative classifiers. For classification tasks, discovering discriminative features is much more important than reconstructing the all the image pixels. Thus performing generative classification with full generative models may not be the right choice. We also investigate the implications of the proposed decision function with rejection under different thresholds. The in Tab. 2 show that choosing a higher percentile as threshold will reject more prediction requests. At the same time, the classification accuracies of SDIM on the left test sets become increasingly better. This demonstrate that out rejection policy tend to reject the ones on which SDIMs make wrong predictions. Table 2: Classification performances of SDIMs using the proposed decision function with rejection. We report the rejection rates of the test sets and the accuracies on the left test sets for each threshold. Class-wise OoD detections are performed, and mean detection rates over all in-distribution classes are reported in Tab. 3. For each in-distribution class c, we evaluate the log-likelihoods of the whole OoD dataset. Samples whose log-likelihoods are lower the class threshold δ c will be detected as OoD samples. Same evaluations are applied on conditional Glows with 10th percentile thresholds, but the are not good. The are clear and confirm that SDIMs, generative classifiers on high-level representations, are more effective on classification tasks than fully conditional generative models on raw pixels. Note that fully generative models including VAE used in; fail on OoD detection. The stark difference between SDIM and full generative models (flows or VAEs) is that SDIM models samples' likelihoods in the high-level representation spaces, while generative models evaluate directly on the raw pixels. See Supp. 
C for more about the histograms of the class conditionals of in-out distributions. Table 3: Mean detection rates of SDIMs and Glows with different thresholds on OoD detection. We comprehensively evaluate the robustness of SDIMs against various attacks: • gradient-based attacks: one-step gradient attack FGSM , its iterative variant projected gradient descent , CW-L 2 attack , deepfool . • score-based attacks: local search attack . • decision-based attack: boundary attack . Attacks Using Cross-Entropy We find that SDIMs are much more robust to gradient-based attacks using cross-entropy, e.g. FGSM and PGD, since the gradients numerically vanish as a side effect of the likelihood margin loss J LM of SDIM. This phenomenon is similar to some defences that try to hinder generations of adversarial examples by masking the gradients on inputs. While full generative classifiers in still suffer from these attacks. See Supp. D.1 for detailed . Conservative Adversarial Examples Adversarial attacks aim to find the minimal perturbations that sufficiently change the classification labels, i.e. flip other logits to be the largest one. We show case examples on MNIST generated by untargeted attacks and their logits in Tab. 4 (See Supp. D.2 for examples of CIFAR10). Though these attacks successfully flip the logits, they are designed to be conservative to avoid more distortions to the original images. As a , the largest logits of adversarial examples are still much lower than the thresholds, so they can be detected by our rejection policy. We find that our rejection policy performs perfectly on MNIST, but fails to detect all adversarial examples on CIFAR10 except for Boundary attack (See Tab. 5). It seems to be a well-known observation that models trained on CIFAR10 are more vulnerable than one trained on MNIST. connects this observation to the generalization of models. They found that many test samples, though correctly classified, are close to the misclassfied samples, which implies the existence of adversarial examples. If a model has higher error rate, it would take smaller perturbations to move correctly classified samples to misclassified areas. Table 4: Full logits of the adversarial examples generated with different attacks. The original image is the first sample of class 0 of MNIST test set. The first row gives the 1st percentile thresholds, and the second row shows the logits of the original image. The largest logits are marked in bold. Adversarial examples with more confidence Based on the observations above, a natural question we should ask is: can we generate adversarial examples with not only successfully flipped logits, Table 5: Detection rates of our rejection policies. We perform untargeted adversarial evaluation on the first 1000 images of test sets. CW-L 2 is not involved here, but carefully investigated below. but also the largest logit larger than some threshold value? Unlike the conservativeness on paying more distortions of other attacks, CW attack allows us to control the gap between largest and second largest logits with some confidence value κ. We perform targeted CW attacks with confidences κ = {0, 500, 1000} (Tab. 6). We find that increasing the confidences help increasing the largest logits of adversarial examples to some extent, but may lead to failures of generation. The sensitivity to confidence values is also different given different targets. The success rates of generating adversarial examples monotonically decreases with the confidences increasing (Tab. 7). 
Note that on discriminative counterparts, CW-L 2 with the same settings easily achieves 100% success rates. This means that explicitly forcing data representations to form clusters with maximum margins between them help increase average distances between normal samples and the nearest misclassified areas, thus increase the hardness of finding minimal adversarial perturbations. In this case, it takes a large enough adversarial perturbation to move a sample from its cluster to the other. Meanwhile, detection rates remain satisfactory on MNIST, but obviously decline on CIFAR10. For victim generative classifiers in under CW-L 2 attack, the detection rates of adversarial examples using the proposed detection methods can be > 95% on MNIST, but fall < 50% on even CIFAR10-binary (their models don't scale on CIFAR10, and CW-L 2 with non-zero confidences are also not evaluated). Table 7: Targeted adversarial evaluations of our rejection policies on the first 1000 test samples. We report the detection rates with different thresholds and success rates of generating adversarial examples. Discussions on off-manifold conjecture challenges whether the off-manifold conjecture holds in general. They experiment on synthetic dataset-two high-dimensional concentric spheres with theoretical analyses, showing that even for a trained classifier with close to zero test error, there may be a constant fraction of the data manifold misclassified, which indicates the existence of adversarial examples within the manifold. But there are still several concerns to be addressed: First, as also pointed out by the authors, the manifolds in natural datasets can be quite complex than that of simple synthesized dataset. draws similar from analyses on synthesized data with particular geometry. So the big concern is whether the in; still hold for the manifolds in natural datasets. A practical obstacle to verify this is that works modeling the full generative processes are based on manifold assumption, but provide no explicit manifolds for analytical analyses like;. While SDIM enables explicit and customized manifolds on high-level data representations via probabilistic constraints, thus enables analytical analyses. In this paper, samples of different classes are trained to form isotropic Gaussians corresponding to their classes in representation space (other choices are possible). The relation between the adversarial robustness and the forms and dimensionalities of data manifolds is to be explored. Second, in their experiments, all models evaluated are discriminative classifiers. Considering the recent promising of generative classifiers against adversarial examples, would using generative classifiers lead to different ? One thing making us feel optimistic is that even though the existence of adversarial examples is inevitable, suggest that adversarial robustness can be improved by minimizing the test errors, which is also supported by our experimental differences on MNIST and CIFAR10. We introduce supervised probabilistic constraints to DIM. Giving up the full generative process, SDIMs are equivalent to generative classifiers on high-level data representations. Unlike full conditional generative models which achieve poor classification performance even on CIFAR10, SDIMs attain comparable performance as the discriminative counterparts on complex datasets. The training of SDIM is also computationally similar to discriminative classifiers, and does not require prohibitive computational resources. 
Our proposed rejection policy based on off-manifold conjecture, a built-in property of SDIM, can effectively reject illegal inputs including OoD samples and adversarial examples. We demonstrate that likelihoods modeled on high-level data representations, rather than raw pixel intensities, are more robust on downstream tasks without the requirement of generating real samples. We make comparisons between SDIM and GBZ, which consistently performs best in Deep Bayes. FGSM and PGD-L ∞ The in Fig 4 and Fig 5 show that SDIM performs consistently better than the baseline. We find that increasing the distortion factor of FGSM has no influences of SDIM's accuracy, and the adversarial examples keep the same. Recall that the class conditionals are optimized to keep a considerable margin. Before evaluating the cross entropy loss, softmax is applied on the class conditionals log p(x|c) to generate a even sharper distribution. So for the samples that are correctly classified, their losses are numerically zeros, and the gradient on inputs ∇J x (x, y) are also numerically zeros. The PGD-L ∞ we use here is the randomized version 2, which adds a small random perturbation before the iterative loop. The randomness is originally introduced to generate different adversarial examples for adversarial training, but here it breaks the zero loss so that the gradient on inputs ∇J x (x, y) will not be zeros in the loop. FGSM can also be randomized (Tramèr et al., 2017), which can be seen as a one-step variant of randomized PGD. This phenomena is similar to what some defenses using gradient obfuscation want to achieve. Defensive distillation masks the gradients of cross-entropy by increasing the temperature of softmax. But for CW attacks, which do not use cross-entropy, and operate on logits directly, this could be ineffective. 4 -353.1 -471.6 -400.3 -342.7 -367.2 -486.4 -326.4 Boundary 213.9 -417.4 -458.3 -548.0 -587.4 -236.3 214.0 -1246.1 -171.2 -555.6 LocalSearch 165.2 -485.7 190.9 -325.6 -439.0 -379.0 -318.8 -327.5 -357.9 -272.3 Table 9: Full logits of adversarial examples generated with different attacks. The original image is the fist sample of class 0 of CIFAR10 test set. The first row gives the 1st percentile thresholds, and the second row shows the logits of the original image. The largest logits are marked in bold.
Scale generative classifiers on complex datasets, and evaluate their effectiveness at rejecting illegal inputs, including out-of-distribution samples and adversarial examples.
1,757
scitldr
Abstract In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights. We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule. For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work.

The design of neural networks is often considered a black art, driven by trial and error rather than foundational principles. This is exemplified by the success of recent architecture random-search techniques, which take the extreme of applying no human guidance at all. Although as a field we are far from fully understanding the nature of learning and generalization in neural networks, this does not mean that we should proceed blindly. In this work we define a scaling quantity γ_l for each layer l that approximates the average squared singular value of the corresponding diagonal block of the Hessian for layer l. This quantity is easy to compute from the (non-central) second moments of the forward-propagated values and the (non-central) second moments of the backward-propagated gradients. We argue that networks that have constant γ_l are better conditioned than those that do not, and we analyze how common layer types affect this quantity. We call networks that obey this rule preconditioned neural networks, in analogy to preconditioning of linear systems. As an example of some of the possible applications of our theory, we:

• Propose a principled weight initialization scheme that can often provide an improvement over existing schemes;
• Show which common layer types automatically result in well-conditioned networks;
• Show how to improve the conditioning of common structures such as bottlenecked residual blocks by the addition of fixed scaling constants to the network (detailed in Appendix E).

We will use the multilayer perceptron (i.e. a classical feed-forward deep neural network) as a running example, as it is the simplest non-trivial deep neural network structure. We use ReLU activation functions and the following notation for layer l out of L:

y_l = W_l x_l + b_l,    x_{l+1} = ReLU(y_l),

where W_l is an n_l^out × n_l^in matrix of weights, b_l is the bias vector, y_l is the pre-activation vector, and x_l is the input activation vector for the layer. The quantities n_l^in and n_l^out are the fan-in and fan-out of the layer, respectively. We also denote the gradient of a quantity with respect to the loss (i.e. the back-propagated gradient) with the prefix ∆. We initially focus on the least-squares loss. Additionally, we assume that each bias vector is initialized with zeros unless otherwise stated.

Our proposed approach focuses on the singular values of the diagonal blocks of the Hessian. In a multilayer perceptron (MLP) network, each diagonal block corresponds to the weights from a single weight matrix W_l or bias vector b_l. This block structure is used by existing approaches such as K-FAC and its variants, which correct the gradient step using estimates of second-order information. In contrast, our approach modifies the network to improve the Hessian without modifying the step.
Estimates of the magnitude of the singular values σ_i(G_l) of the diagonal blocks G_1, ..., G_L of the Hessian G provide information about the singular values of the full matrix. Proposition 1. Let G_l be the lth diagonal block of a real symmetric matrix G: n × n. Then for all i, the singular values σ_i(G_l) are controlled by the extreme singular values of G. We can use this simple bound to provide some insight into the conditioning of the full matrix: Corollary 2. Let S = {s_1, . . ., s_n} be the union of the sets of singular values of the diagonal blocks G_1, ..., G_L of a real symmetric matrix G: n × n. Then the condition number κ(G) = σ_max(G)/σ_min(G) is bounded below by max(S)/min(S). In particular, a Hessian matrix with a very large difference between the singular values of each block must be ill-conditioned. This provides strong motivation for balancing the magnitude of the singular values of each diagonal block, the goal of this work. Although ideally we would like to state the converse, that a matrix with balanced blocks is well conditioned, we cannot make such a statement without strong assumptions on the off-diagonal behavior of the matrix. We use the average squared singular value of each block as a proxy for the full spectrum, as it is particularly easy to estimate in expectation. Although the minimum and maximum for each block would seem like a more natural quantity to work with, we found that any such bounds tend to be too pessimistic to reflect the behavior of the actual singular values. When using the ReLU activation function, as we consider in this work, a neural network is no longer a smooth function of its inputs, and the Hessian becomes ill-defined at some points in the parameter space. Fortunately, the spectrum is still well-defined at any twice-differentiable point, and this gives a local measure of the curvature. ReLU networks are typically twice-differentiable almost everywhere. We assume this throughout the remainder of this work. Our analysis will proceed with batch-size 1 and a network with k outputs. We consider the network at initialization, where weights are centered, symmetric and i.i.d random variables and biases are set to zero. ReLU networks have a particularly simple structure for the Hessian with respect to any set of activations, as the network's output is a piecewise-linear function g fed into a final layer consisting of a loss. This structure results in greatly simplified expressions for diagonal blocks of the Hessian with respect to the weights. We will consider the output of the network as a composition of two functions, the current layer g, and the remainder of the network h. We write this as a function of the weights, i.e. f(W_l) = h(g(W_l)). The dependence on the input to the network is implicit in this notation, and the network below layer l does not need to be considered. Let R_l denote the Hessian of h, the remainder of the network after application of layer l (recall y_l = W_l x_l), and let J_l be the Jacobian of y_l with respect to W_l. Given these quantities, the diagonal block of the Hessian corresponding to W_l is equal to G_l = J_l^T R_l J_l, the lth diagonal block of the (Generalized) Gauss-Newton matrix G. We discuss this decomposition further in Appendix A.1. We use the notation E[X^2] for any matrix or vector X to denote the expectation of the element-wise non-central second moment. Proposition 3. (The GR scaling) Under the assumptions outlined in Section 3.2, the average squared singular value of G_l is equal to a quantity that we call the GR scaling for MLP layers, denoted γ_l.
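As a quick numerical illustration of the two ideas above (the average squared singular value of a block is easy to estimate from random matrix-vector products, and a large spread of singular values across blocks forces a large condition number), here is a minimal NumPy sketch. The matrix, block size and sample counts are arbitrary placeholders, and the Corollary-2-style bound is checked on a positive semi-definite matrix, matching the Gauss-Newton setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, block = 64, 16                          # placeholder sizes
A = rng.standard_normal((n, n))
G = A @ A.T / n                            # symmetric positive semi-definite "Hessian"

def avg_sq_singular_value(B, samples=2000):
    """Monte-Carlo estimate of the mean squared singular value via E[(B r)^2]."""
    total = 0.0
    for _ in range(samples):
        r = rng.standard_normal(B.shape[1])   # i.i.d. unit-variance probe vector
        total += np.mean((B @ r) ** 2)
    return total / samples

block_svals = []
for start in range(0, n, block):
    B = G[start:start + block, start:start + block]
    svals = np.linalg.svd(B, compute_uv=False)
    block_svals.append(svals)
    print(f"block {start // block}: exact mean sigma^2 = {np.mean(svals ** 2):.3f}, "
          f"estimate = {avg_sq_singular_value(B):.3f}")

# Corollary-2-style bound in the PSD case: kappa(G) is at least the spread of
# singular values collected across the diagonal blocks.
S = np.concatenate(block_svals)
print(f"max(S)/min(S) = {S.max() / S.min():.1f} <= kappa(G) = {np.linalg.cond(G):.1f}")
```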
We define a "balanced" or "preconditioned" network as one in which γ_l is equal for all l (full derivation in Appendix A). Balancing this theoretically derived GR scaling quantity in a network will produce an initial optimization problem for which the blocks of the Hessian are expected to be approximately balanced with respect to their average squared singular value. Due to the large number of approximations needed for this derivation, which we discuss further in the next section, we don't claim that this theoretical approximation is accurate, or that the blocks will be closely matched in practice. Rather, we make the lesser claim that a network with very disproportionate values of γ_l between layers is likely to have convergence difficulties during the early stages of optimization due to Cor. 2. To check the quality of our approximation, we computed the ratio of the convolutional version of the GR scaling equation (Equation 1) to the actual E[(G_l r)^2] product for a strided (rather than max-pooled, see Table 1) LeNet model, where we use random input data and a random loss (i.e. for outputs y we use y^T R y for an i.i.d normal matrix R), with batch-size 1024, and 32 × 32 input images. The results are shown in Figure 2 for 100 sampled setups; there is generally good agreement with the theoretical expectation. The following strong assumptions are used in the derivation of the GR scaling: (A1) The input and target values are drawn element-wise i.i.d from a centered symmetric distribution with known variance. (A2) The Hessian of the remainder of the network above each block, with respect to the output, has Frobenius norm much larger than 1. More concretely, we assume that all but the highest order terms that are polynomial in this norm may be dropped. (A3) All activations, pre-activations and gradients are independently distributed element-wise. In practice, due to the mixing effect of multiplication by random weight matrices, only the magnitudes of these quantities are correlated, and the effect is small for wide networks due to the law of large numbers. Independence assumptions of this kind are common when approximating second-order methods; the block-diagonal variant of K-FAC makes similar assumptions, for instance. Assumption A2 is the most problematic of these assumptions, and we make no claim that it holds in practice. However, we are primarily interested in the properties of blocks and their scaling with respect to each other, not their absolute scaling. Assumption A2 results in very simple expressions for the scaling of the blocks without requiring a more complicated analysis of the top of the network. Similar theory can be derived for other assumptions on the output structure, such as the assumption that the target values are much smaller than the outputs of the network. We provide further motivation for the utility of preconditioning by comparing it to another simple quantity of interest. Consider, at network initialization, the ratio of the (element-wise non-central) second moments of each weight-matrix gradient to the weight matrix itself, ν_l = E[∆W_l^2] / E[W_l^2]. This ratio approximately captures the relative change that a single SGD step with unit step-size on W_l will produce. We call this quantity the weight-to-gradient ratio. A network with constant ν_l is also well-behaved under weight-decay, as the ratio of weight-decay second moments to gradient second moments will stay constant throughout the network, keeping the push-pull of gradients and decay constant across the network.
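As a concrete illustration, the weight-to-gradient ratio ν_l = E[∆W_l^2]/E[W_l^2] can be measured directly at initialization. The sketch below assumes PyTorch, a toy ReLU MLP with placeholder widths, and random inputs and targets standing in for assumption (A1); it is illustrative only, not the authors' measurement code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
widths = [128, 384, 64, 10]                      # placeholder layer widths
layers = []
for n_in, n_out in zip(widths[:-1], widths[1:]):
    layers += [nn.Linear(n_in, n_out), nn.ReLU()]
model = nn.Sequential(*layers[:-1])              # no ReLU after the final layer

x = torch.randn(1024, widths[0])                 # random inputs, stand-in for (A1)
target = torch.randn(1024, widths[-1])           # random targets
loss = 0.5 * ((model(x) - target) ** 2).sum(dim=1).mean()
loss.backward()

linear_layers = [m for m in model if isinstance(m, nn.Linear)]
for i, layer in enumerate(linear_layers):
    w, g = layer.weight, layer.weight.grad
    nu = (g ** 2).mean() / (w ** 2).mean()       # element-wise non-central second moments
    print(f"layer {i}: nu = {nu.item():.3e}")
```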
Remarkably, the weight-to-gradient ratio ν_l turns out to be equivalent to the GR scaling for MLP networks: Proposition 4. (Appendix 8) ν_l is equal to the GR scaling γ_l for i.i.d mean-zero randomly-initialized multilayer perceptron layers under the independence assumptions of Appendix 3.2. The concept of GR scaling may be extended to scaled convolutional layers y_l = α_l Conv_{W_l}(x_l) + b_l with scaling factor α_l, kernel width k_l, batch-size b, and output resolution ρ_l × ρ_l. A straightforward derivation gives the corresponding expressions for the convolution weights and biases. This requires an assumption of independence of the values of activations within a channel that is not true in practice, so γ_l tends to be further away from empirical estimates for convolutional layers than for non-convolutional layers, although it is still a useful guide. The effect of padding is also ignored here. Sequences of convolutions are well-scaled against each other as long as the kernel size remains the same. The scaling of layers involving differing kernel sizes can be corrected using the alpha parameter (Appendix E), and more generally any imbalance between the conditioning of layers can be fixed by modifying α_l while at the same time changing the initialization of W_l so that the forward variance remains the same as the unmodified version. This adjusts γ_l while leaving all other γ the same. For ReLU networks with a classical multilayer-perceptron (i.e. non-convolutional, non-residual) structure, we show in this section that initialization using i.i.d mean-zero random variables with (non-central) second moment inversely proportional to the geometric mean of the fans, E[W_l^2] = c / sqrt(n_l^in n_l^out) for some fixed constant c, results in a constant GR scaling throughout the network. Proposition 5. Let W_0: m × n and W_1: p × m be weight matrices satisfying the geometric initialization criteria of Equation 2, and let b_0, b_1 be zero-initialized bias parameters. Consider the following sequence of two layers, y_0 = W_0 x_0 + b_0, x_1 = ReLU(y_0), y_1 = W_1 x_1 + b_1, where x_0 and ∆y_1 are i.i.d, mean 0, uncorrelated and symmetrically distributed. Then ν_0 = ν_1. Proof. Note that the ReLU operation halves both the forward and backward (non-central) second moments, due to our assumptions on the distributions of x_0 and ∆y_1, so E[x_1^2] = E[y_0^2]/2 and E[∆y_0^2] = E[∆x_1^2]/2. Consider the first weight-gradient ratio ν_0, using E[∆W_0^2] = E[∆y_0^2] E[x_0^2]. Under our assumptions, back-propagation through W_1 expresses E[∆x_1^2] in terms of E[W_1^2] and E[∆y_1^2]. Now consider the second weight-gradient ratio ν_1. Under our assumptions, applying forward propagation expresses E[y_0^2] in terms of E[W_0^2] and E[x_0^2]; substituting these quantities into the two ratios and using the geometric initialization of Equation 2 yields matching expressions, so ν_0 = ν_1. Remark 6. This relation also holds for sequences of (potentially) strided convolutions, but only if the same kernel size is used everywhere and circular padding is used. The initialization should be modified to include the kernel size, changing the denominator of the expression to include a factor of the kernel size k. The most common approaches are the Kaiming and Xavier initializations. The Kaiming technique for ReLU networks is actually one of two approaches, a fan-in variant E[W_l^2] = 2/n_l^in and a fan-out variant E[W_l^2] = 2/n_l^out. For the feed-forward network above, assuming random activations, the forward-activation variance will remain constant in expectation throughout the network if fan-in initialization of weights is used, whereas the fan-out variant maintains a constant variance of the back-propagated signal. The constant factor 2 in the above expressions corrects for the variance-reducing effect of the ReLU activation. Although popularized by , similar scaling was in use in early neural network models that used tanh activation functions .
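The sketch below contrasts, for a fully-connected ReLU layer, the fan-in and fan-out rules just described, Xavier's arithmetic-mean compromise, and the geometric-mean rule of Equation 2 (shown here with c = 2). It assumes PyTorch; convolutional kernels and the kernel-size correction of Remark 6 are deliberately left out.

```python
import math
import torch
import torch.nn as nn

def init_linear(layer: nn.Linear, mode: str = "geometric", c: float = 2.0) -> nn.Linear:
    fan_in, fan_out = layer.in_features, layer.out_features
    if mode == "fan_in":                 # Kaiming, preserves forward variance
        second_moment = c / fan_in
    elif mode == "fan_out":              # Kaiming, preserves backward variance
        second_moment = c / fan_out
    elif mode == "xavier":               # Glorot's arithmetic-mean compromise
        second_moment = 2.0 / (fan_in + fan_out)
    elif mode == "geometric":            # Equation 2: c / sqrt(fan_in * fan_out)
        second_moment = c / math.sqrt(fan_in * fan_out)
    else:
        raise ValueError(f"unknown mode {mode}")
    with torch.no_grad():
        layer.weight.normal_(mean=0.0, std=math.sqrt(second_moment))
        layer.bias.zero_()
    return layer

layer = init_linear(nn.Linear(384, 64), mode="geometric")
print(layer.weight.pow(2).mean().item())   # approximately 2 / sqrt(384 * 64)
```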
These two principles are clearly in conflict; unless n_l^in = n_l^out, either the forward variance or backward variance will become non-constant, or as more commonly expressed, either explode or vanish. No prima facie reason for preferring one initialization over the other is provided. Unfortunately, there is some confusion in the literature as many works reference using Kaiming initialization without specifying if the fan-in or fan-out variant is used. The Xavier initialization is the closest to our proposed approach. They balance these conflicting objectives using the arithmetic mean, E[W_l^2] = 2/(n_l^in + n_l^out), to "... approximately satisfy our objectives of maintaining activation variances and back-propagated gradients variance as one moves up or down the network". This approach to balancing is essentially heuristic, in contrast to the geometric mean approach that our theory directly guides us to. We can use the same proof technique to compute the GR scaling for the bias parameters in a network. Our update equations change to include the bias term, y_l = W_l x_l + b_l, with b_l assumed to be initialized at zero. We derive the corresponding expression in Appendix D. It is easy to show using the techniques of Section 6 that the biases of consecutive layers have equal GR scaling as long as geometric initialization is used. However, unlike in the case of weights, we have less flexibility in the choice of the numerator. Instead of allowing all weights to be scaled by c for any positive c, we require that c = 2 (Proposition 7). It is traditional to normalize a dataset before applying a neural network so that the input vector has mean 0 and variance 1 in expectation. This principle is rarely questioned in modern neural networks, even though there is no longer a good justification for its use in modern ReLU-based networks. In contrast, our theory provides direct guidance for the choice of input scaling. We show that the (non-central) second moment of the input affects the GR scaling of bias and weight parameters differently and that they can be balanced by careful choice of the initialization. Consider the GR scaling values for the bias and weight parameters in the first layer of a ReLU-based multilayer perceptron network, as considered in previous sections. We assume the data is already centered. We can cancel terms to find the value of E[x_0^2] that makes these two quantities equal. In common computer vision architectures, the input planes are the 3 color channels, and the kernel size is k = 3, giving E[x_0^2] ≈ 0.2. Using the traditional variance-one normalization will result in the effective learning rate for the bias terms being lower than that of the weight terms. This will result in potentially slower learning of the bias terms than with the input scaling we propose. A neural network's behavior is also very sensitive to the (non-central) second moment of the outputs. For a convolutional network without pooling layers (but potentially with strided dimensionality reduction), if geometric-mean initialization is used the activation (non-central) second moments can be written in closed form, and the application of a sequence of these layers gives a telescoping product. We potentially have independent control over this (non-central) second moment at initialization, as we can insert a fixed scalar multiplication factor at the end of the network that modifies it.
This may be necessary when adapting a network architecture that was designed and tested under a different initialization scheme, as the success of the architecture may be partially due to the output scaling that happens to be produced by that original initialization. We are not aware of any existing theory guiding the choice of output variance at initialization for the case of log-softmax losses, where it has a non-trivial effect on the back-propagated signals, although output variances of 0.01 to 0.1 appear to work well. The output variance should always be checked and potentially corrected when switching initialization schemes. Consider a network where γ_l is constant throughout. We may add a layer between any two existing layers without affecting this conditioning, as long as the new layer maintains the activation-gradient (non-central) second-moment product and the dimensionality; this follows from Equation 1. For instance, adding a simple scaling layer of the form x_{l+1} = 2x_l doubles the (non-central) second moment during the forward pass and doubles the backward (non-central) second moment during back-propagation, which maintains this product. When spatial dimensionality changes between layers we can see that the GR scaling is no longer maintained just by balancing this product, as γ depends directly on the square of the spatial dimension. Instead, a pooling operation that changes the forward and backward signals in a way that counteracts the change in spatial dimension is needed. The use of stride-2 convolutions, as well as average pooling, results in the correct scaling, but other common types of spatial reduction generally do not. It is particularly interesting to note that the evolution in state-of-the-art architectures corresponds closely to a move from poorly scaled building blocks to well-scaled ones. Early shallow architectures like LeNet-5 used tanh nonlinearities, which were replaced by the (well-scaled) ReLU, used for instance in the seminal AlexNet architecture . AlexNet and the later VGG architectures made heavy use of max-pooling and reshaping before the final layers, both operations which have been replaced in modern fully-convolutional architectures with (well-scaled) striding and average-pooling respectively. The use of large kernel sizes is also in decline. The AlexNet architecture used kernel sizes of 11, 5 and 3, whereas modern ResNet architectures only use 7, 3 and 1. Furthermore, recent research has shown that replacing the single 7x7 convolution used with a sequence of three 3x3 convolutions improves performance . We considered a selection of dense and moderate-sparsity multi-class classification datasets from the LibSVM repository, 26 in total. The same model was used for all datasets, a non-convolutional ReLU network with 3 weight layers total. The inner two layer widths were fixed at 384 and 64 nodes respectively. These numbers were chosen to result in a larger gap between the optimization methods; less difference could be expected if a more typical 2× gap was used. Our results are otherwise generally robust to the choice of layer widths. For every dataset, learning rate and initialization combination we ran 10 seeds and picked the median loss after 5 epochs as the focus of our study (the largest differences can be expected early in training). Learning rates in the range 2^1 to 2^-12 (in powers of 2) were checked for each dataset and initialization combination, with the best learning rate chosen in each case based off of the median of the 10 seeds.
Training loss was used as the basis of our comparison as we care primarily about convergence rate, and are comparing identical network architectures. Some additional details concerning the experimental setup and which datasets were used are available in the Appendix. Table 1 shows that geometric initialization is the most consistent of the initialization approaches considered. It has the lowest loss, after normalizing each dataset, and it is never the worst of the 4 methods on any dataset. Interestingly, the fan-out method is most often the best method, but consideration of the per-dataset plots (Appendix F) shows that it often completely fails to learn for some problems, which pulls up its average loss and results in it being the worst for 9 datasets. Testing an initialization method on modern computer vision problems is problematic due to the heavy architecture search, both automated and manual, that is behind the current best methods. This search will fit the architecture to the initialization method, in a sense, so any other initialization is at a disadvantage compared to the one used during architecture search. This is further complicated by the prevalence of BatchNorm, which is not handled in our theory. Instead, to provide a clear comparison we use an older network with a large variability in kernel sizes, the AlexNet architecture. This architecture has a large variety of filter sizes (11, 5, 3, linear), which according to our theory will affect the conditioning adversely, and which should highlight the differences between the methods. We found that a network with consistent kernel sizes throughout showed only negligible differences between the initialization methods. The network was modified to replace max-pooling with striding as max-pooling is not well-scaled by our theory (further details in Appendix F). Following Section 6.4, we normalize the output of the network at initialization by running a single batch through the network and adding a fixed scaling factor to the network to produce output standard deviation 0.05. For our preconditioned variant, we added alpha correction factors following Section 5 in conjunction with geometric initialization, and compared this against other common initialization methods. We tested on CIFAR-10 following the standard practice as closely as possible, as detailed in Appendix F. We performed a geometric learning rate sweep over a power-of-two grid. Results are shown in Figure 3 for an average of 10 seeds for each initialization. Preconditioning improves training loss over all other initialization schemes tested, although only by a small margin. Although not a panacea, by using the scaling principle we have introduced, neural networks can be designed with a reasonable expectation that they will be optimizable by stochastic gradient methods, minimizing the amount of guess-and-check neural network design. As a consequence of our scaling principle, we have derived an initialization scheme that automatically preconditions common network architectures. Most developments in neural network theory attempt to explain the success of existing techniques post-hoc. Instead, we show the power of the scaling law approach by deriving a new initialization technique from theory directly. We then combine the simplifications from Equations 9, 10, and 11 to give the final expression.
Standard ReLU classification and regression networks have a particularly simple structure for the Hessian with respect to the input, as the network's output is a piecewise-linear function g fed into a final layer consisting of a convex log-softmax operation, or a least-squares loss. This structure results in the Hessian with respect to the input being equivalent to its Gauss-Newton approximation. The Gauss-Newton matrix can be written in a factored form, which is used in the analysis we perform in this work. We emphasize that this is just used as a convenience when working with diagonal blocks; the GN representation is not an approximation in this case. The (Generalized) Gauss-Newton matrix G is a positive semi-definite approximation of the Hessian of a non-convex function f, given by factoring f into the composition of two functions, f(x) = h(g(x)), where h is convex, and g is approximated by its Jacobian matrix J at x for the purpose of computing G = J^T ∇²h J. The GN matrix also has close ties to the Fisher information matrix , providing another justification for its use. Surprisingly, the Gauss-Newton decomposition can be used to compute diagonal blocks of the Hessian with respect to the weights W_l as well as the inputs . To see this, note that for any activation y_l, the layers above may be treated in a combined fashion as the h in a f(W_l) = h(g(W_l)) decomposition of the network structure, as they are the composition of a (locally) linear function and a convex function and thus convex. In this decomposition g(W_l) = W_l x_l + b_l is a function of W_l with x_l fixed, and as this is linear in W_l, the Gauss-Newton approximation to the block is thus not an approximation. We make heavy use of the equations for forward propagation and backward propagation of second moments, under the assumption that the weights are uncorrelated to the activations or gradients. For a convolution with input channels n_in, output channels n_out, and square k × k kernels, these formulas are E[y_l^2] = k^2 n_in E[W_l^2] E[x_l^2] and E[∆x_l^2] = k^2 n_out E[W_l^2] E[∆y_l^2] (recall our notation for the second moments is element-wise for vectors and matrices). C THE WEIGHT GRADIENT RATIO IS EQUAL TO GR SCALING FOR MLP MODELS Proposition 8. The weight-gradient ratio ν_l is equal to the GR scaling γ_l for i.i.d mean-zero randomly-initialized multilayer perceptron layers under the independence assumptions of Appendix 3.2. Proof. To see the equivalence, note that under the zero-bias initialization, the forward propagation formula above gives E[y_l^2] in terms of E[W_l^2] and E[x_l^2]. The gradient of the weights is given by ∆W_ij = ∆y_li x_lj, and so its second moment is E[∆W_l^2] = E[∆y_l^2] E[x_l^2]. Combining these quantities and dividing by E[W_l^2] recovers the GR scaling γ_l. We consider the case of a convolutional neural network with spatial resolution ρ × ρ for greater generality. Consider the Jacobian of y_l with respect to the bias, J_l^b. Each row corresponds to a y_l output, and each column a bias weight. As before, we will approximate the product of G with a random i.i.d unit variance vector r. The structure of J_l^b is that each block of ρ^2 rows has the same set of 1s in the same column, with only a single 1 per row. It follows that each entry of J_l^b r is simply an entry of r. The calculation of the product of R_l with J_l^b r is approximated in the same way as in the weight scaling calculation. For the (J_l^b)^T product, note that there is an additional factor of ρ^2, as each column has ρ^2 non-zero entries, each equal to 1. Combining these three quantities gives the bias GR scaling. Proposition 9.
Consider the setup of Proposition 5, with the addition of biases. As long as the weights are initialized following Equation 7 and the biases are initialized to 0, the bias GR scalings of the two layers are equal. We will include c = 2 as a variable as it clarifies its relation to other quantities. We reuse some calculations from Proposition 5, namely the forward and backward second moments derived there. Plugging these into the definition of γ_0^b gives the first bias scaling; for γ_1^b, we require one additional quantity, and again plugging this in gives a matching expression. There has been significant recent interest in training residual networks without the use of batch-normalization or other normalization layers . In this section, we explore the modifications that are necessary to a network for this to be possible and show how to apply our preconditioning principle to these networks. The building block of a ResNet model is the residual block x_{l+1} = x_l + F(x_l), where F is a composition of layers. Unlike classical feedforward architectures, the pass-through connection results in an exponential increase in the variance of the activations in the network as the depth increases. A side effect of this is that the output of the network becomes exponentially more sensitive to the input of the network as depth increases, a property characterized by the Lipschitz constant of the network . This exponential dependence can be reduced by the introduction of scaling constants s_l to each block, giving x_{l+1} = x_l + s_l F(x_l). The introduction of these constants requires a modification of the block structure to ensure constant conditioning between blocks. A standard bottleneck block, as used in the ResNet-50 architecture, consists of three convolutions with ReLUs between them: in this notation, C_0 is a 1 × 1 convolution that reduces the number of channels 4-fold, C_1 is a 3 × 3 convolution with equal input and output channels, and C_2 is a 1 × 1 convolution that increases the number of channels back up 4-fold to the original input count. If we introduce a scaling factor s_l to each block l, then we must also add conditioning multipliers β_l to each convolution to change their GR scaling, as we described in Section 5. The correct scaling constant depends on the scaling constant of the previous block; a simple calculation gives the required relation. The initial β_0 and s_0 may be chosen arbitrarily. If a flat s_l = s is used for all l, then we may use β_l = 1. The block structure including the β_l factors keeps the same arrangement of intermediate activations, x_1 = ReLU(y_0) and x_2 = ReLU(y_1). The weights of each convolution must then be initialized with the standard deviation modified such that the combined convolution-scaling operation gives the same output variance as would be given if the geometric-mean initialization scheme is used without extra scaling constants. For instance, the initialization of the C_0 convolution must have its standard deviation scaled by dividing by 1/β so that the multiplication by 1/β during the forward pass results in the correct forward variance. The 1/√3 factor is an α correction that corrects for the change in kernel shape for the middle convolution. The variance at initialization must be scaled to correct for the α factor also. Since the initial convolution in a ResNet-50 model is also not within a residual block, its GR scaling is different from the convolutions within residual blocks. Consider the composition of a non-residual convolution followed by a residual block, without max-pooling or ReLUs for simplicity of exposition. Without loss of generality, we assume that E[x_0^2] = 1, and assume a single channel input and output. Our goal is to find a constant α so that γ_0 = γ_1.
Recall that when using α scaling factors we must initialize C_0 so that the variance of y_0 is independent of the choice of α. Our scaling factor will also depend on the kernel sizes used in the two convolutions, so we must include those in the calculations. From Equation 1, the GR scaling for C_0 follows directly. For the residual convolution, we need to use a modification of the standard GR equation due to the residual branch. The derivation of γ for non-residual convolutions assumes that the remainder of the network above the convolution responds linearly (locally) with the scaling of the convolution, but here, due to the residual connection, this is no longer the case. For instance, if the weight were scaled to zero, the output of the network would not also become zero (recall our assumption of zero-initialization for bias terms). This can be avoided by noting that the ratio involving E[∆y^2] in the GR scaling may be computed further up the network, as long as any scaling in between is corrected for. In particular, we may compute this ratio at the point after the residual addition, as long as we include the factor s_1^4 to account for this. Equating γ_0 = γ_1 then gives the required value of α. A similar calculation applies when the residual block is before the non-residual convolution, as in the final linear layer of the ResNet network, giving a corresponding scaling factor for the linear layer (effective kernel size 1). To prevent the results from being skewed by the number of classes and the number of inputs affecting the output variance, the logit output of the network was scaled to have standard deviation 0.05 after the first minibatch evaluation for every method, with the scaling constant fixed thereafter. LayerNorm was used on the input to whiten the data. Weight decay of 0.00001 was used for every dataset. To aggregate the losses across datasets we divided by the worst loss across the initializations before averaging. LIBSVM PLOTS Figure 4 shows the interquartile range (25%, 50% and 75% quantiles) of the best learning rate for each case. Following standard practice, training used random augmentations consisting of horizontal flips and random crops to 32x32, as well as normalization to the interval [-1,+1]. We used SGD with momentum 0.9, a learning rate schedule of decreases at 150 and 225 epochs, and no weight decay. The network architecture is the following sequence, with circular "equal" padding used and ReLU nonlinearities after each convolution: 1. 11x11 stride-1 convolution with 3 input and 64 output channels, 2. 5x5 stride-2 convolution with 64 input and 192 output channels, 3. 3x3 stride-2 convolution with 192 input and 384 output channels,
A theory for initialization and scaling of ReLU neural network layers
1,758
scitldr
We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT , we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model. Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against some adversaries can also be circumvented by more clever ones. Machine learning models represent valuable intellectual property: the process of gathering training data, iterating over model design, and tuning hyperparameters costs considerable money and effort. As such, these models are often only indirectly accessible through web APIs that allow users to query a model but not inspect its parameters. Malicious users might try to sidestep the expensive model development cycle by instead locally reproducing an existing model served by such an API. In these attacks, known as "model stealing" or "model extraction" (; Tramèr et al., 2016), the adversary issues a large number of queries and uses the collected (input, output) pairs to train a local copy of the model. Besides theft of intellectual property, extracted models may leak sensitive information about the training data (Tramèr et al., 2016) or be used to generate adversarial examples that evade the model served by the API . With the recent success of contextualized pretrained representations for transfer learning, NLP APIs based on ELMo and BERT have become increasingly popular . Contextualized pretrained representations boost performance and reduce sample complexity , and they typically require only a shallow task-specific network-sometimes just a single layer as in BERT. While these properties are advantageous for representation learning, we hypothesize that they also make model extraction easier. In this paper, we demonstrate that NLP models obtained by fine-tuning a pretrained BERT model can be extracted even if the adversary does not have access to any training data used by the API provider. In fact, the adversary does not even need to issue well-formed queries: our experiments show that extraction attacks are possible even with queries consisting of randomly sampled sequences of words coupled with simple task-specific heuristics (Section 3). This contrasts with prior work, which for large-scale attacks requires at minimum that the adversary can access a small amount of semantically-coherent data relevant to the task (; ; a; ;). Extraction performance improves further by using randomly-sampled sentences and paragraphs from Wikipedia (instead of random word sequences) as queries (Section 4). These attacks are cheap; our most expensive attack cost around $500, estimated using rates of current API providers. 
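The extraction recipe sketched in Figure 1 can be summarized in a few lines of code. In the sketch below, `victim_api`, `make_query` and `encode` are hypothetical placeholders for the provider's API, the attacker's query generator, and the attacker's tokenizer; the attacker's model is assumed to expose a HuggingFace-style `.logits` output. This is an illustrative sketch of the loop, not the exact training setup used in the experiments.

```python
import torch
import torch.nn.functional as F

def extract(attacker_model, make_query, victim_api, encode,
            budget=10000, batch_size=32, lr=3e-5):
    """Fine-tune a local model on (query, victim output) pairs."""
    opt = torch.optim.AdamW(attacker_model.parameters(), lr=lr)
    queries = [make_query() for _ in range(budget)]                # step 1: build queries
    soft_labels = [torch.tensor(victim_api(q)) for q in queries]   # victim's class probabilities
    attacker_model.train()
    for start in range(0, budget, batch_size):                     # step 2: fine-tune locally
        xs = encode(queries[start:start + batch_size])             # token ids, attention mask, ...
        ys = torch.stack(soft_labels[start:start + batch_size])
        logits = attacker_model(**xs).logits
        loss = F.kl_div(F.log_softmax(logits, dim=-1), ys, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return attacker_model
```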
Step 1: Attacker randomly samples words to form queries and sends them to victim BERT model Step 2: Attacker fine-tunes their own BERT on these queries using the victim outputs as labels Figure 1: Overview of our model extraction setup for question answering. 1 An attacker first queries a victim BERT model, and then uses its predicted answers to fine-tune their own BERT model. This process works even when passages and questions are random sequences of words as shown here. We perform a fine-grained analysis of the randomly-generated queries to shed light on why they work so well for model extraction. Human studies on the random queries show that despite their effectiveness in extracting good models, they are mostly nonsensical and uninterpretable, although queries closer to the original data distribution seem to work better for extraction (Section 5.1). Furthermore, we discover that pretraining on the attacker's side makes model extraction easier (Section 5.2). Finally, we study the efficacy of two simple defenses against extraction -membership classification (Section 6.1) and API watermarking (Section 6.2) -and find that while they work well against naïve adversaries, they fail against more clever ones. We hope that our work spurs future research into stronger defenses against model extraction and, more generally, on developing a better understanding of why these models and datasets are particularly vulnerable to such attacks. We relate our work to prior efforts on model extraction, most of which have focused on computer vision applications. Because of the way in which we synthesize queries for extracting models, our work also directly relates to zero-shot distillation and studies of rubbish inputs to NLP systems. Model extraction attacks have been studied both empirically (Tramèr et al., 2016; a;) and theoretically , mostly against image classification APIs. These works generally synthesize queries in an active learning setup by searching for inputs that lie close to the victim classifier's decision boundaries. This method does not transfer to text-based systems due to the discrete nature of the input space. 2 The only prior work attempting extraction on NLP systems is , who adopt pool-based active learning to select natural sentences from WikiText-2 and extract 1-layer CNNs for tasks expecting single inputs. In contrast, we study a more realistic extraction setting with nonsensical inputs on modern BERT-large models for tasks expecting pairwise inputs like question answering. Our work is related to prior work on data-efficient distillation, which attempts to distill knowledge from a larger model to a small model with access to limited input data or in a zeroshot setting . However, unlike the model extraction setting, these methods assume white-box access to the teacher model to generate data impressions. Rubbish inputs, which are randomly-generated examples that yield high-confidence predictions, have received some attention in the model extraction literature. Prior work (Tramèr et al., 2016) reports successful extraction on SVMs and 1-layer networks using i.i.d noise, but no prior work has scaled this idea to deeper neural networks for which a single class tends to dominate model predictions on most noise inputs . Unnatural text inputs have previously been shown to produce overly confident model predictions , break translation systems , and trigger disturbing outputs from text generators . 
In contrast, here we show their effectiveness at training models that work well on real NLP tasks despite not seeing any real examples during training. What is BERT? We study model extraction on BERT, Bidirectional Encoder Representations from Transformers . BERT-large is a 24-layer transformer , f bert,θ, which converts a word sequence x = (x 1, ..., x n) of length n into a high-quality sequence of vector representations v = (v 1, ..., v n). These representations are contextualized -every vector v i is conditioned on the whole sequence x. BERT's parameters θ * are learnt using masked language modelling on a large unlabelled corpus of natural text. The public release of f bert,θ * revolutionized NLP, as it achieved state-of-the-art performance on a wide variety of NLP tasks with minimal task-specific supervision. A modern NLP system for task T typically leverages the fine-tuning methodology in the public BERT repository: 3 a task-specific network f T,φ (generally, a 1-layer feedforward network) with parameters φ expecting v as input is used to construct a composite function g T = f T,φ • f bert,θ. The final parameters φ T, θ T are learned end-to-end using training data for T with a small learning rate ("fine-tuning"), with φ initialized randomly and θ initialized with θ *. Description of extraction attacks: Assume g T (the "victim model") is a commercially available black-box API for task T. A malicious user with black-box query access to g T attempts to reconstruct a local copy g T (the "extracted model"). Since the attacker does not have training data for T, they use a task-specific query generator to construct several possibly nonsensical word sequences {x i} m 1 as queries to the victim model. The ing dataset {x i, g T (x i)} m 1 is used to train g T. Specifically, we assume that the attacker fine-tunes the public release of f bert,θ * on this dataset to obtain g T. 4 A schematic of our extraction attacks is shown in Figure 1. We extract models on four diverse NLP tasks that have different kinds of input and output spaces: binary sentiment classification using SST2 , where the input is a single sentence and the output is a probability distribution between positive and negative; ternary natural language inference (NLI) classification using MNLI , where the input is a pair of sentences and the output is a distribution between entailment, contradiction and neutral; extractive question answering (QA) using SQuAD 1.1 , where the input is a paragraph and question and the output is an answer span from the paragraph; and boolean question answering using BoolQ , where the input is a paragraph and question and the output is a distribution between yes and no. Query generators: We study two kinds of query generators, RANDOM and WIKI. In the RANDOM generator, an input query is a nonsensical sequence of words constructed by sampling 5 a Wikipedia vocabulary built from WikiText-103 . In the WIKI setting, input queries are formed from actual sentences or paragraphs from the WikiText-103 corpus. We found these generators insufficient by themselves to extract models for tasks featuring complex interactions between different parts of the input space (e.g., between premise and hypothesis in MNLI or question and paragraph in SQuAD). Hence, we additionally apply the following task-specific heuristics: • MNLI: since the premise and hypothesis often share many words, we randomly replace three words in the premise with three random words to construct the hypothesis. 
• SQuAD / BoolQ: since questions often contain words in the associated passage, we uniformly sample words from the passage to form a question. We additionally prepend a question starter word (like "what") to the question and append a ? symbol to the end (a code sketch of this heuristic is given below). For more details on the query generation, see Appendix A.3. Representative example queries and their outputs are shown in Table 1. More examples are provided in Appendix A.5. First, we evaluate our extraction procedure in a controlled setting where an attacker uses an identical number of queries as the original training dataset (Table 2); afterwards, we investigate different query budgets for each task (Table 3). We provide commercial cost estimates for these query budgets using the Google Cloud Platform's Natural Language API calculator. 6 We use two metrics for evaluation: Accuracy of the extracted models on the original development set, and Agreement between the outputs of the extracted model and the victim model on the original development set inputs. Table 3: Development set accuracy of various extracted models on the original development set at different query budgets expressed as fractions of the original dataset size. Note the high accuracies for some tasks even at low query budgets, and diminishing accuracy gains at higher budgets. In our controlled setting (Table 2), our extracted models are surprisingly accurate on the original development sets of all tasks, even when trained with nonsensical inputs (RANDOM) that do not match the original data distribution. 7 Accuracy improves further on WIKI: extracted SQuAD models recover 95% of original accuracy despite seeing only nonsensical questions during training. While extracted models have high accuracy, their agreement is only slightly better than accuracy in most cases. Agreement is even lower on held-out sets constructed using the WIKI and RANDOM sampling scheme. On SQuAD, extracted WIKI and RANDOM models have low agreements of 59.2 F1 and 50.5 F1 despite being trained on identically distributed data. This indicates poor functional equivalence between the victim and extracted model, as also found in prior work. An ablation study with alternative query generation heuristics for SQuAD and MNLI is conducted in Appendix A.4. Classification with argmax labels only: For classification datasets, we assumed the API returns a probability distribution over output classes. This information may not be available to the adversary in practice. To measure what happens when the API only provides argmax outputs, we re-run our WIKI experiments for SST2, MNLI and BoolQ with argmax labels and present our results in Table 2 (WIKI-ARGMAX). We notice a minimal drop in accuracy from the corresponding WIKI experiments, indicating that access to the output probability distribution is not crucial for model extraction. Hence, hiding the full probability distribution is not a viable defense strategy. Query efficiency: We measure the effectiveness of our extraction algorithms with varying query budgets, each a different fraction of the original dataset size, in Table 3. Even with small query budgets, extraction is often successful; while more queries are usually better, accuracy gains quickly diminish. Approximate costs for these attacks can be extrapolated from Table 2. These results bring many natural questions to mind. What properties of nonsensical input queries make them so amenable to the model extraction process? How well does extraction work for these tasks without using large pretrained language models?
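A minimal sketch of the RANDOM query heuristic for SQuAD-style tasks described above: a passage is sampled from a unigram distribution over a Wikipedia-derived vocabulary, and a question is formed from passage words, a question-starter word and a trailing question mark. The tiny vocabulary and counts below are stand-ins for WikiText-103 statistics.

```python
import random

random.seed(0)
VOCAB = ["movement", "baseline", "grammar", "treaty", "orbit", "granite"]   # stand-in vocabulary
FREQS = [5, 3, 2, 2, 1, 1]                                                  # stand-in unigram counts
STARTERS = ["What", "Who", "When", "Where", "Which", "Why", "How"]

def random_passage(length=80):
    """Nonsensical passage sampled from a unigram distribution."""
    return random.choices(VOCAB, weights=FREQS, k=length)

def random_question(passage_tokens, min_len=5, max_len=15):
    """Question built from passage words plus a starter word and '?'."""
    body = random.sample(passage_tokens, random.randint(min_len, max_len))
    return random.choice(STARTERS) + " " + " ".join(body) + "?"

passage = random_passage()
print(" ".join(passage[:15]), "...")
print(random_question(passage))
```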
In this section, we perform an analysis to answer these questions. Previously, we observed that nonsensical input queries are surprisingly effective for extracting NLP models based on BERT. Here, we dig into the properties of these queries in an attempt to understand why models trained on them perform so well. Do different victim models produce the same answer when given a nonsensical query? Are some of these queries more representative of the original data distribution than others? Did our task-specific heuristics perhaps make these nonsensical queries "interpretable" to humans in some way? We specifically examine the RANDOM and WIKI extraction configurations for SQuAD in this section to answer these questions. Do different victim models agree on the answers to nonsensical queries? We train five victim SQuAD models on the original training data with identical hyperparameters, varying only the random seed; each achieves an F1 of between 90 and 90.5. Then, we measure the average pairwise F1 ("agreement") between the answers produced by these models for different types of queries. As expected, the models agree very frequently when queries come from the SQuAD training set (96.9 F1) or development set (90.4 F1). However, their agreement drops significantly on WIKI queries (53.0 F1) and even further on RANDOM queries (41.2 F1). Are high-agreement queries closer to the original data distribution? While these indicate that on average, victim models tend to be brittle on nonsensical inputs, it is possible that highagreement queries are more useful than others for model extraction. To measure this, we sort queries from our 10x RANDOM and WIKI datasets according to their agreement and choose the highest and lowest agreement subsets, where subset size is a varying fraction of the original training data size (Figure 2). We observe large F1 improvements when extracting models using high-agreement subsets, consistently beating random and low-agreement subsets of identical sizes. This shows that agreement between victim models is a good proxy for the quality of an input-output pair for extraction. Measuring this agreement in extracted models and integrating this observation into an active learning objective for better extraction is an interesting direction that we leave to future work. Are high-agreement nonsensical queries interpretable to humans? Prior work has shown deep neural networks can leverage non-robust, uninterpretable features to learn classifiers. Our nonsensical queries are not completely random, as we do apply task-specific heuristics. Perhaps as a of these heuristics, do high-agreement nonsensical textual inputs have a human interpretation? To investigate, we asked three human annotators 9 to answer twenty SQuAD questions from each of the WIKI and RANDOM subsets that had unanimous agreement among victim models, and twenty original SQuAD questions as a control. On the WIKI subset, annotators matched the victim models' answer exactly 23% of the time (33 F1). Similarly, a 22% exact match (32 F1) was observed on RANDOM. In contrast, annotators scored significantly higher on original SQuAD questions (77% exact match, 85 F1 against original answers). Interviews with the annotators revealed a common trend: annotators used a word overlap heuristic (between the question and paragraph) to select entities as answer spans. While this heuristic partially interprets the extraction data's signal, most of the nonsensical question-answer pairs remain mysterious to humans. 
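The victim-agreement analysis described earlier in this section can be reproduced with a straightforward average-pairwise-F1 computation. The sketch below uses the standard token-level F1 (without SQuAD's answer normalization) and dummy per-model answers; it only illustrates how high-agreement queries would be scored and selected.

```python
from collections import Counter
from itertools import combinations

def token_f1(a: str, b: str) -> float:
    ta, tb = a.lower().split(), b.lower().split()
    common = sum((Counter(ta) & Counter(tb)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)

def avg_pairwise_f1(per_model_answers):
    pairs = list(combinations(per_model_answers, 2))
    return sum(token_f1(a, b) for a, b in pairs) / len(pairs)

# answers[i][j] = answer of victim model j on query i (dummy values for illustration)
answers = [["in 1876", "in 1876", "1876"], ["blue whale", "the ocean", "kelp forests"]]
scores = [avg_pairwise_f1(a) for a in answers]
keep = sorted(range(len(answers)), key=lambda i: -scores[i])[: len(answers) // 2]
print(scores, "high-agreement query indices:", keep)
```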
More details on inter-annotator agreement are provided in Appendix A.6. So far we assumed that the victim and the attacker both fine-tune a pretrained BERT-large model. However, in practical scenarios, the attacker might not have information about the victim architecture. What happens when the attacker fine-tunes a different base model than the victim? What if the attacker extracts a QA model from scratch instead of fine-tuning a large pretrained language model? Here, we examine how much the extraction accuracy depends on the pretraining setup. BERT comes in two different sizes: the 24 layer BERT-large and the 12 layer BERT-base. In Table 4, we measure the development set accuracy on MNLI and SQuAD when the victim and attacker use different configurations of these two models. Accuracy is always higher when the attacker starts from BERT-large, even when the victim was initialized with BERT-base. Additionally, given a fixed attacker architecture, accuracy is better when the victim uses the same model (e.g., if the attacker starts from BERT-base, they will have better if the victim also used BERT-base). This is reminiscent of similar discussion in Tramèr et al. What if we train from scratch? Finetuning BERT seems to give attackers a significant headstart, as only the final layer of the model is randomly initialized and the BERT parameters start from a good representation of language. To measure the importance of fine-tuning from a good starting point, we train a QANet model on SQuAD with no contextualized pretraining. This model has 1.3 million randomly initialized parameters at the start of training. Table 5 shows that QANet achieves high accuracy when original SQuAD inputs are used (TRUE X) with BERT-large labels, indicating sufficient model capacity. However, the F1 significantly degrades when training on nonsensical RANDOM and WIKI queries. The F1 drop is particularly striking when compared to the corresponding rows in Table 2 (only 4.5 F1 drop for WIKI). This reinforces our finding that better pretraining allows models to start from a good representation of language, thus simplifying extraction. Having established that BERT-based models are vulnerable to model extraction, we now shift our focus to investigating defense strategies. An ideal defense preserves API utility (b) while remaining undetectable to attackers ; furthermore, it is convenient if the defense does not require re-training the victim model. Here we explore two defenses that satisfy these properties but are also only effective against a class of weak adversaries. Our first defense uses membership inference, which determines whether a classifier was trained on a particular input point , to identify nonsensical out-ofdistribution inputs or adversarial examples that are unlikely to be issued by a legitimate user. When such inputs are detected, the API issues a random output instead of the model's predicted output, which eliminates the extraction signal. We treat membership inference as a binary classification problem, constructing datasets for MNLI and SQuAD by labeling their original training and validation examples as real and WIKI extraction examples as fake. We use the logits in addition to the final layer representations of the victim model as input features to train the classifier, as model confidence scores and rare word representations are useful for membership inference . Table 6 shows that these classifiers transfer well to a balanced development set with the same distribution as their training data (WIKI). 
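A minimal sketch of the membership classifier just described: the victim's logits are concatenated with its final-layer [CLS] representation and fed to a small binary classifier (real vs. fake). The `featurize` helper assumes a HuggingFace-style victim model and is a hypothetical stand-in, not the exact feature extraction used in the paper.

```python
import torch
import torch.nn as nn

def featurize(victim, batch):
    """Concatenate the victim's logits with its last-layer [CLS] vector."""
    with torch.no_grad():
        out = victim(**batch, output_hidden_states=True)   # HuggingFace-style call (assumed)
        cls_vec = out.hidden_states[-1][:, 0, :]
        return torch.cat([out.logits, cls_vec], dim=-1)

class MembershipClassifier(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, feats):
        return self.net(feats)

def train_step(classifier, optimizer, feats, labels):
    """labels: 1 for real task data, 0 for extraction (fake) queries."""
    loss = nn.functional.cross_entropy(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```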
These classifiers are also robust to the query generation process: accuracy remains high on auxiliary test sets where fake examples are either RANDOM (described in Section 3) or SHUFFLE, in which the word order of real examples is shuffled. An ablation study on the input features of the classifier is provided in Appendix A.7. Another defense against extraction is watermarking , in which a tiny fraction of queries are chosen at random and modified to return a wrong output. These "watermarked queries" and their outputs are stored on the API side. Since deep neural networks have the ability to memorize arbitrary information, this defense anticipates that extracted models will memorize some of the watermarked queries, leaving them vulnerable to post-hoc detection if they are deployed publicly. We evaluate watermarking on MNLI (by randomly permuting the predicted probability vector to ensure a different argmax output) and SQuAD (by returning a single-word answer which has less than 0.2 F1 overlap with the actual output). For both tasks, we watermark just 0.1% of all queries to minimize the overall drop in API performance. Table 8: Results on watermarked models. Dev Acc represents the overall development set accuracy, WM Label Acc denotes the accuracy of predicting the watermarked output on the watermarked queries, and Victim Label Acc denotes the accuracy of predicting the original labels on the watermarked queries. A watermarked WIKI model has high WM Label Acc and low Victim Label Acc. Table 8 shows that extracted models perform nearly identically on the development set (Dev Acc) with or without watermarking. When looking at the watermarked subset of the training data, however, non-watermarked models get nearly everything wrong (low WM Label Acc%) as they generally predict the victim model's outputs (high Victim Label Acc%), while watermarked models behave oppositely. Training with more epochs only makes these differences more drastic. Limitations: Watermarking works, but it can only be used after an attack has been carried out. Importantly, it assumes that an attacker will deploy an extracted model publicly with black-box query access and is thus irrelevant if the attacker instead keeps the model private. Furthermore, an attacker who anticipates watermarking might take steps to prevent detection, including differentially private training on extraction data ; fine-tuning or re-extracting an extracted model with different queries ; or issuing random outputs on queries exactly matching inputs in the extraction data. We study model extraction attacks against NLP APIs that serve BERT-based models. These attacks are surprisingly effective at extracting good models with low query budgets, even when an attacker uses nonsensical input queries. Our results show that fine-tuning large pretrained language models simplifies the process of extraction for an attacker. Unfortunately, existing defenses against extraction, while effective in some scenarios, are generally inadequate, and further research is necessary to develop defenses robust in the face of adaptive adversaries who develop counter-attacks anticipating simple defenses.
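To make the watermarking defense evaluated above concrete, here is a minimal sketch for a classification API: a small fraction of queries receive a permuted probability vector (so the argmax changes), and the watermarked pairs are logged on the API side for later verification. The rate and the permutation rule follow the description above; everything else is illustrative.

```python
import numpy as np

WATERMARK_RATE = 0.001        # 0.1% of queries, as above
watermark_log = []            # stored on the API side for later verification

def respond(query_text, probs, rng=np.random.default_rng(0)):
    """Return the model's distribution, or a permuted one for watermarked queries."""
    probs = np.asarray(probs, dtype=float)
    if rng.random() < WATERMARK_RATE:
        original = int(probs.argmax())
        perm = rng.permutation(probs)
        while int(perm.argmax()) == original:      # guarantee a different argmax
            perm = rng.permutation(probs)
        watermark_log.append((query_text, perm.tolist()))
        return perm
    return probs

print(respond("a nonsensical premise / hypothesis pair", [0.7, 0.2, 0.1]))
```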
Other interesting future directions that follow from the results in this paper include 1) leveraging nonsensical inputs to improve model distillation on tasks for which it is difficult to procure input data; 2) diagnosing dataset complexity by using query efficiency as a proxy; 3) further investigation of the agreement between victim models as a method to identify proximity in input distribution and its incorporation into an active learning setup for model extraction. We provide a distribution of agreement between victim SQuAD models on RANDOM and WIKI queries in Figure 3. In this paper, we have used the cost estimate from Google Cloud Platform's Calculator. 10 The Natural Language APIs typically allow inputs of length up to 1000 characters per query (https://cloud.google.com/natural-language/pricing). To calculate costs for different datasets, we counted input instances with more than 1000 characters multiple times. Since Google Cloud did not have APIs for all tasks we study in this paper, we extrapolated the costs of the entity analysis and sentiment analysis APIs for natural language inference (MNLI) and reading comprehension (SQuAD, BoolQ). We believe this is a reasonable estimate since every model studied in this paper is a single layer in addition to BERT-large (thereby needing a similar number of FLOPs for similar input lengths). It is hard to provide a widely applicable estimate for the price of issuing a certain number of queries. Several API providers allow a small budget of free queries. An attacker could conceivably set up multiple accounts and collect extraction data in a distributed fashion. In addition, most APIs are implicitly used on webpages - they are freely available to web users (such as Google Search or Maps). If sufficient precautions are not taken, an attacker could easily emulate the HTTP requests used to call these APIs and extract information at a large scale, free of cost ("web scraping"). Besides these factors, API costs could also vary significantly depending on the computing infrastructure involved or the revenue model of the company deploying them. Given these caveats, it is important to focus on the relatively low costs needed to extract datasets rather than the actual cost estimates. Even complex text generation tasks like machine translation and speech recognition (for which Google Cloud has actual API estimates) are relatively inexpensive: it costs around $430.56 to extract Switchboard LDC97S62 , a large conversational speech recognition dataset with 300 hours of speech, and $2000.00 to issue 1 million translation queries, each having a length of 100 characters. In this section we provide more details on the input generation algorithms adopted for each dataset. (SST2, RANDOM) - A vocabulary is built using wikitext103. The top 10000 tokens (in terms of unigram frequency in wikitext103) are preserved while the others are discarded. A length is chosen from the pool of wikitext-103 sentence lengths. Tokens are uniformly randomly sampled from the top-10000 wikitext103 vocabulary up to the chosen length. (SST2, WIKI) - A vocabulary is built using wikitext103. The top 10000 tokens (in terms of unigram frequency in wikitext103) are preserved while the others are discarded. A sentence is chosen at random from wikitext103. Words in the sentence which do not belong to the top-10000 wikitext103 vocabulary are replaced with words uniformly randomly chosen from this vocabulary. (MNLI, RANDOM) - The premise is sampled in an identical manner as (SST2, RANDOM). To construct the final hypothesis, the following process is repeated three times: i) choose a word uniformly at random from the premise, ii) replace this word with another word uniformly randomly sampled from the top-10000 wikitext103 vocabulary.
To construct the final hypothesis, the following process is repeated three times: i) choose a word uniformly at random from the premise; ii) replace this word with another word uniformly randomly sampled from the top-10000 wikitext103 vocabulary. (MNLI, WIKI) -The premise is sampled in a manner identical to (SST2, WIKI). The hypothesis is sampled in a manner identical to (MNLI, RANDOM). (SQuAD, RANDOM) -A vocabulary is built using wikitext103 and stored along with unigram probabilities for each token in the vocabulary. A length is chosen from the pool of paragraph lengths in wikitext103. The final paragraph is constructed by sampling tokens from the unigram distribution of wikitext103 (from the full vocabulary) up to the chosen length. Next, a random integer length is chosen from the range. Paragraph tokens are uniformly randomly sampled up to the chosen length to build the question. Once sampled, the question is appended with a '?' symbol and prepended with a question starter word chosen uniformly randomly from the list [A, According, After, Along, At, By, During, For, From, How, In, On, The, To, What, What's, When, Where, Which, Who, Whose, Why]. (SQuAD, WIKI) -A paragraph is chosen at random from wikitext103. Questions are sampled in a manner identical to (SQuAD, RANDOM). (BoolQ, RANDOM) -identical to (SQuAD, RANDOM). We avoid appending questions with '?' since they were absent in BoolQ. Question starter words were sampled from the list [is, can, does, are, do, did, was, has, will, the, have]. (BoolQ, WIKI) -identical to (SQuAD, WIKI). We avoid appending questions with '?' since they were absent in BoolQ. The question starter word list is identical to (BoolQ, RANDOM). In this section we study some additional query generation heuristics. In Table 11, we compare numerous extraction datasets we tried for SQuAD 1.1. Our general findings are: i) RANDOM works much better when the paragraphs are sampled from a distribution reflecting the unigram frequency in wikitext103 compared to uniform random sampling; ii) starting questions with common question starter words like "what" helps, especially with RANDOM schemes. We present a similar ablation study on MNLI in Table 12. Our general findings parallel recent work studying MNLI: i) when the lexical overlap between the premise and hypothesis is too low (when they are independently sampled), the model almost always predicts neutral or contradiction, limiting the extraction signal from the dataset; ii) when the lexical overlap is too high (the hypothesis is a shuffled version of the premise), the model generally predicts entailment, leading to an unbalanced extraction dataset; iii) when the premise and hypothesis have a few different words (edit-distance 3 or 4), datasets tend to be balanced and have strong extraction signal; iv) using frequent words (top 10000 wikitext103 words) tends to aid extraction. More examples have been provided in Table 13. For our human studies, we asked fifteen human annotators to annotate five sets of twenty questions. Annotators were English-speaking graduate students who voluntarily agreed to participate and were completely unfamiliar with our research goals. Three annotators were used per question set. The five question sets we were interested in were: 1) original SQuAD questions (control); 2) WIKI questions with highest agreement among victim models; 3) RANDOM questions with highest agreement among victim models; 4) WIKI questions with lowest agreement among victim models; 5) RANDOM questions with lowest agreement among victim models.
In Table 10 we show the inter-annotator agreement. Notice that average pairwise F1 (a measure of inter-annotator agreement) follows the order: original SQuAD >> WIKI, highest agreement > RANDOM, highest agreement ∼ WIKI, lowest agreement > RANDOM, lowest agreement. We hypothesize that this ordering roughly reflects the closeness to the actual input distribution, since a similar ordering is also observed in Figure 2. Individual annotation scores have been shown below. In this section we run an ablation study on the input features for the membership classifier. We consider two input feature candidates: 1) the logits of the BERT classifier, which are indicative of the confidence scores, and 2) the last-layer representation, which contains lexical, syntactic and some semantic information about the inputs. We present our results in Table 9. Our ablation study indicates that the last-layer representations are more effective than the logits in distinguishing between real and fake inputs. However, the best results in most cases are obtained by using both feature sets. Table 9: Ablation study of the membership classifiers. We measure accuracy on an identically distributed development set (WIKI) and differently distributed test sets (RANDOM, SHUFFLE). Note that the last-layer representations tend to be more effective in classifying points as real or fake.
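As an illustration of the ablation just described, a small sketch of a membership classifier built on the two candidate feature sets might look as follows; the synthetic arrays and the logistic-regression choice are assumptions made for demonstration, not the exact classifier used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n, num_classes, hidden = 2000, 3, 768

# Stand-ins for the victim model's outputs: logits and a pooled last-layer
# representation, for fake (RANDOM-like, label 0) and real (WIKI-like, label 1) queries.
logits = rng.randn(n, num_classes)
last_layer = rng.randn(n, hidden) + np.repeat([0.0, 0.1], n // 2)[:, None]
is_real = np.repeat([0, 1], n // 2)

def features(logits, reps, use_logits=True, use_reps=True):
    """Feature sets for the membership classifier, mirroring the ablation above."""
    parts = ([logits] if use_logits else []) + ([reps] if use_reps else [])
    return np.concatenate(parts, axis=1)

for use_logits, use_reps in [(True, False), (False, True), (True, True)]:
    X = features(logits, last_layer, use_logits, use_reps)
    clf = LogisticRegression(max_iter=1000).fit(X[::2], is_real[::2])  # train on even rows
    acc = clf.score(X[1::2], is_real[1::2])                            # evaluate on odd rows
    print(f"logits={use_logits}, last_layer={use_reps}: dev acc {acc:.2f}")
```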
Outputs of modern NLP APIs on nonsensical text provide strong signals about model internals, allowing adversaries to steal the APIs.
1,759
scitldr
We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task. Recurrent neural networks (RNNs) have been quite successful in structured prediction applications such as machine translation BID27, parsing BID1 or caption generation. These models use the same repeated cell (or unit) to output a sequence of tokens one by one. As each prediction takes into account all previous predictions, this cell learns to output the next token conditioned on the previous ones. The standard training loss for RNNs is derived from maximum likelihood estimation (MLE): we consider that the cell outputs a probability distribution at each step in the sequence, and we seek to maximize the probability of the ground truth. Unfortunately, this training loss is not a particularly close surrogate to the various test errors we want to minimize. A striking example of discrepancy is that the MLE loss is close to 0/1: it makes no distinction between candidates that are close or far away from the ground truth (with respect to the structured test error), thus failing to exploit valuable information. Another example of train/test discrepancy is called exposure or exploration bias BID22: in traditional MLE training the cell learns the conditional probability of the next token, based on the previous ground truth tokens -this is often referred to as teacher forcing. However, at test time the model does not have access to the ground truth, and thus feeds its own previous predictions to its next cell for prediction instead. Improving RNN training thus appears as a relevant endeavor, which has received much attention recently. In particular, ideas coming from reinforcement learning (RL), such as the REINFORCE and ACTOR-CRITIC algorithms BID22 BID0, have been adapted to derive training losses that are more closely related to the test error that we actually want to minimize. In order to address the issues of MLE training, we propose instead to use ideas from the structured prediction field, in particular from the "learning to search" (L2S) approach introduced by BID8 and later refined by BID24 and BID5 among others. Contributions. In Section 2, we review the limitations of MLE training for RNNs in details. We also clarify some related claims made in the recent literature. In Section 3, we make explicit the strong links between RNNs and the L2S approach. In Section 4, we present SEARNN, a novel training algorithm for RNNs, using ideas from L2S to derive a global-local loss that is much closer to the test error than MLE. 
We demonstrate that this novel approach leads to significant improvements on two difficult structured prediction tasks, including a spelling correction problem recently introduced in BID0. As this algorithm is quite costly, we investigate scaling solutions in Section 5. We explore a subsampling strategy that allows us to considerably reduce training times, while maintaining improved performance compared to MLE. We apply this new algorithm to machine translation and report significant improvements in Section 6. Finally, we contrast our novel approach to the related L2S and RL-inspired methods in Section 7. RNNs are a large family of neural network models aimed at representing sequential data. To do so, they produce a sequence of states (h_1, ..., h_T) by recursively applying the same transformation (or cell) f on the sequential data: h_t = f(h_{t-1}, y_{t-1}, x), with h_0 an initial state and x an optional input. Many possible design choices fit this framework. We focus on a subset typically used for structured prediction, where we want to model the joint probability of a target sequence (y_1, ..., y_{T_x}) ∈ A^{T_x} given an input x (e.g. the decoder RNN in the encoder-decoder architecture BID27 BID6). Here A is the alphabet of output tokens and T_x is the length of the output sequence associated with input x (though T_x may take different values, in the following we drop the dependency in x and use T for simplicity). To achieve this modeling, we feed h_t through a projection layer (i.e. a linear classifier) to obtain a vector of scores s_t over all possible tokens a ∈ A, and normalize these with a softmax layer (an exponential normalizer) to obtain a distribution o_t over tokens: h_t = f(h_{t-1}, y_{t-1}, x); s_t = proj(h_t); o_t = softmax(s_t) for all 1 ≤ t ≤ T. The vector o_t is interpreted as the predictive conditional distribution for the t-th token given by the RNN model, i.e. p(a | y_1, ..., y_{t-1}, x) := o_t(a) for a ∈ A. Multiplying the values o_t(y_t) together thus yields the joint probability of the sequence y defined by the RNN (thanks to the chain rule): p(y_1, ..., y_T | x) = ∏_{t=1}^{T} o_t(y_t). As pointed out by BID11, the underlying structure of these RNNs as graphical models is thus a complete graph, and there is no conditional independence assumption to simplify the difficult prediction task of computing arg max_{y∈Y} p(y|x). In practice, one typically uses either beam search to approximate this decoding, or a sequence of greedy predictions ŷ_t := arg max_{a∈A} p(a | ŷ_1, ..., ŷ_{t-1}, x). If we use the "teacher forcing" regimen, where the inputs to the RNN cell are the ground truth tokens (as opposed to its own greedy predictions), we obtain the probability of each ground truth sequence according to the RNN model. We can then use MLE to derive a loss to train the RNN. One should note here that despite the fact that the individual output probabilities are at the token level, the MLE loss involves the joint probability (computed via the chain rule) and is thus at the sequence level. The limitations of MLE training. While this maximum likelihood style of training has been very successful in various applications, it suffers from several known issues, especially for structured prediction problems. The first one is called exposure or exploration bias BID22. During training (with teacher forcing), the model learns the probabilities of the next tokens conditioned on the ground truth. But at test time, the model does not have access to the ground truth, and its output probabilities are conditioned on its own previous predictions instead.
Therefore if the predictions differ from the ground truth, the model has to continue based on an exploration path it has not seen during training, which means that it is less likely to make accurate predictions. This phenomenon, which is typical of sequential prediction tasks BID17 BID8 ) can lead to a compounding of errors, where mistakes in prediction accumulate and prevent good performance. The second major issue is the discrepancy between the training loss and the various test errors associated with the tasks for which RNNs are used (e.g. edit distance, F1 score...). Of course, a single surrogate is not likely to be a good approximation for all these errors. One salient illustration of that fact is that MLE ignores the information contained in structured losses. As it only focuses on maximizing the probability of the ground truth, it does not distinguish between a prediction that is very close to the ground truth and one that is very far away. Thus, most of the information given by a structured loss is not leveraged when using this approach. Local vs. sequence-level. Some recent papers BID22 ) also point out the fact that since RNNs output next token predictions, their loss is local instead of sequence-level, contrary to the error we typically want to minimize. This claim seems to contradict the standard RNN analysis, which postulates that the underlying graphical model is the complete graph: that is, the RNN outputs the probability of the next tokens conditioned on all the previous predictions. Thanks to the chain rule, one recovers the probability of the whole sequence. Thus the maximum likelihood training loss is indeed a sequence level loss, even though we can decompose it in a product of local losses at each cell. However, if we assume that the RNN outputs are only conditioned on the last few predictions (instead of all previous ones), then we can indeed consider the MLE loss as local. In this setting, the underlying graphical model obeys Markovian constraints (as in maximum entropy Markov models (MEMMs)) rather than being the complete graph; this corresponds to the assumption that the information from the previous inputs is imperfectly carried through the network to the cell, preventing the model from accurately representing long-term dependencies. Given all these limitations, exploring novel ways of training RNNs appears to be a worthy endeavor, and this field has attracted a lot of interest in the past few years. While many papers try to adapt ideas coming from the reinforcement learning literature, we instead focus in this paper on the links we can draw with structured prediction, and in particular with the L2S approach. The L2S approach to structured prediction was first introduced by BID8. The main idea behind it is a learning reduction BID3: transforming a complex learning problem (structured prediction) into a simpler one that we know how to solve (multiclass classification). To achieve this, BID8 propose in their SEARN algorithm to train a shared local classifier to predict each token sequentially (conditioned on all inputs and all past decisions), thus searching greedily step by step in the big combinatorial space of structured outputs. The idea that tokens can be predicted one at a time, conditioned on their predecessors, is central to this approach. The training procedure is iterative: at the beginning of each round, one uses the current model (or policy 1) to build an intermediate dataset to train the shared classifier on. 
The specificity of this new dataset is that each new sample is accompanied by a cost vector containing one entry per token in the output vocabulary A. To obtain these cost vectors, one starts by applying a roll-in policy to predict all the tokens up to T, thus building one trajectory (or exploration path) in the search space per sample in the initial dataset. Then, at each time step t, one picks arbitrarily each possible token (diverging from the roll-in trajectory) and then continues predicting to finish the modified trajectory using a roll-out policy. One then computes the cost of all the obtained sequences, and ends up with T vectors (one per time step) of size |A| (the number of possible tokens) for every sample. FIG0 describes the same process for our SEARNN algorithm (although in this case the shared classifier is an RNN).One then extracts features from the "context" at each time step t (which encompasses the full input and the previous tokens predicted up to t during the roll-in).2 Combining the cost vectors to these features yields the new intermediary dataset. The original problem is thus reduced to multi-class cost-sensitive classification. Once the shared classifier has been fully trained on this new dataset, the policy is updated for the next round. The algorithm is described more formally in Algorithm 2 (see Appendix A). Theoretical guarantees for various policy updating rules are provided by e.g. BID8 and BID5. FIG0: Illustration of the roll-in/roll-out mechanism used in SEARNN. The goal is to obtain a vector of costs for each cell of the RNN in order to define a cost-sensitive loss to train the network. These vectors have one entry per possible token. Here, we show how to obtain the vector of costs for the red cell. First, we use a roll-in policy to predict until the cell of interest. We highlight here the learned policy where the network passes its own prediction to the next cell. Second, we proceed to the roll-out phase. We feed every possible token (illustrated by the red letters) to the next cell and let the model predict the full sequence. For each token a, we obtain a predicted sequenceŷa. Comparing it to the ground truth sequence y yields the associated cost c(a).Roll-in and roll-out policies. The policies used to create the intermediate datasets fulfill different roles. The roll-in policy controls what part of the search space the algorithm explores, while the roll-out policy determines how the cost of each token is computed. The main possibilities for both roll-in and roll-out are explored by BID5. The reference policy tries to pick the optimal token based on the ground truth. During the roll-in, it corresponds to picking the ground truth. For the roll-out phase, while it is easy to compute an optimal policy in some cases (e.g. for the Hamming loss where simply copying the ground truth is also optimal), it is often too expensive (e.g. for BLEU score). One then uses a heuristic (in our experiments the reference policy is to copy the ground truth for both roll-in and roll-out unless indicated otherwise). The learned policy simply uses the current model instead, and the mixed policy stochastically combines both. According to BID5, the best combination when the reference policy is poor is to use a learned roll-in and a mixed roll-out. Links to RNNs. One can identify the following interesting similarities between a greedy approach to RNNs and L2S. Both models handle sequence labeling problems by outputting tokens recursively, conditioned on past decisions. 
Further, the RNN "cell" is shared at each time step and can thus also be seen as a shared local classifier that is used to make structured predictions, as in the L2S framework. In addition, there is a clear equivalent to the choice of roll-in policy in RNNs. Indeed, teacher forcing (conditioning the outputs on the ground truth) can be seen as the roll-in reference policy for the RNN. Instead, if one conditions the outputs on the previous predictions of the model, then we obtain a roll-in learned policy. Despite these connections, many differences remain. Amongst them is the fact that no roll-outs are involved in standard RNN training. We thus consider next whether ideas coming from L2S could mitigate the limitations of MLE training for RNNs. In particular, one key property of L2S worth porting over to RNN training is that the former fully leverages structured loss information, contrary to MLE as previously noted. Since we are interested in leveraging structured loss information, we can try to obtain it in the same fashion as L2S. The main tool that L2S uses in order to construct a cost-sensitive dataset is the roll-out policy. In many classical structured prediction use cases, one does not need to follow through with a policy because the "cost-to-go" that the roll-out yields is either free or easily computable from the ground truth. We are however also interested in cases where this information is unavailable, and roll-outs are needed to approximate it (e.g. for machine translation). This leads to several questions. How can we integrate roll-outs in an RNN model? How do we use this additional information, i.e. what loss do we use to train the model on? How do we make it computationally tractable? The SEARNN Algorithm. The basic idea of the SEARNN algorithm is quite simple: we borrow from L2S the idea of using a global loss for each local cell of the RNN. As in L2S, we first compute a roll-in trajectory, following a specific roll-in policy. Then, at each step t of this trajectory, we compute the costs c_t(a) associated with each possible token a. To do so we pick a at this step and then follow a roll-out policy to finish the output sequence ŷ_a. We then compare ŷ_a with the ground truth using the test error itself, rather than a surrogate. By repeating this for the T steps we obtain T cost vectors. We use this information to derive one cost-sensitive training loss for each cell, which allows us to compute an update for the parameters of the model. The full process for one cell is illustrated in FIG0. Our losses are global-local, in the sense that they appear at the local level but all contain sequence-level information. Our final loss is the sum over the T local losses. We provide the pseudo-code for SEARNN in Algorithm 1; its core steps are as follows: store the sequence of hidden states in order to perform several roll-outs; then, for t in 1 to T (roll-outs for all actions in order to collect the cost vector at the t-th cell) and for a in 1 to A, pick a decoding method (e.g. greedy or beam search) and run the RNN from the t-th cell to the end by first enforcing action a at cell t and then following the decoding method; finally, update the parameters of the network ω by doing a single gradient step. Choosing a multi-class classifier. SEARNN appears quite similar to L2S, but there are a few key differences that merit more explanation.
As the RNN cell can serve as a multi-class classifier, in SEARNN we could pick the cell as a (shallow) shared classifier, whose input are features extracted from the full context by the previous cells of the RNN. Instead, we pick the RNN itself, thus getting a (deep) shared classifier that also learns the features directly from the context. The difference between the two options is more thoroughly detailed in Appendix B. Arbitrarily picking a token a during the roll-out phase can then be done by emulating the teacher forcing technique: if predicted tokens are fed back to the model (say if the roll-out policy requires it), we use a for the next cell (instead of the prediction the cell would have output). We also use a in the output sequence before computing the cost. Choosing a cost-sensitive loss. We now also explain our choice for the training loss function derived from the cost vectors. One popular possibility from L2S is to go the full reduction route down to binary classification. However, this technique involves creating multiple new datasets (which is hard to implement as part of a neural network), as well as training |A| 2 binary classifiers. Instead, we simply work with the multi-class classifier encoded by the RNN cell with training losses defined next. We now introduce two of the more successful losses we used (although we experimented with many others, which are detailed in Appendix C.1). In the following, each loss is defined at the cell level. The global loss is the sum of all T losses. s t (a) refers to the score output by cell t for token a. A central idea in L2S is to learn the target tokens the model should aim for. This is more meaningful than blindly imposing the ground truth as target, in particular when the model has deviated from the ground truth trajectory. BID10 refer to this technique as using dynamic oracles. In the context of RNN training, we call this approach target learning. Our first loss is thus a simple log-loss with the minimal cost token as target: DISPLAYFORM0 It is structurally similar to MLE. The only difference is that instead of maximizing the probability of the ground truth action, we maximize the probability of the best performing action with respect to the cost vector. This similarity is a significant advantage from an optimization perspective: as RNNs have mostly been trained using MLE, this allows us to leverage decades of previous work. Note that when the reference policy is to simply copy the ground truth (which is sometimes optimal, e.g. when the test error is the Hamming loss), a is always the ground truth token. LL with reference roll-in and roll-out is in this case equivalent to MLE. The log-loss approach appears to be relatively wasteful with the structured information we have access to since we are only using the minimal cost value. To exploit this information more meaningfully, we consider the following approach: we convert each cost vector into a probability distribution (e.g. through a softmax operator) and then minimize a divergence between the current model distribution P M and the "target distribution" P C derived from the costs. As the MLE objective itself can be expressed as the KL divergence between D gt (a Dirac distribution with full mass on the ground truth) and P M, we also choose to minimize the KL divergence between P C and P M. Since the costs are considered fixed with respect to the parameters of the model, our loss is equivalent to the cross-entropy between P C and P M. 
L_t(s_t; c_t) = KL(P_C ‖ P_M), with P_C(a) = exp(−α c_t(a)) / Σ_{i∈A} exp(−α c_t(i)) and P_M = softmax(s_t); α is a scaling parameter that controls how peaky the target distributions are. It can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Compared to LL, KL leverages the structured loss information more directly and thus mitigates the 0/1 nature of MLE better. Optimization. Another difference between SEARN and RNNs is that RNNs are typically trained using stochastic gradient descent, whereas SEARN is a batch method. In order to facilitate training, we decide to adapt the optimization process of LOLS, an online variant of SEARN introduced by BID5. At each round, we select a random mini-batch of samples, and then take a single gradient step on the parameters with the associated loss (contrary to SEARN where the reduced classifier is fully trained at each round). Note that we do not need the test error to be differentiable, as our costs c_t(a) are fixed when we minimize our training loss. This corresponds to defining a different loss at each round, which is the way it is done in L2S. In this case our gradient is unbiased. However, if instead we consider that we define a single loss for the whole procedure, then the costs depend on the parameters of the model and we effectively compute an approximation of the gradient. Whether it is possible not to fix the costs and to backpropagate through the roll-in and roll-out remains an open problem. Expected benefits. SEARNN can improve performance because of a few key properties. First, our losses leverage the test error, leading to potentially much better surrogates than MLE. Second, all of our training losses (even plain LL) leverage the structured information that is contained in the computed costs. This is much more satisfactory than MLE, which does not exploit this information and ignores nuances between good and bad candidate predictions. Indeed, our hypothesis is that the more complex the error is, the more SEARNN can improve performance. Third, the exploration bias we find in teacher forcing can be mitigated by using a "learned" roll-in policy, which may be the best roll-in policy for L2S applications according to BID5. Fourth, the loss at each cell is global, in the sense that the computed costs contain information about full sequences. This may help with the classical vanishing gradients problem that is prevalent in RNN training and motivated the introduction of specialized cells such as LSTMs BID14 or GRUs BID6. We provide the number of actions A and the maximum sequence length T in Table 1. Note that we use 0.5 as the mixing probability for the mixed roll-out policy. We ran the ACTOR-CRITIC algorithm from BID0 on our data splits for the spelling task and report the results in the AC column (the results reported in BID0 were not directly comparable as they used a different random test dataset each time). Experiments. In order to validate these theoretical benefits, we ran SEARNN on two datasets and compared its performance against that of MLE. For a fair comparison, we use the same optimization routine for all methods. We pick the one that performs best for the MLE baseline. Note that in all the experiments of the paper, we use greedy decoding, both for our cost computation and for evaluation. Furthermore, whenever we use a mixed roll-out we always use 0.5 as our mixing parameter, following BID5. The first dataset is the optical character recognition (OCR) dataset introduced in. The task is to output English words given an input sequence of handwritten characters.
We use an encoder-decoder model with GRU cells BID6 of size 128. For all runs, we use SGD with constant step-size 0.5 and batch size of 64. The cost used in the SEARNN algorithm is the Hamming error. We report the total Hamming error, normalized by the total number of characters on the test set. The second dataset is the Spelling dataset introduced in BID0. The task is to recover correct text from a corrupted version. This dataset is synthetically generated from a text corpus (One Billion Word dataset): for each character, we decide with some fixed probability whether or not to replace it with a random one. The total number of tokens A is 43 (alphabet size plus a few special characters) and the maximum sequence length T is 10 (sentences from the corpus are clipped). We provide results for two sub-datasets generated with the following replacement probabilities: 0.3 and 0.5. For this task, we follow BID0 and use the edit distance as our cost. It is defined as the edit distance between the predicted sequence and the ground truth sequence divided by the ground truth length. We reuse the attention-based encoder-decoder model with GRU cells of size 100 described in BID0. For all runs, we use the Adam optimizer BID16 with learning rate 0.001 and batch size of 128. Results are given in Table 1, including ACTOR-CRITIC BID0 runs on our data splits as an additional baseline. Key takeaways. First, SEARNN outperforms MLE by a significant margin on the two different tasks and datasets, which confirms our intuition that taking structured information into account enables better performance. Second, we observed that the best performing losses were those structurally close to MLE -LL and KL -whereas others (detailed in Appendix C.1) did not improve results. This might be explained by the fact that RNN architectures and optimization techniques have been evolving for decades with MLE training in mind. Third, the best roll-in/out strategy appears to be combining a learned roll-in and a mixed roll-out, which is consistent with the claims from BID5. Fourth, although we expect SEARNN to make stronger improvements over MLE on hard tasks (where a simplistic roll-out policy -akin to MLE -is suboptimal), we do get improvements even when outputting the ground truth (regardless of the current trajectory) is the optimal policy. While SEARNN does provide significant improvements on the two tasks we have tested it on, it comes with a rather heavy price, since a large number of roll-outs (i.e. forward passes) have to be run in order to compute the costs. This number, |A|T, is proportional both to the length of the sequences and to the number of possible tokens. SEARNN is therefore not directly applicable to tasks with large output sequences or vocabulary size (such as machine translation) where computing so many forward passes becomes a computational bottleneck. Even though forward passes can be parallelized more heavily than backward ones (because they do not require maintaining activations in memory), their asymptotic cost remains in O(dT), where d is the number of parameters of the model. (In the subsampling results of TAB2, sLL and sKL denote the subsampled versions of the LL and the KL losses; all of those experiments were run with a learned roll-in and a mixed roll-out.) There are a number of ways to mitigate this issue. In this paper, we focus on subsampling both the cells and the tokens when computing the costs. That is, instead of computing a cost vector for each cell, we only compute them for a subsample of all cells.
Similarly, we also compute these costs only for a small portion of all possible tokens. The speedups we can expect from this strategy are large, since the total number of roll-outs is proportional to both the quantities we are decreasing. Sampling strategies. First, we need to decide how we select the steps and tokens that we sample. We have chosen to sample steps uniformly when we do not take all of them. On the other hand, we have explored several different possibilities for token sampling. The first is indeed the uniform sampling strategy. The 3 alternative samplings we tried use the current state of our model: stochastic current policy sampling (where we use the current state of the stochastic policy to pick at random), a biased version of current policy sampling where we boost the scores of the low-probability tokens, and finally a top-k strategy where we take the top k tokens according to the current policy. Note that the latter strategy (top-k) can be seen as a simplified variant of targeted sampling BID12, another smarter strategy introduced to help L2S methods scale. Finally, in all strategies we always sample the ground truth action to make sure that our performance is at least as good as MLE.Adapting our losses to sampling. Our losses require computing the costs of all possible tokens at a given step. One could still use LL by simply making the assumption that the token with minimum cost is always sampled. However this is a rather strong assumption and it means pushing down the scores of tokens that were not even sampled and hence could not compete with the others. To alleviate this issue, we replace the full softmax by a layer applied only on the tokens that were sampled BID15. While the target can still only be in the sampled tokens, the unsampled tokens are left alone by the gradient update, at least for the first order dependency. This trick is even more needed for KL, which otherwise requires a "default" score for unsampled tokens, adding a difficult to tune hyperparameter. We refer to these new losses as sLL and sKL.Experiments. The main goal of these experiments is to assess whether or not combining subsampling with the SEARNN algorithm is a viable strategy. To do so we ran the method on the same two datasets that we used in the previous section. We decided to only focus on subsampling tokens as the vocabulary size is usually the blocking factor rather than the sequence length. Thus we sampled all cells. We evaluate different sampling strategies and training losses. For all experiments, we use the learned policy for roll-in and the mixed one for roll-out and we sample 5 tokens per cell. Finally, we use the same optimization techniques than in the previous experiment. Key takeaways. Results are given in TAB2. The analysis of this experiment yields interesting observations. First, and perhaps most importantly, subsampling appears to be a viable strategy to obtain a large part of the improvements of SEARNN while keeping computational costs under control. Indeed, we recover all of the improvements of the full method while only sampling a fraction of all possible tokens. Second, it appears that the best strategy for token sampling depends on the chosen loss. In the case of sLL, the top-k strategy performs best, whereas sKL favors the biased current policy. Third, it also seems like the best performing loss is task-dependent. Finally, this sampling technique yields a 5× running time speedup, therefore validating our scaling approach. 
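A minimal PyTorch-style sketch of the subsampled losses just described (sLL and sKL) for a single cell is given below; the function names, the toy data, and the convention of passing costs only for the sampled tokens are assumptions made for illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sampled_ll_loss(scores, costs, sampled_tokens):
    """Subsampled log-loss (sLL) for one cell: restrict the softmax to the sampled
    tokens and use the minimum-cost sampled token as the target.
    scores: (V,) raw scores s_t over the full vocabulary.
    costs:  (k,) roll-out costs c_t for the k sampled tokens.
    sampled_tokens: (k,) vocabulary indices (always including the ground truth)."""
    sub_scores = scores[sampled_tokens]
    target = torch.argmin(costs)
    return F.cross_entropy(sub_scores.unsqueeze(0), target.unsqueeze(0))

def sampled_kl_loss(scores, costs, sampled_tokens, alpha=1.0):
    """Subsampled KL loss (sKL): match the model distribution over the sampled
    tokens to a softmax of the negative scaled costs."""
    sub_scores = scores[sampled_tokens]
    target_dist = F.softmax(-alpha * costs, dim=0)
    log_model = F.log_softmax(sub_scores, dim=0)
    return F.kl_div(log_model, target_dist, reduction="sum")

# Toy usage with a vocabulary of 50 tokens and 5 sampled tokens.
scores = torch.randn(50, requires_grad=True)
sampled = torch.tensor([3, 7, 12, 30, 42])   # e.g. ground truth plus 4 sampled tokens
costs = torch.rand(5)
loss = sampled_ll_loss(scores, costs, sampled) + sampled_kl_loss(scores, costs, sampled)
loss.backward()
```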
TAB4 compares SEARNN with related methods, including ACTOR-CRITIC BID0, on the IWSLT 14 German to English machine translation dataset. The asterisk (*), dagger (†) and apostrophe (') indicate results reproduced from Ranzato et al. FORMULA0, and BID0, respectively. We use a reference roll-in and a mixed roll-out for SEARNN, along with the subsampled version of the KL loss and a scaling factor of 200. SEARNN (conv) indicates that we used a convolutional encoder instead of a recurrent one for fair comparison with MIXER. 6 NEURAL MACHINE TRANSLATION. Having introduced a cheaper alternative SEARNN method, we can now apply it to a large-scale structured prediction task and thus investigate whether our algorithm also improves upon MLE in more challenging real-life settings. We choose neural machine translation as our task, and the German-English translation track of the IWSLT 2014 campaign BID4 as our dataset, as it was used in several related papers and thus allows for easier comparisons. We reuse the pre-processing of Ranzato et al. FORMULA0 BID22. We use Adam as our optimizer, with an initial learning rate of 10^-3 gradually decreasing to 10^-5, and a batch size of 64. We select the best models on the validation set and report results both without and with dropout (0.3). Regarding the specific settings of SEARNN, we use a reference roll-in and a mixed roll-out. Additionally, we sample 25 tokens at each cell, following a mixed sampling strategy (detailed in Appendix C.2). We use the best performing loss on the validation set, i.e. the KL loss with scaling parameter 200. The traditional evaluation metric for such tasks is the BLEU score BID20. As we cannot use this corpus-wide metric to compute our sentence-level intermediate costs, we adopt the alternative smoothed BLEU score of BID0 as our cost. We use a custom reference policy (detailed in Appendix C.2). We report the corpus-wide BLEU score on the test set in TAB4. Key takeaways. First, the significant improvements SEARNN obtains over MLE on this task (2 BLEU points without dropout) show that the algorithm can be profitably applied to large-scale, challenging structured prediction tasks at a reasonable computational cost. Second, our performance is on par with or better than that of related methods with comparable baselines. Our performance using a convolutional encoder is similar to that of MIXER. Compared to BSO, our baseline, absolute performance and improvements are all stronger. While SEARNN presents similar improvements to ACTOR-CRITIC, the absolute performance is slightly worse. This can be explained in part by the fact that SEARNN requires half as many parameters during training. Finally, the learned roll-in policy performed poorly for this specific task, so we used a reference roll-in instead. While this observation seems to go against the L2S analysis from BID5, it is consistent with another experiment we ran: we tried applying scheduled sampling -which uses a schedule of mixed roll-ins -on this dataset, but did not succeed in obtaining any improvements, despite using a careful schedule as proposed by its authors in private communications. One potential factor is that our reference policy is not good enough to yield valuable signal when starting from a poor roll-in. Another possibility is that the underlying optimization problem becomes harder when using a learned rather than a reference roll-in. We now contrast SEARNN to several related algorithms, including traditional L2S approaches (which are not adapted to RNN training), and RNN training methods inspired by L2S and RL. Traditional L2S approaches.
Although SEARNN is heavily inspired by SEARN, it is actually closer to LOLS BID5, another L2S algorithm. As LOLS, SEARNN is a meta-algorithm where roll-in/roll-out strategies are customizable (we explored most combinations in our experiments). Our findings are in agreement with those of BID5: we advocate using the same combination, that is, a learned roll-in and a mixed roll-out. The one exception to this rule of thumb is when the associated reduced problem is too hard (as seems to be the case for machine translation), in which case we recommend switching to a reference roll-in. Moreover, as noted in Section 4, SEARNN adapts the optimization process of LOLS (the one difference being that our method is stochastic rather than online): each intermediate dataset is only used for a single gradient step. This means the policy interpolation is of a different nature than in SEARN where intermediate datasets are optimized for fully and the ing policy is mixed with the previous one. However, despite the similarities we have just underlined, SEARNN presents significant differences from these traditional L2S algorithms. First off, and most importantly, SEARNN is a full integration of the L2S ideas to RNN training, whereas previous methods cannot be used for this purpose directly. Second, in order to achieve this adaptation we had to modify several design choices, including:• the intermediate dataset construction, which significantly differs from traditional L2S; 3• the careful choice of a classifier (those used in the L2S literature do not fit RNNs well);• the design of tailored surrogate loss functions that leverage cost information while being easy to optimize in RNNs. L2S-inspired approaches. Several other papers have tried using L2S-like ideas for better RNN training, starting with which introduces "scheduled sampling" to avoid the exposure bias problem. The idea is to start with teacher forcing and to gradually use more and more model predictions instead of ground truth tokens during training. This is akin to a mixed roll-in -an idea which also appears in BID8 ).Wiseman & Rush (2016, BSO) adapt one of the early variants of the L2S framework: the "Learning A Search Optimization" approach of Daumé & Marcu (2005, LASO) to train RNNs. However LASO is quite different from the more modern SEARN family of algorithms that we focus on: it does not include either local classifiers or roll-outs, and has much weaker theoretical guarantees. Additionally, BSO's training loss is defined by violations in the beam-search procedure, yielding a very different algorithm from SEARNN. Furthermore, BSO requires being able to compute a meaningful loss on partial sequences, and thus does not handle general structured losses unlike SEARNN. Finally, its ad hoc surrogate objective provides very sparse sequence-level training signal, as mentioned by their authors, thus requiring warm-start. BID1 use a loss that is similar to LL for parsing, a specific task where cost-to-go are essentially free. This property is also a requirement for BID26, in which new gradient procedures are introduced to incorporate neural classifiers in the AGGREVATE BID24 variant of L2S. 4 In contrast, SEARNN can be used on tasks without a free cost-to-go oracle. RL-inspired approaches. In structured prediction tasks, we have access to ground truth trajectories, i.e. a lot more information than in traditional RL. One major direction of research has been to adapt RL techniques to leverage this additional information. 
The main idea is to try to optimize the expectation of the test error directly (under the stochastic policy parameterized by the RNN): DISPLAYFORM0 Since we are taking an expectation over all possible structured outputs, the only term that depends on the parameters is the probability term (the tokens in the error term are fixed). This allows this loss function to support non-differentiable test errors, which is a key advantage. Of course, actually computing the expectation over an exponential number of possibilities is computationally intractable. To circumvent this issue, BID25 subsample trajectories according to the learned policy, while BID22; BID23 use the REINFORCE algorithm, which essentially approximates the expectation with a single trajectory sample. BID0 adapt the ACTOR-CRITIC algorithm, where a second critic network is trained to approximate the expectation. While all these approaches report significant improvement on various tasks, one trait they share is that they only work when initialized from a good pre-trained model. This phenomenon is often explained by the sparsity of the information contained in "sequence-level" losses. Indeed, in the case of REINFORCE, no distinction is made between the tokens that form a sequence: depending on whether the sampled trajectory is above a global baseline, all tokens are pushed up or down by the gradient update. This means good tokens are sometimes penalized and bad tokens rewarded. In contrast, SEARNN uses "global-local" losses, with a local loss attached to each step, which contains global information since the costs are computed on full sequences. To do so, we have to "sample" more trajectories through our roll-in/roll-outs. As a , SEARNN does not require warm-starting to achieve good experimental performance. This distinction is quite relevant, because warm-starting means initializing in a specific region of parameter space which may be hard to escape. Exploration is less constrained when starting from scratch, leading to potentially larger gains over MLE.RL-based methods often involve optimizing additional models (baselines for REINFORCE and the critic for ACTOR-CRITIC), introducing more complexity (e.g. target networks). SEARNN does not. Finally, while maximizing the expected reward allows the RL approaches to use gradient descent even when the test error is not differentiable, it introduces another discrepancy between training and testing. Indeed, at test time, one does not decode by sampling from the stochastic policy. Instead, one selects the "best" sequence (according to a search algorithm, e.g. greedy or beam search). SEARNN avoids this averse effect by computing costs using deterministic roll-outs -the same decoding technique as the one used at test time -so that its loss is even closer to the test loss. The associated price is that we approximate the gradient by fixing the costs, although they do depend on the parameters. RAML BID19 ) is another RL-inspired approach. Though quite different from the previous papers we have cited, it is also related to SEARNN. Here, in order to mitigate the 0/1 aspect of MLE training, the authors introduce noise in the target outputs at each iteration. The amount of random noise is determined according to the associated reward (target outputs with a lot of noise obtain lower rewards and are thus less sampled). This idea is linked to the label smoothing technique BID28, where the target distribution at each step is the addition of a Dirac (the usual MLE target) and a uniform distribution. 
In this sense, when using the KL loss SEARNN can be viewed as doing learned label smoothing, where we compute the target distribution from the intermediate costs rather than arbitrarily adding the uniform distribution. Conclusion and future work. We have described SEARNN, a novel algorithm that uses core ideas from the learning to search framework in order to alleviate the known limitations of MLE training for RNNs. By leveraging structured cost information obtained through strategic exploration, we define global-local losses. These losses provide a global feedback related to the structured task at hand, distributed locally within the cells of the RNN. This alternative procedure allows us to train RNNs from scratch and to outperform MLE on three challenging structured prediction tasks. Finally we have proposed efficient scaling techniques that allow us to apply SEARNN on structured tasks for which the output vocabulary is very large, such as neural machine translation. The L2S literature provides several promising directions for further research. Adapting "bandit" L2S alternatives BID5 would allow us to apply SEARNN to tasks where only a single trajectory may be observed at any given point (so trying every possible token is not possible). Focused costing BID12 ) -a mixed roll-out policy where a fixed number of learned steps are taken before resorting to the reference policy -could help us lift the quadratic dependency of SEARNN on the sequence length. Finally, targeted sampling BID12 ) -a smart sampling strategy that prioritizes cells where the model is uncertain of what to do -could enable more efficient exploration for large-scale tasks. Let us consider the case where we perform the roll-in up until the t th cell. In order to be able to perform roll-outs from that t th cell, a hidden state is needed. If we used a reference policy roll-in, this state is obtained by running the RNN until the t th cell by using the teacher forcing strategy, i.e. by conditioning the outputs on the ground truth. Finally, SEARNN also needs to know what the predictions for the full sequence were in order to compute the costs. When the reference roll-in is used, we obtain the predictions up until the t th cell by simply copying the ground truth. Hence, we discard the outputs of the RNN that are before the t th cell. Choosing a classifier: to backpropagate or not to backpropagate? In standard L2S, the classifier and the feature extractor are clearly delineated. The latter is a fixed hand-crafted transformation applied on the input and the partial sequence that has already been predicted. One then has to pick a classifier and its convergence properties carry over to the initial problem. In SEARNN, we choose the RNN itself as our classifier. The fixed feature extractor is reduced to the bare minimum (e.g. one-hot encoding) and the classifier performs feature learning afterwards. In this setting, the intermediate dataset is the initial state and all previous decisions (x, y 1:t−1) combined with the cost vector. An alternative way to look at RNNs, is to consider the RNN cell as a shared classifier in its own right, and the beginning of the RNN (including the previous cells) as a feature extractor. One could then pick the RNN cell (instead of the full RNN) as the SEARNN classifier, in which case the intermediate dataset would be (h t−1, y t−1) 6 (the state at the previous step, combined with the previous decision) plus the cost vector. 
While this last perspective -seeing the RNN cell as the shared classifier instead of the full RNN -is perhaps more intuitive, it actually fits the L2S framework less well. Indeed, there is no clear delineation between classifier and feature extractor, as these functions are carried out by different instances of the same RNN cell (and as such share weights). This means that the feature extraction in this case is learned instead of being fixed. This choice of classifier has a direct consequence on the optimization routine. In case we pick the RNN itself, then each loss gradient has to be fully backpropagated through the network. On the other hand, if the classifier is the cell itself, then one should not backpropagate the gradient updates. Reference policy. The reference policy defined by BID8 picks the action which "minimizes the (corresponding) cost, assuming all future decisions are made optimally", i.e. arg min_{y_t} min_{y_{t+1:T}} l(y_{1:T}, y). For the roll-in phase, this policy corresponds to always picking the ground truth, since it leads to predicting the full ground truth sequence and hence the best possible loss. For the roll-out phase, computing this policy explicitly is easy in a few select cases. However, in the general case it is not tractable. One then has to turn to heuristics, whose performance can be relatively poor. While BID5 tell us that overcoming a bad reference policy can be done through a careful choice of roll-in/roll-out policies, the fact remains that the better the reference policy is, the better performance will be. Choosing this heuristic well is then quite important. The most basic heuristic is to simply use the ground truth. Of course, one can readily see that it is not always optimal. For example, when the model skips a token and outputs the next one, a, instead, it may be more beneficial to also skip a in the roll-out phase rather than to repeat it. Although we mostly chose this basic heuristic in this paper, using tailored alternatives can yield better results for tasks where it is suboptimal, such as machine translation (see Appendix C.2). C ADDITIONAL EXPERIMENTAL DETAILS. C.1 LOSSES. We now describe other losses we tried but did not perform as well (or at least not better) than the ones presented in the main text. The first two follow the target learning principle, as LL. Log-loss with cost-augmented softmax (LLCAS). LLCAS is another attempt to leverage the structured information we have access to more meaningfully, through a slight modification of LL. We add information about the full costs in the exponential, following e.g. BID21; BID9; BID13: L_t(s_t; c_t) = − log ( exp(s_t(a*) + α c_t(a*)) / Σ_{i=1}^{A} exp(s_t(i) + α c_t(i)) ), where a* = arg min_{a∈A} c_t(a). α is a scaling parameter that ensures that the scores of the model and the costs are not too dissimilar, and can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Although it leverages the structured loss information more directly and thus should in principle mitigate the 0/1 nature of MLE better, we did not observe any significant improvements over LL, even after tuning the scaling parameter α. LLCAS can be seen as a smooth version of the (cost-sensitive) structured hinge loss used for structured SVMs, which we also consider: L_t(s_t; c_t) = max_{a∈A} (s_t(a) + c_t(a)) − s_t(a*), where a* = arg min_{a∈A} c_t(a). While this loss did enable the RNNs to learn, the overall performance was actually slightly worse than that of MLE.
This may be due to the fact that RNNs have a harder time optimizing the resulting objective, compared to others more similar to the traditional MLE objective (which they have been tuned to train well on). Consistent loss. This last loss is inspired by traditional structured prediction. Following BID18, we define: L_t(c_t) = Σ_{a∈A} c_t(a) ln(1 + exp(s̃_t(a))), where s̃_t(a) = s_t(a) − (1/A) Σ_{a'∈A} s_t(a'). Unfortunately, we encountered optimization issues and could not get significant improvements over the MLE baseline. KL and label smoothing. We have seen that when the loss function is the Hamming loss, the reference policy is to simply output the ground truth. In this case, LL with a reference roll-in and roll-out is equivalent to MLE. Interestingly, in the same setup KL is also equivalent to an existing method: the label smoothing technique. Indeed, the vector of costs can be written as a vector with equal coordinates minus a one-hot vector with all its mass on the ground truth token. After transformation through a softmax operator, this yields the same target distribution as in label smoothing. Custom sampling. For this experiment, we decided to sample 15 tokens per cell according to the top-k policy (as the vocabulary size is quite big, sampling tokens with low probability is not very attractive), as well as 10 neighboring ground truth labels around the cell. The rationale for these neighboring tokens is that skipping or repeating words is quite a common mistake in NMT. Custom reference policy. The very basic reference policy we have been using for the other experiments of the paper is too bad a heuristic for BLEU to perform well. Instead, we try adding every suffix in the ground truth sequence to the current predictions and we pick the one with the highest BLEU-1 score (using this strategy with BLEU-4 leads to unfortunate events when the best suffix to add is always the entire sequence, leading to uninformative costs). Reference roll-in. As mentioned in Section 6, we had to switch from a learned to a reference roll-in. In addition to the existing problems of a weak reference policy (which affects a learned roll-in much more than a reference one), and the introduction of a harder optimization problem, there is another potential source of explanation: this may illustrate a gap in the standard reduction theory from the L2S framework. Indeed, the standard reduction analysis BID8 BID5 guarantees that the level of performance of the classifier on the reduced problem translates to overall performance on the initial problem. However, this does not take into account the fact that the reduced problem may be harder or easier, depending on the choice of roll-in/roll-out combination. In this case, it appears that using a learned roll-in may have led to a harder reduced problem and thus ultimately worse overall performance.
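To make the roll-in/roll-out cost collection at the heart of SEARNN (Section 4 and Algorithm 1) concrete, here is a minimal Python sketch; the policy and error callables are hypothetical placeholders (a reference roll-in/roll-out with the Hamming error in the toy example), not the authors' code.

```python
def searnn_cost_vectors(roll_in, roll_out, test_error, ground_truth, T, vocab):
    """Collect one cost vector per cell, following the SEARNN roll-in/roll-out scheme.
    roll_in(t)          -> prefix y_{1:t-1} produced by the roll-in policy
    roll_out(prefix, a) -> full sequence obtained by forcing token a at step t and
                           finishing with the roll-out policy
    test_error(y, y_gt) -> structured error (e.g. Hamming or edit distance)."""
    cost_vectors = []
    for t in range(T):
        prefix = roll_in(t)
        costs = {a: test_error(roll_out(prefix, a), ground_truth) for a in vocab}
        cost_vectors.append(costs)
    return cost_vectors

# Tiny worked example with a 2-token vocabulary and the Hamming error.
vocab = [0, 1]
gt = [1, 0, 1]
hamming = lambda y, y_gt: sum(a != b for a, b in zip(y, y_gt))
roll_in = lambda t: gt[:t]                                        # reference roll-in: copy the ground truth
roll_out = lambda prefix, a: prefix + [a] + gt[len(prefix) + 1:]  # reference roll-out
print(searnn_cost_vectors(roll_in, roll_out, hamming, gt, T=3, vocab=vocab))
```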
We introduce SeaRNN, a novel algorithm for RNN training, inspired by the learning to search approach to structured prediction, in order to avoid the limitations of MLE training.
1,760
scitldr
Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine \expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against three alternative baselines. As they gain experience, humans develop rich repertoires of motion skills that are useful in different contexts and environments. Recent advances in reinforcement learning provide an opportunity to understand how motion repertoires can best be learned, recalled, and augmented. Inspired by studies on the development and recall of movement patterns useful for different locomotion contexts BID17, we develop and evaluate an approach for learning multi-skilled movement repertoires. In what follows, we refer to the proposed method as PLAID: Progressive Learning and Integration via Distillation. For long lived applications of complex control tasks a learning system may need to acquire and integrate additional skills. Accordingly, our problem is defined by the sequential acquisition and integration of new skills. Given an existing controller that is capable of one-or-more skills, we wish to: (a) efficiently learn a new skill or movement pattern in a way that is informed by the existing control policy, and (b) to reintegrate that into a single controller that is capable of the full motion repertoire. This process can then be repeated as necessary. We view PLAID as a continual learning method, in that we consider a context where all tasks are not known in advance and we wish to learn any new task in an efficient manner. However, it is also proves surprisingly effective as a multitask solution, given the three specific benchmarks that we compare against. In the process of acquiring a new skill, we also allow for a control policy to be augmented with additional inputs, without adversely impacting its performance. This is a process we refer to as input injection. Understanding the time course of sensorimotor learning in human motor control is an open research problem BID31 ) that exists concurrently with recent advances in deep reinforcement learning. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in reinforcement learning. While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. 
Distillation refers to the problem of combining the policies of one or more experts in order to create one single controller that can perform the tasks of a set of experts. It can be cast as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert policies BID13 BID28 BID19. However, given a new task for which an expert is not given, it is less clear how to learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. One wellknown technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) BID12, which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomlyinitialized model BID16. Additionally, while learning a new skill, the control policy should not forget how to perform old skills. The core contribution of this paper is a method Progressive Learning and Integration via Distillation (PLAiD) to repeatedly expand and integrate a motion control repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against three alternative baselines. We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while preserving existing capabilities. Transfer learning and distillation are of broad interest in machine learning and RL BID12 BID27 BID28. Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the reinforcement learning setting. A popular approach uses supervised learning to combine each policy by regression over the action distribution. This approach yields model compression BID19 as well as a viable method for multi-task policy transfer BID13 on discrete action domains including the Arcade Learning Environment BID0. We adopt these techniques and extend them for the case of complex continuous action space tasks and make use of them as building block. Transfer Learning Transfer learning exploits the structure learned from a previous task in learning a new task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The concept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games BID19 BID13 BID20 BID2 Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, BID23 BID9. Recent work seeks to mitigate this issue using selective learning rates for specific network parameters. A different approach to combining policies is to use a hierarchical structure BID29. In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. 
However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve control policies that can quickly adapt their behaviour according to current rewards BID4. This work is demonstrated on parameterized task domains. The Powerplay method provides a general framework for training an increasingly general problem solver BID21 BID25. It is based on iteratively: inventing a new task using play or invention; solving this task; and, lastly, demonstrating the ability to solve all the previous tasks. The last two stages are broadly similar to our PLAID approach, although to the best of our knowledge, there are no experiments on motor control tasks of comparable complexity to the ones we tackle. In our work, we develop a specific progressive learning-and-distillation methodology for motor skills, and provide a detailed evaluation as compared to three other plausible baselines. We are specifically interested in understanding issues that arise from the interplay between transfer from related tasks and the forgetting that may occur. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks BID29. This allows for the substitution of network modules for different robot types over similar tasks BID3. Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks BID7 BID5 BID15. While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions that can be shaped by prior mocap clips BID11, and shows that these can then be integrated in a hierarchical controller. In this section we outline the details of the Reinforcement Learning (RL) framework. We also give an introduction to the concepts of TL and distillation. Leveraging the framework of reinforcement learning, we frame the problem as a Markov Decision Process (MDP): at each time step t, the world (including the agent) is in a state s_t ∈ S, wherein the agent is able to perform actions a_t ∈ A, sampled from a policy π(s_t, a_t) = p(a_t | s_t) and resulting in state s_{t+1} ∈ S according to transition probabilities T(s_t, a_t, s_{t+1}). Performing action a_t from state s_t produces a reward r_t; the expected cumulative reward earned from following some policy π may then be written as J(π) = E[ Σ_{t=0}^{T} γ^t r_t ], where T is the time horizon, and γ is the discount factor, defining the planning horizon length. The agent's goal is to learn an optimal policy, π*, maximizing J(π). If the policy has parameters θ_π, then the goal may be reformulated as identifying the optimal parameters θ*_π = arg max_{θ_π} J(π(·|θ_π)). Our policy models a Gaussian distribution with a state-dependent mean μ_θ(s_t). Thus, our stochastic policy may be formulated as π(a_t | s_t, θ_π) = N(μ_θ(s_t), Σ), where Σ is a diagonal covariance matrix with entries σ_i^2 on the diagonal, similar to BID15. To optimize our policy, we use stochastic policy gradient methods, which are a well-established family of techniques for reinforcement learning BID26.
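As an illustration of this parameterization, a minimal NumPy sketch of a Gaussian policy with a state-dependent mean and fixed diagonal covariance could look as follows; the linear mean function and the sigma value are placeholders standing in for the actual policy network.

import numpy as np

class GaussianPolicy:
    # pi(a | s) = N(mu_theta(s), Sigma) with a fixed diagonal covariance.
    def __init__(self, state_dim, action_dim, sigma=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        # A linear state-dependent mean stands in for the policy network.
        self.W = 0.01 * self.rng.standard_normal((action_dim, state_dim))
        self.sigma = sigma  # diagonal entries sigma_i of Sigma

    def mean(self, s):
        return self.W @ s

    def sample(self, s):
        # Draw an action around the state-dependent mean.
        return self.mean(s) + self.sigma * self.rng.standard_normal(self.W.shape[0])

policy = GaussianPolicy(state_dim=50, action_dim=11)  # pd-biped dimensions
action = policy.sample(np.zeros(50))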
The gradient of the expected reward with respect to the policy parameters, ∇ θπ J(π(·|θ π)), is given by: DISPLAYFORM3 where DISPLAYFORM4 represents the initial state distribution, and p 0 (s 0)(s 0 → s | t, π 0) models the likelihood of reaching state s by starting at state s 0 and following the policy π(a, s|θ π) for T steps BID24. A π (s, a) represents an advantage function BID22. In this work, we use the Positive Temporal Difference (PTD) update proposed by BID30 for A π (s, a): DISPLAYFORM5 DISPLAYFORM6 where V π (s) = E T t=0 γ t r t | s 0 = s is the value function, which gives the expected discounted cumulative reward from following policy π starting in state s. PTD has the benefit of being insensitive to the advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of actions that have a positive advantage has been found to increase the stability of learning BID30. Because the true value function is unknown, an approximation V π (· | θ v) with parameters θ v is learned, which is formulated as the regression problem: DISPLAYFORM7 3.2 POLICY DISTILLATION Given a set of expert agents that have solved/mastered different tasks we may want to combine the skills of these different experts into a single multi-skilled agent. This process is referred to as distillation. Distillation does not necessarily produce an optimal mix of the given experts but instead tries to produce an expert that best matches the action distributions produced by all experts. This method functions independent of the reward functions used to train each expert. Distillation also scales well with respect to the number of tasks or experts that are being combined. Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a new task efficiently. This problem falls in the area of Transfer Learning BID12. Considering the state distribution expert is skilled at solving, (D ωi the source distribution) it can be advantageous to start learning a new, target task ω i+1 with target distribution D ωi+1 using assistance from the expert. The agent learning how to solve the target task with domain D ωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the D ωi and D ωi+1 state distributions. Although we focus on the problem of being presented with tasks sequentially, there exist other methods for learning a multi-skilled character. We considered 4 overall integration methods for learning multiple skills, the first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately BID13. The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to randomly initialize controllers and train in parallel (Parallel) and then combine the ing policies Figure 1b. We found that learning many skills from scratch was challenging, we were only able to get fair for the flat task. Also, when a new task is to be learned with the Parallel model it would occur outside of the original parallel learning, leading to a more sequential method. 
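The PTD advantage used above can be sketched in a few lines; we assume here the standard one-step TD error formulation, i.e. the advantage is 1 when delta = r + gamma * V(s') - V(s) is positive and 0 otherwise.

def ptd_advantage(reward, v_s, v_s_next, gamma=0.99):
    # One-step TD error for the transition (s, a, r, s').
    delta = reward + gamma * v_s_next - v_s
    # Positive Temporal Difference: only positive TD errors drive the policy
    # update, which makes the advantage insensitive to its scale.
    return 1.0 if delta > 0 else 0.0

# The critic V(. | theta_v) itself is fit by regressing onto the observed
# discounted returns, as in the regression problem stated above.

Returning to the multi-skill integration strategies introduced above, the remaining two alternatives are described next.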
A TL-Only method that uses TL while learning tasks in a sequence Figure 1c, possibly ending with a distillation step to combine the learned policies to decrease forgetting. For more details see Appendix: 8.4. The last version (PLAiD) learns each task sequentially using TL from the previous, most skilled policy, in the end ing in a policy capable of solving all tasks Figure 1d. This method works well for both combining learned skills and learning new skills. DISPLAYFORM0 Figure 1: Different curriculum learning process. The red box with a D in it denotes a distillation step that combines policies. Each gray box denotes one iteration of learning a new policy. The larger red boxes with an L terrain−type denotes a learning step where a new skill is learned. In this section, we detail our proposed learning framework for continual policy transfer and distillation (PLAiD). In the acquisition (TL) step, we are interested in learning a new task ω i+1. Here transfer can be beneficial if the task structure is somewhat similar to previous tasks ω i. We adopt the TL strategy of using an existing policy network and fine-tuning it to a new task. Since we are not concerned with retaining previous skills in this step, we can update this policy without concern for forgetting. As the agent learns it will develop more skills and the addition of every new skill can increase the probability of transferring knowledge to assist the learning of the next skill. In the integration (distillation) step, we are interested in combining all past skills (π 0, . . ., π i) with the newly acquired skill π i+1. Traditional approaches have used policy regression where data is generated by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always in robust behaviour. This poor behaviour is caused by the student experiencing a different distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, portions of the trajectories should be generated by the student. This allows the expert to suggest behaviour that will pull the state distribution of the student closer to the expert's. This is a common problem in learning a model to reproduce a given distribution of trajectories BID18 BID1 BID10 BID8. We use a method similar to the DAGGER algorithm BID18 which is useful for distilling policies BID13. See Appendix: 8.2.1 for more details. As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. The presented in this work cover a range of tasks that share a similar action space and state space. Our focus is to demonstrate continual learning between related tasks. In addition, the conceptual framework allows for extensions that would permit differing state spaces, described later in Section: 5.2. In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker (pdbiped) learns to traverse. The humanoid walker is trained to navigate multiple types of terrain including flat in FIG5 ), incline FIG5 ), steps FIG5 ), slopes FIG5 ), gaps FIG5 and a combination of all terrains mixed FIG5 ) on which agents are trained. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to BID14. 
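The overall PLAiD curriculum (acquire a new skill by transfer, then re-integrate it by distillation) can be summarized by the following sketch; rl_finetune and distill are hypothetical stand-ins for the RL fine-tuning and DAGGER-style distillation routines described in this section.

def plaid(tasks, initial_policy, rl_finetune, distill):
    # Progressive Learning and Integration via Distillation, high-level loop.
    experts = []
    policy = initial_policy
    for task in tasks:
        expert = rl_finetune(policy, task)   # acquisition (TL) step
        experts.append(expert)
        # Integration (distillation) step, initialized from the newest expert.
        policy = distill(experts, expert)
    return policy                            # multi-skilled policy for all tasks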
The pd-biped receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed description of the experimental setup is included in Section: 8.5. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. We evaluate our approach against three baselines. First, we compare the above learning curriculum from learning new tasks in PLAiD with learning new tasks in Parallel. This will demonstrate that knowledge from previous tasks can be effectively transferred after distillation steps. Second, we compare to the MultiTasker to demonstrate that iterated distillation is effective for the retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. Last, a method that performs TL between tasks and concludes with a distillation step is evaluated to illustrate the of different TL and distillation schedules. The of the PLAiD controller are displayed in the accompanying Video 1 5.1 TRANSFER LEARNING First, the pd-biped is trained to produce a walking motion on flat ground (flat). In FIG0 PLAiD is compared to the three baselines for training on incline. The TL-Only method learns fast as it is given significant information about how to perform similar skills. The Parallel method is given no prior information leading to a less skilled policy. The first MultiTasker for the incline task is initialized from a terrain injected controller that was trained to walk on flat ground. Any subsequent MultiTasker is initialized from the final MultiTasker model of the preceding task. This controller has to learn multiple tasks together, which can complicate the learning process, as simulation for each task is split across the training and the overall RL task can be challenging. This is in contrast to using PLAiD, that is also initialized with the same policy trained on flat, that will integrate skills together after each new skill is learned. In FIG0 the MultiTasker is learning the new task (steps) with similar speed to PLAiD. However, after adding more tasks the MultiTasker is beginning to struggle in FIG0 and starts to forget in FIG0, with the number of tasks it must learn at the same time. While PLAiD learns the new tasks faster and is able to integrate the new skill required to solve the task robustly. TL-Only is also able to learn the new tasks very efficiently. An appealing property of using distillation in PLAiD is that the combined policy model need not resemble that of the individual expert controllers. For example, two different experts lacking state features and trained without a local map of the terrain can be combined into a single policy that has new state features for the terrain. These new terrain features can assist the agent in the task domain in which it operates. We introduce the idea of input injection for this purpose. We augment a policy with additional input features while allowing it to retain its original functional behaviour similar to BID2. This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. 
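A minimal PyTorch sketch of this zero-initialization trick is given below; the module names are ours, and the new branch is assumed to feed additively into an existing hidden layer.

import torch.nn as nn

def make_injected_branch(extra_dim, hidden_dim):
    # Maps the newly injected features (e.g. the 50 terrain heights) into an
    # existing hidden layer. Zero weights and bias mean the branch initially
    # contributes nothing, so the original policy's behaviour is preserved,
    # while gradients can still flow and gradually exploit the new inputs.
    branch = nn.Linear(extra_dim, hidden_dim)
    nn.init.zeros_(branch.weight)
    nn.init.zeros_(branch.bias)
    return branch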
By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behaviour. This is performed when distilling the flat and incline experts. Further details can be found in Appendix: 8.3. Training over multiple tasks at the same time may help the agent learn skills quicker, but this may not scale with respect to the number of tasks. When training the MultiTasker over two or even three tasks (FIG1) the method displays good , however when learning a fourth or more tasks the method struggles, as shown in FIG1 and 3b. Part of the reason for this struggle is when new Table 1: These values are relative percentage changes in the average reward, where a value of 0 is no forgetting and a value of −1 corresponds to completely forgetting how to perform the task. A value > 0 corresponds to the agent learning how to better perform a task after training on other tasks. Here, the final policy after training on gaps compared to the original polices produced at the end of training for the task noted in the column heading. The TL-Only baseline forgets more than PLAiD. The MultiTasker forgets less than PLAiD but has a lower average reward over the tasks.tasks are added the MultiTasker has to make trade-offs between more tasks to maximizes. As more tasks are added, this trade-off becomes increasingly complex ing in the MultiTasker favouring easier tasks. Using PLAiD to combine the skills of many policies appears to scale better with respect to the number of skills being integrated. This is likely because distillation is a semi-supervised method which is more stable than the un-supervised RL solution. This can be seen in FIG1, 3e and especially in 3f where PLAiD combines the skills faster and can find higher value policies in practice. PLAiD also presents zero-shot training on tasks which it has never been trained on. In Figure 7 this generalization is shown as the agent navigates across the mixed environment. This is also reflected in Table 1, that shows the final average reward when comparing methods before and after distillation. The TL-Only is able to achieve high performance but much is lost when learning new tasks. A final distillation step helps mitigate this issue but does not work as well as PLAiD. It is possible performing a large final distillation step can lead to over-fitting. There are some indications that distillation is hindering training during the initial few iterations. We are initializing the network used in distillation with the most recently learning policy after TL. The large change in the initial state distribution from the previous seen distribution during TL could be causing larger gradients to appear, disrupting some of the structure learned during the TL step, shown in FIG1 and 3e. There also might not exist a smooth transition in policy space between the newly learned policy and the previous policy distribution. MultiTasker vs PLAiD: The MultiTasker may be able to produce a policy that has higher overall average reward, but in practise constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks, the MultiTasker can favour a task with higher rewards, as these tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. 
The MultiTasker may also favour tasks that are easier than other tasks in general. We have shown that the PLAiD scales better with respect to the number of tasks than the MultiTasker. We expect PLAiD would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar. In our evaluation we compare the number of iterations PLAiD uses to the number the MultiTasker uses on only the new task, which is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. If the idea is to reduce the number of simulation samples that are needed to learn new tasks then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioural cloning. We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. Transfer Learning: Because we are using an actor-critic learning method, we also studied the possibility of using the value functions for TL. We did not discover any empirical evidence that this assisted the learning process. When transferring to a new task, the state distribution has changed and the reward function may be completely different. This makes it unlikely that the value function will be accurate on this new task. In addition, value functions are in general easier and faster to learn than policies, implying that value function reuse is less important to transfer. We also find that helpfulness of TL depends on not only the task difficulty but the reward function as well. Two tasks may overlap in state space but the area they overlap could be easily reachable. In this case TL may not give significant benefit because the overall RL problem is easy. The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. Once integrated, the skills for our locomotion tasks are self-selecting based on their context, i.e., the knowledge of the upcoming terrain. It may be that other augmentation and distillation strategies are better for situations where either the reward functions are different or a one-hot vector is used to select the currently active expert. In our transfer learning we could be over fitting the initial expert for the particular task it was learning. Making it more challenging for the policy to learn a new task, ing in negative transfer. After learning many new tasks the previous tasks may not receive a large enough potion of the distillation training process to preserve the experts skill well enough. How best to chose which data should be trained on next to best preserve the behaviour of experts is a general problem with multi-task learning. Distillation treats all tasks equally independent of their reward. This can in very low value tasks, receiving potentially more distribution than desired and high value tasks receiving not enough. We have not needed the use a one-hot vector to indicate what task the agent is performing. We want the agent to be able to recognize which task it is given but we do realize that some tasks could be too similar to differentiate, such as, walking vs jogging on flat ground. 
It would be interesting to develop a method to prioritize tasks during the distillation step. This could assist the agent with forgetting issues or help with relearning tasks. While we currently use the Mean Squared Error (MSE) to pull the distributions of student policies in line with expert polices for distillation, better distance metrics would likely be helpful. Previous methods have used KL Divergence in the discrete action space domain where the state-action value function encodes the policy, e.g., as with Deep Q-Network (DQN). In this work we do not focus on producing the best policy from a mixture of experts, but instead we match the distributions from a number of experts. The difference is subtle but in practice it can be more challengine to balance many experts with respect to their reward functions. It could also be beneficial to use a KL penalty while performing distillation, i.e., something similar to the work in BID28 in order to keep the policy from changing too rapidly during training. We have proposed and evaluated a method for the progressive learning and integration (via distillation) of motion skills. The method exploits transfer learning to speed learning of new skills, along with input injection where needed, as well as continuous-action distillation, using DAGGER-style learning. This compares favorably to baselines consisting of learning all skills together, or learning all the skills individually before integration. We believe that there remains much to learned about the best training and integration methods for movement skill repertoires, as is also reflected in the human motor learning literature. We augment the blind network design by adding features for terrain to create an agent with sight. This network with terrain features has a single convolution layer with 8 filters of width 3. This constitutional layer is followed by a dense layer of 32 units. The dense layer is then concatenated twice, once along each of the original two hidden layers in the blind version of the policy. The policy network models a Gaussian distribution by outputting a state dependant mean. We use a state independent standard deviation that normalized with respect to the action space and multiplied by 0.1. We also use a version of epsilon greedy exploration where with probability an exploration action is generated. For all of our experiments we linearly anneal from 0.2 to 0.1 in 100, 000 iterations and leave it from that point on. Each training simulation takes approximately 5 hours across 8 threads. For network training we use Stochastic Gradient Decent (SGD) with momentum. During the distillation step we use gradually anneal the probability of selecting an expert action from 1 to 0 over 10, 000 iterations. For the evaluation of each model on a particular task we use the average reward achieved by the agent over at most 100 seconds of simulation time. We average this over running the agent over a number of randomly generated simulation runs. For each of the distillation steps we initialize the policy from the most recently trained policy. This policy has seen all of the tasks thus far but may have overfit the most recent tasks. We us a version of the DAGGER algorithm for the distillation process BID18. We anneal from selecting actions from the expert polices to selecting actions from the student policy The probability of selecting an action from the expert is annealed to near zero after 10, 000 training updates. 
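The annealed action selection used during distillation can be sketched as follows; expert and student are assumed to be callables mapping a state to an action, and the 10,000-update horizon matches the annealing schedule described above.

import random

def distillation_step(state, expert, student, update_count, anneal_updates=10_000):
    # Probability of letting the expert drive the simulation decays linearly.
    p_expert = max(0.0, 1.0 - update_count / anneal_updates)
    expert_action = expert(state)
    applied = expert_action if random.random() < p_expert else student(state)
    # The regression target is always the expert's action, even when the
    # student's action was the one actually executed in the simulator.
    return applied, expert_action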
We still add exploration noise to the policies when generating actions to take in the simulation. This is also annealed along with the probability of selecting from the expert policy. The actions used for training always come from the expert policy. Although some actions are applied in the simulation from the student, during a training update those actions will be replaced with ones from the proper expert. The expert used to generate actions for tasks 0 − i is π i and the expert used to generate action for task i + 1 is π i+1. We keep around at most 2 policies at any time. In order to add additional input features to the policy network we construct a new network. This new network has a portion of it that is the same design as the previous network plus additional parameters. First we initialize the new network with random parameters. Then we copy over the values from the previous network into the new one for the portion of the network design that matches the old. Then the weight for the layers that connect the old portion of the network to the new are set to 0. This will allow the network to preserve the previous distribution it modeled. Having the parameters from the old network will also help generate gradients to train the new 0 valued network parameters. We use feature injection to assist the learning method with differentiating between different states. For example, it could be challenging to discover the difference between the flat and incline tasks using only the character features. Therefore, we add new terrain features to allow the controller to better differentiate between these two different tasks. We also evaluate a baseline where we TL for all tasks. In this baseline TL is performed for a number of tasks and then distillation is used to combined these many learned skills. This method can be considered a version of PLAiD where tasks are learned in groups and after some number of tasks, a collection of policies/skills are distilled together. In FIG3 the learning curves for the TL-Only baseline are given. The TL-Only method learns new tasks well. We do not show the incline tasks as the two methods are the same up to starting the steps tasks. In Table 1 the amount of forgetting is compared between methods. To compare the amount of forgetting between TL-Only and PLAiD we show the relative loss in average reward between the original policy trained for the tasks steps and slopes and the final polices for each method on gaps. The TL-Only method shows a larger drop in policy performance corresponding to a large amount of forgeting compared to PLAiD, in particular for the more complex tasks steps and slopes. Interestingly, the final distllation step for TL-Only appears to reduce the performance of the policy. We believe this is related to the final distillation step being more challenging than performing a simpler distillation after each new task. Note that we only compare these two steps because the process for the first two tasks for PLAiD and TL-Only are the same. A comparison of the average rewards for the final policies are given in TAB1. The agent used in the simulation models the dimensions and masses of the average adult. The size of the character state is 50 parameters that include the relative position and velocity of the links in the agent FIG2 ). The action space consists of 11 parameters that indicate target joint positions for the agent. The target joint positions (pd-targets) are turned into joint torques via proportional derivative controllers at each joint. 
The reward function for the agent consists of 3 primary terms. The first is a velocity term the rewards the agent for going at velocity of 1 m/s The second term is the difference between the pose of the agent and the current pose of a kinematic character controlled via a motion capture clip. The difference between the agent and the clip consists of the rotational difference between each corresponding joint and the difference in angular velocity. The angular velocity for the clip is approximated via finite differences between the current pose of the clip and it's last pose. The last term is an L2 penalty on the torques generated by the agent to help reduce spastic motions. We also impose torque limits on the joints to reduce unrealistic behaviour, limits: Hips 150, knees 125, ankles 100, shoulders 100, elbows 75 and neck 50 N/m. Terrain Types All terrain types are randomly generated per episode, except for the flat terrain. The incline terrain is slanted and the slant of the terrain is randomly sampled between 20 and 25 degrees. The steps terrain consists of flat segments with widths randomly sampled from 1.0 m to 1.5 m followed by sharp steps that have randomly generated heights between 5 cm and 15 cm. The slopes terrain is randomly generated by updating the slope of the previous point in the ground with a value sampled from −20 and 20 degrees to generate a new portion of the ground every 10 cm. The gaps terrain generate gaps of width 25 -30 cm separated by flat segments of widths sampled from 2.0 m to 2.5 m. The mixed terrain is a combination of the above terrains where a portion is randomly chosen from the above terrain types. Figure 7: Still frame shots of the pd-biped traversing the mixed environment. In certain cases the MultiTasker can learn new task faster than PLAiD. In Figure 8a we present the MultiTasker and compare it to PLAiD. In this case the MultiTasker splits its training time across multiple tasks, here we compare the two methods with respect to the time spent learning on the single new task. This is a good baseline to compare our method against but in some ways this is not fair. If the real measure of how efficient a learning method is the number of simulation samples that are needed to learn would fall far behind as the MultiTasker needs to train across all tasks to gain the benefits of improving a single task without forgetting the old tasks.(a) Figure 8: (a) Shows that the MultiTasker can learn faster on steps, flat and incline than PLAiD (expert) learning the single task steps with TL.
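For completeness, the three-term reward described at the start of this appendix could be sketched as below; the exponential shaping and the weights w_v, w_p, w_tau are illustrative assumptions, not the exact terms or values used in the paper.

import numpy as np

def walking_reward(forward_velocity, joint_angles, joint_velocities,
                   clip_angles, clip_velocities, torques,
                   w_v=0.1, w_p=0.8, w_tau=1e-4, target_velocity=1.0):
    # Velocity term: reward staying close to the target 1 m/s forward speed.
    r_velocity = np.exp(-(forward_velocity - target_velocity) ** 2)
    # Pose term: match the kinematic mocap clip in joint rotations and
    # (finite-difference approximated) angular velocities.
    pose_error = np.sum((joint_angles - clip_angles) ** 2)
    velocity_error = np.sum((joint_velocities - clip_velocities) ** 2)
    r_pose = np.exp(-(pose_error + velocity_error))
    # L2 penalty on the applied torques to discourage spastic motions.
    r_torque = -np.sum(np.square(torques))
    return w_v * r_velocity + w_p * r_pose + w_tau * r_torque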
A continual learning method that uses distillation to combine expert policies and transfer learning to accelerate learning new skills.
1,761
scitldr
Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go. However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings. Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent. Decision trees are interpretable as each action made can be traced back to the decision rule path that lead to it. However, one global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries. We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions. We propose a training procedure to support non-differentiable decision tree experts and integrate it into imitation learning procedure of Viper. We evaluate our algorithm on four OpenAI gym environments, and show that the policy constructed in such a way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward. We also show that MoET policies are amenable for verification using off-the-shelf automated theorem provers such as Z3. Deep Reinforcement Learning (DRL) has achieved many recent breakthroughs in challenging domains such as Go . While using neural networks for encoding state representations allow DRL agents to learn policies for tasks with large state spaces, the learned policies are not interpretable, which hinders their use in safety-critical applications. Some recent works leverage programs and decision trees as representations for interpreting the learned agent policies. PIRL uses program synthesis to generate a program in a Domain-Specific Language (DSL) that is close to the DRL agent policy. The design of the DSL with desired operators is a tedious manual effort and the enumerative search for synthesis is difficult to scale for larger programs. In contrast, Viper learns a Decision Tree (DT) policy by mimicking the DRL agent, which not only allows for a general representation for different policies, but also allows for verification of these policies using integer linear programming solvers. Viper uses the DAGGER imitation learning approach to collect state action pairs for training the student DT policy given the teacher DRL policy. It modifies the DAGGER algorithm to use the Q-function of teacher policy to prioritize states of critical importance during learning. However, learning a single DT for the complete policy leads to some key shortcomings such as i) less faithful representation of original agent policy measured by the number of mispredictions, ii) lower overall performance (reward), and iii) larger DT sizes that make them harder to interpret. In this paper, we present MOËT (Mixture of Expert Trees), a technique based on Mixture of Experts (MOE) (; ;), and reformulate its learning procedure to support DT experts. MOE models can typically use any expert as long as it is a differentiable function of model parameters, which unfortunately does not hold for DTs. Similar to MOE training with Expectation-Maximization (EM) algorithm, we first observe that MOËT can be trained by interchangeably optimizing the weighted log likelihood for experts (independently from one another) and optimizing the gating function with respect to the obtained experts. 
Then, we propose a procedure for DT learning in the specific context of MOE. To the best of our knowledge we are first to combine standard non-differentiable DT experts, which are interpretable, with MOE model. Existing combinations which rely on differentiable tree or treelike models, such as soft decision trees and hierarchical mixture of experts are not interpretable. We adapt the imitation learning technique of Viper to use MOËT policies instead of DTs. MOËT creates multiple local DTs that specialize on different regions of the input space, allowing for simpler (shallower) DTs that more accurately mimic the DRL agent policy within their regions, and combines the local trees into a global policy using a gating function. We use a simple and interpretable linear model with softmax function as the gating function, which returns a distribution over DT experts for each point in the input space. While standard MOE uses this distribution to average predictions of DTs, we also consider selecting just one most likely expert tree to improve interpretability. While decision boundaries of Viper DT policies must be axis-perpendicular, the softmax gating function supports boundaries with hyperplanes of arbitrary orientations, allowing MOËT to more faithfully represent the original policy. We evaluate our technique on four different environments: CartPole, Pong, Acrobot, and Mountaincar. We show that MOËT achieves significantly better rewards and lower misprediction rates with shallower trees. We also visualize the Viper and MOËT policies for Mountaincar, demonstrating the differences in their learning capabilities. Finally, we demonstrate how a MOËT policy can be translated into an SMT formula for verifying properties for CartPole game using the Z3 theorem prover (De Moura and Bjørner, 2008) under similar assumptions made in Viper. In summary, this paper makes the following key contributions: 1) We propose MOËT, a technique based on MOE to learn mixture of expert decision trees and present a learning algorithm to train MOËT models. 2) We use MOËT models with a softmax gating function for interpreting DRL policies and adapt the imitation learning approach used in Viper to learn MOËT models. 3) We evaluate MOËT on different environments and show that it leads to smaller, more faithful, and performant representations of DRL agent policies compared to Viper while preserving verifiability. Interpretable Machine Learning: In numerous contexts, it is important to understand and interpret the decision making process of a machine learning model. However, interpretability does not have a unique definition that is widely accepted. Accoding to Lipton , there are several properties which might be meant by this word and we adopt the one which Lipton names transparency which is further decomposed to simulability, decomposability, and algorithmic transparency. A model is simulable if a person can in reasonable time compute the outputs from given inputs and in that way simulate the model's inner workings. That holds for small linear models and small decision trees . A model is decomposable if each part of a models admits an intuitive explanation, which is again the case for simple linear models and decision trees . Algorithmic transparency is related to our understanding of the workings of the training algorithm. For instance, in case of linear models the shape of the error surface and properties of its unique minimum towards which the algorithm converges are well understood . 
MOËT models focus on transparency (as we discuss at the end of Section 5). Explainable Machine Learning: There has been a lot of recent interest in explaining decisions of black-box models (a;). For image classification, activation maximization techniques can be used to sample representative input patterns . TCAV uses human-friendly high-level concepts to associate their importance to the decision. Some recent works also generate contrastive robust explanations to help users understand a classifier decision based on a family of neighboring inputs . LORE (b) explains behavior of a blackbox model around an input of interest by sampling the black-box model around the neighborhood of the input, and training a local DT over the sampled points. Our model presents an approach that combines local trees into a global policy. Tree-Structured Models: Irsoy et al. propose a a novel decision tree architecture with soft decisions at the internal nodes where both children are chosen with probabilities given by a sigmoid gating function. Similarly, binary tree-structured hierarchical routing mixture of experts (HRME) model, which has classifiers as non-leaf node experts and simple regression models as leaf node experts, were proposed in . Both models are unfortunately not interpretable. Knowledge Distillation and Model Compression: We rely on ideas already explored in fields of model compression and knowledge distillation . The idea is to use a complex well performing model to facilitate training of a simpler model which might have some other desirable properties (e.g., interpretability). Such practices have been applied to approximate decision tree ensemble by a single tree , but this is different from our case, since we approximate a neural network. In a similar fashion a neural network can be used to train another neural network , but neural networks are hard to interpret and even harder to formally verify, so this is also different from our case. Such practices have also been applied in the field of reinforcement learning in knowledge and policy distillation (; ;), which are similar in spirit to our work, and imitation learning (; ; ;), which provide a foundation for our work. We now present a simple motivating example to showcase some of the key differences between Viper and MOËT approaches. Consider the N × N Gridworld problem shown in Figure 1a (for N = 5). The agent is placed at a random position in a grid (except the walls denoted by filled rectangles) and should find its way out. To move through the grid the agent can choose to go up, left, right or down at each time step. If it hits the wall it stays in the same position (state). State is represented using two integer values (x, y coordinates) which range from-bottom left to (N − 1, N − 1)-top right. The grid can be escaped through either left doors (left of the first column), or right doors (right of the last column). A negative reward of −0.1 is received for each agent action (negative reward encourages the agent to find the exit as fast as possible). An episode finishes as soon as an exit is reached or if 100 steps are made whichever comes first. The optimal policy (π *) for this problem consists of taking the left (right resp.) action for each state below (above resp.) the diagonal. We used π * as a teacher and used imitation learning approach of Viper to train an interpretable DT policy that mimics π *. The ing DT policy is shown in Figure 1b. 
The DT partitions the state space (grid) using lines perpendicular to the x and y axes, until it separates all states above the diagonal from those below. This results in a DT of depth 3 with 9 nodes. On the other hand, the policy learned by MOËT is shown in Figure 1c. The MOËT model with 2 experts learns to partition the space using the line defined by the linear function 1.06x + 1.11y = 4 (roughly the diagonal of the grid). Points on the different sides of the line correspond to two different experts, which are themselves DTs of depth 0, always choosing to go left (below) or right (above). We notice that the DT policy needs a much larger depth to represent π*, while MOËT can represent it with only one decision step. Furthermore, with increasing N (size of the grid), the complexity of the DT will grow, while the MOËT complexity stays the same; we empirically confirm this for N = 5, 6, 7, 8, 9, 10, for which the DT depths are 3, 4, 4, 4, 4, 5 and the numbers of nodes are 9, 11, 13, 15, 17, 21, respectively. In contrast, MOËT models of the same complexity and structure as the one shown in Figure 1c are learned for all values of N (the models differ only in the learned partitioning linear function). In this section we provide a description of the two relevant methods we build upon: Viper, an approach for interpretable imitation learning, and the MOE learning framework. Viper. The Viper algorithm (included in appendix) is an instance of the DAGGER imitation learning approach, adapted to prioritize critical states based on Q-values. Inputs to the Viper training algorithm are an environment e, which is a finite-horizon (T-step) Markov Decision Process (MDP) (S, A, P, R) with states S, actions A, transition probabilities P: S × A × S → [0, 1], and rewards R: S → R; a teacher policy π_t: S → A; its Q-function Q^{π_t}: S × A → R; and the number of training iterations N. The distribution of states after T steps in environment e using a policy π is d^(π)(e) (assuming a randomly chosen initial state). Viper uses the teacher as an oracle to label the data (states with actions). It initially uses the teacher policy to sample trajectories (states) to train a student (DT) policy. It then uses the student policy to generate more trajectories. Viper samples training points from the collected dataset D giving priority to states s having higher importance I(s), where I(s) = max_{a∈A} Q^{π_t}(s, a) − min_{a∈A} Q^{π_t}(s, a). This sampling of states leads to faster learning and shallower DTs. The process of sampling trajectories and training students is repeated for a number of iterations N, and the best student policy is chosen using the reward as the criterion. MOE is an ensemble model that consists of expert networks and a gating function. The gating function divides the input (feature) space into regions for which different experts are specialized and responsible. MOE is flexible with respect to the choice of expert models as long as they are differentiable functions of the model parameters (which is not the case for DTs). In the MOE framework, the probability of outputting y ∈ R^m given an input x ∈ R^n is given by P(y | x, θ) = Σ_{i=1}^{E} g_i(x, θ_g) P(y | x, θ_i) (Eq. 1), where E is the number of experts, g_i(x, θ_g) is the probability of choosing expert i (given input x), and P(y | x, θ_i) is the probability of expert i producing output y (given input x). The learnable parameters are θ = (θ_g, θ_e), where θ_g are the parameters of the gating function and θ_e = (θ_1, θ_2, ..., θ_E) are the parameters of the experts. The gating function can be modeled using a softmax function over a set of linear models. Let θ_g consist of parameter vectors (θ_g1, ..., θ_gE); then the gating function can be defined as g_i(x, θ_g) = exp(θ_gi^T x) / Σ_{j=1}^{E} exp(θ_gj^T x).
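A small NumPy sketch of the softmax gate and of the mixture prediction in Eq. 1 is given below; bias terms are omitted for brevity, and expert_probs[i] is assumed to be a callable mapping an input to a class distribution.

import numpy as np

def gating_probabilities(x, theta_g):
    # theta_g: (E, n) matrix whose i-th row is theta_gi; returns g_i(x, theta_g).
    logits = theta_g @ x
    logits = logits - logits.max()        # for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def moe_predict(x, theta_g, expert_probs):
    # Gate-weighted average of the experts' class distributions (Eq. 1),
    # followed by an argmax over classes.
    g = gating_probabilities(x, theta_g)
    mixture = sum(gi * p(x) for gi, p in zip(g, expert_probs))
    return int(np.argmax(mixture))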
In the case of classification, an expert i outputs a vector y i of length C, where C is the number of classes. Expert i associates a probability to each output class c (given by y ic) using the gating function. Final probability of a class c is a gate weighted sum of y ic for all experts i ∈ 1, 2,..., E. This creates a probability vector y = (y 1, y 2, ..., y C), and the output of MOE is arg max i y i. MOE is commonly trained using EM algorithm, where instead of direct optimization of the likelihood one performs optimization of an auxiliary functionL defined in a following way. Let z denote the expert chosen for instance x. Then joint likelihood of x and z can be considered. Since z is not observed in the data, log likelihood of samples (x, z, y) cannot be computed, but instead expected log likelihood can be considered, where expectation is taken over z. Since the expectation has to rely on some distribution of z, in the iterative process, the distribution with respect to the current estimate of parameters θ is used. More precisely functionL is defined by : where θ (k) is the estimate of parameters θ in iteration k. Then, for a specific sample D = {(x i, y i) | i = 1,..., N }, the following formula can be derived : where it holds In this section we explain the adaptation of original MOE model to mixture of decision trees, and present both training and inference algorithms. Considering that coefficients h (k) ij (Eq. 4) are fixed with respect to θ and that in Eq. 3 the gating part (first double sum) and each expert part depend on disjoint subsets of parameters θ, training can be carried out by interchangeably optimizing the weighted log likelihood for experts (independently from one another) and optimizing the gating function with respect to the obtained experts. The training procedure for MOËT, described by Algorithm 1, is based on this observation. First, the parameters of the gating function are randomly initialized (line 2). Then the experts are trained one by one. Each expert j is trained on a dataset D w of instances weighted by coefficients h (k) ij (line 5), by applying specific DT learning algorithm (line 6) that we adapted for MOE context (described below). After the experts are trained, an optimization step is performed (line 7) in order to increase the gating part of Eq. 3. At the end, the parameters are returned (line 8). Our tree learning procedure is as follows. Our technique modifies original MOE algorithm in that it uses DTs as experts. The fundamental difference with respect to traditional model comes from the fact that DTs do not rely on explicit and differentiable loss function which can be trained by gradient descent or Newton's methods. Instead, due to their discrete structure, they rely on a specific greedy training procedure. Therefore, the training of DTs has to be modified in order to take into account the attribution of instances to the experts given by coefficients h (k) ij, sometimes called responsibility of expert j for instance i. If these responsibilities were hard, meaning that each instance is assigned to strictly one expert, they would in partitioning the feature space into disjoint regions belonging to different experts. On the other hand, soft responsibilities are fractionally distributing each instance to different experts. The higher the responsibility of an expert j for an instance i, the higher the influence of that instance on that expert's training. 
In order to formulate this principle, we consider which way the instance influences construction of a tree. First, it affects the impurity measure computed when splitting the nodes and second, it influences probability estimates in the leaves of the tree. We address these two issues next. A commonly used impurity measure to determine splits in the tree is the Gini index. Let U be a set of indices of instances assigned to the node for which the split is being computed and D U set of corresponding instances. Let categorical outcomes of y be 1,..., C and for l = 1,..., C denote p l fraction of assigned instances for which it holds y = l. More formally, where I denotes indicator function of its argument expression and equals 1 if the expression is true. Then the Gini index G of the set D U is defined by: Considering that the assignment of instances to experts are fractional as defined by responsibility coefficients h (k) ij (which are provided to tree fitting function as weights of instances computed in line 5 of the algorithm), this definition has to be modified in that the instances assigned to the node should not be counted, but instead, their weights should be summed. Hence, we propose the following definition: and compute the Gini index for the set D U as G(p 1, . . .,p C). Similar modification can be performed for other impurity measures relying on distribution of outcomes of a categorical variable, like entropy. Note that while the instance assignments to experts are soft, instance assignments to nodes within an expert are hard, meaning sets of instances assigned to different nodes are disjoint. Probability estimate for y in the leaf node is usually performed by computing fractions of instances belonging to each class. In our case, the modification is the same as the one presented by Eq. 5. That way, estimates of probabilities P (y|x, θ j) needed by MOE are defined. In Algorithm 1, function f it tree performs decision tree training using the above modifications. We consider two ways to perform inference with respect to the obtained model. First one which we call MOËT, is performed by maximizing P (y|x, θ) with respect to y where this probability is defined by Eq. 1. The second way, which we call MOËT h, performs inference as arg max y P (y|x, θ arg max j gj (x,θg) ), meaning that we only rely on the most probable expert. Algorithm 1 MOËT training. for e ← 1 to N E do 4: Adaptation of MOËT to imitation learning. We integrate MOËT model into imitation learning approach of Viper by substituting training of DT with the MOËT training procedure. Expressiveness. Standard decision trees make their decisions by partitioning the feature space into regions which have borders perpendicular to coordinate axes. To approximate borders that are not perpendicular to coordinate axes very deep trees are usually necessary. MOËT h mitigates this shortcoming by exploiting hard softmax partitioning of the feature space using borders which are still hyperplanes, but need not be perpendicular to coordinate axes (see Section 3). This improves the expressiveness of the model. A MOËT h model is a combination of a linear model and several decision tree models. For interpretability which is preserved in Lipton's sense of transparency, it is important that a single DT is used for each prediction (instead of weighted average). 
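Concretely, one pass of Algorithm 1 first computes the responsibilities h_ij from the current gate and experts, and then refits each expert tree with those responsibilities as instance weights. The sketch below uses scikit-learn's sample_weight as a stand-in for the modified fit_tree procedure (the weighted Gini index and leaf estimates described above):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def responsibilities(X, Y, theta_g, expert_probs):
    # H[i, j]: posterior probability that expert j is responsible for (x_i, y_i),
    # i.e. gate probability times the expert's likelihood of y_i, normalized.
    H = np.zeros((len(X), theta_g.shape[0]))
    for i, (x, y) in enumerate(zip(X, Y)):
        logits = theta_g @ x
        gate = np.exp(logits - logits.max())
        gate = gate / gate.sum()
        likelihood = np.array([p(x)[y] for p in expert_probs])  # P(y | x, theta_j)
        joint = gate * likelihood
        H[i] = joint / joint.sum()
    return H

def fit_expert_trees(X, Y, H, max_depth=2):
    # Each expert is fit on the full dataset, with instance i weighted by H[i, j].
    experts = []
    for j in range(H.shape[1]):
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(X, Y, sample_weight=H[:, j])
        experts.append(tree)
    return experts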
Simultability of MOËT h consisting of DT and linear models is preserved because our models are small (2 ≤ depth ≤ 10) and we do not use high dimensional features , so a person can easily simulate the model. Similarly, decomposability is preserved because simple linear models without heavily engineered features and decision trees are decomposable and MOËT h is a simple combination of the two. Finally, algorithmic transparency is achieved because MOËT training relies on DT training for the experts and linear model training for the gate, both of which are well understood. However, the alternating refinement of initial feature space partitioning and experts makes the procedure more complicated, so our algorithmic transparency is partially achieved. Importantly, we define a well-founded translation of MOËT h models to SMT formulas, which opens a new range of possibilities for interpreting and validating the model using automated reasoning tools. SMT formulas provide a rich means of logical reasoning, where a user can ask the solver questions such as: "On which inputs do the two models differ?", or "What is the closest input to the given input on which model makes a different prediction?", or "Are the two models equivalent?", or "Are the two models equivalent in respect to the output class C?". Answers to these and similar questions can help better understand and compare models in a rigorous way. Also note that our symbolic reasoning of the gating function and decision trees allows us to construct SMT formulas that are readily handled by off-the-shelf tools, whereas direct SMT encodings of neural networks do not scale for any reasonably sized network because of the need for non-linear arithmetic reasoning. We now compare MOËT and Viper on four OpenAI Gym environments: CartPole, Pong, Acrobot and Mountaincar. For DRL agents, we use policy gradient model in CartPole, in other environments we use a DQN (training parameters provided in appendix). The rewards obtained by the agents on CartPole, Pong, Acrobot and Mountaincar are 200.00, 21.00, −68.60 and −105.27, respectively (higher reward is better). Rewards are averaged across 100 runs (250 in CartPole). Comparison of MOËT, MOËT h, and Viper policies. For CartPole, Acrobot, and Mountaincar environments, we train Viper DTs with maximum depths of {1, 2, 3, 4, 5}, while in the case of Pong we use maximum depths of {4, 6, 8, 10} as the problem is more complex and requires deeper trees. For experts in MOËT policies we use the same maximum depths as in Viper (except for Pong for which we use depths 1 to 9) and we train the policies for 2 to 8 experts (in case of Pong we train for {2, 4, 8} experts). We train all policies using 40 iterations of Viper algorithm, and choose the best performing policy in terms of rewards (and lower misprediction rate in case of equal rewards). We use two criteria to compare policies: rewards and mispredictions (number of times the student performs an action different from what a teacher would do). High reward indicates that the student learned more crucial parts of the teacher's policy, while a low misprediction rate indicates that in most cases student performs the same action as the teacher. In order to measure mispredictions, we run the student for number of runs, and compare actions it took to the actions teacher would perform. 
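As an illustration of the kind of query mentioned above ("On which inputs do the two models differ?"), the following toy sketch uses the z3 Python bindings to encode a hand-written two-expert MOËT_h policy over two features and to search for a state where it disagrees with a reference decision tree. All coefficients, feature bounds and tree splits here are invented for the example; real policies would be encoded the same way but with their learned parameters.

```python
from z3 import Reals, Solver, If, And, sat

p, v = Reals('p v')                     # two state features (e.g. position, velocity)

# Hard gating: expert 0 wins when its linear score is largest (exp is monotone,
# so comparing linear scores is equivalent to comparing the softmax gates).
score0 = 0.8 * p - 1.2 * v + 0.1
score1 = -0.5 * p + 0.9 * v

# Depth-1 decision-tree experts, each returning an action id.
expert0 = If(p <= 0.0, 1, 2)
expert1 = If(v <= 0.01, 0, 2)
moet_h = If(score0 >= score1, expert0, expert1)

# A reference policy (e.g. a single Viper decision tree) to compare against.
reference = If(p <= -0.2, 0, 2)

s = Solver()
s.add(And(p >= -1.2, p <= 0.6, v >= -0.07, v <= 0.07))   # feature bounds
s.add(moet_h != reference)                                # "where do the two models differ?"
if s.check() == sat:
    print("counterexample state:", s.model())
else:
    print("the two policies agree on the whole input region")
```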
To ensure comparable depths for evaluating Viper and MOËT models while accounting for the different number of experts in MOËT, we introduce the notion of effective depth of a MOËT model as log 2 (E) + D, where E denotes the number of experts and D denotes the depth of each expert. Table 1 compares the performance of Viper, MOËT and MOËT h. The first column shows the depth of Viper decision trees and the corresponding effective depth for MOËT, rewards and mispredictions are shown in R and M columns resp. We show of the best performing MOËT configuration for a given effective depth chosen based on average for rewards and mispredictions, where e.g. E3:D2 denotes 3 experts with DTs of depth 2. All shown are averaged across 10 runs 1. For CartPole, Viper, MOËT and MOËT h all achieve perfect reward with depths of 2 and greater. More interestingly, for depth 2 MOËT and MOËT h obtain significantly lower average misprediction rates of 0.84% and 0.91% respectively compared to 16.65% for Viper. Even for larger depths, the misprediction rates for MOËT and MOËT h remain significantly lower. For Pong, we observe that MOËT and MOËT h consistently outperform Viper for all depths in terms of rewards and mispredictions, whereas MOËT and MOËT h have similar performance. For Acrobot, we similarly notice that both MOËT and MOËT h achieve consistently better rewards compared to Viper for all depths. Moreover, the misprediction rates are also significantly lower for MOËT and MOËT h in majority of the cases. Finally, for Mountaincar as well, we observe that MOËT and MOËT h both consistently outperform Viper with significantly higher rewards and lower misprediction rates. Moreover, in both of these environments, we observe that both MOËT and MOËT h achieve comparable reward and misprediction rates. Additional are presented in appendix. Analyzing the learned Policies. We analyze the learned student policies (Viper and MOËT h) by visualizing their state-action space, the differences between them, and differences with the teacher policy. We use the Mountaincar environment for this analysis because of the ease of visualizing its 2-dimensional state space comprising of car position (p) and car velocity (v) features, and 3 allowed actions left, right, and neutral. We visualize DRL, Viper and MOËT h policies in Figure 2, showing the actions taken in different parts of the state space (additional visualizations are in appendix). The state space is defined by feature bounds p ∈ [−1.2, 0.6] and v ∈ [−0.07, 0.07], which represent sets of allowed feature values in Mountaincar. We sample the space uniformly with a resolution 200 × 200. The actions left, neutral, and right are colored in green, yellow, and blue, respectively. Recall that MOËT h can cover regions whose borders are hyperplanes of arbitrary orientation, while Viper, i.e. DT can only cover regions whose borders are perpendicular to coordinate axes. This manifests in MOËT h policy containing slanted borders in yellow and green regions to capture more precisely the geometry of DRL policy, while the Viper policy only contains straight borders. Furthermore, we visualize mispredictions for Viper and MOËT h policies. While in previous section we calculated mispredictions by using student policy for playing the game, in this analysis we visualize mispredictions across the whole state space by sampling. 
Note that in some states (critical states) it is more important to get the action right, while in other states choosing non-optimal action does not affect the overall score much. Viper authors make use of this observation to weight states by their importance, and they use difference between Q values of optimal and non-optimal actions as a proxy for calculating how important (critical) state is. Importance score is calculated as follows: I(s) = max a∈A Q(s, a)−min a∈A Q(s, a), where Q(s, a) denotes the Q value of action a in state s, and A is a set of all possible actions. Using I function we weight mispredictions by their importance. We create a vector i consisting of importance scores for sampled points, and normalize it to range. We also create a binary vector z which is 1 in the case of misprediction (student policy decision is different from DRL decision) and 0 otherwise. We visualize m = z i (element-wise multiplication), where higher value indicates misprediction of higher importance and is denoted by a red color of higher intensity. Such normalized mispredictions (m) for Viper and MOËT h policies are shown in Figure 2d and Figure 2e respectively. We can observe that the MOËT h policy has fewer high intensity regions leading to fewer overall mispredictions. To provide a quantitative difference between the mispredictions of two policies, we compute M = (j m j / j i j) · 100, which is measure in bounds such that its value is 0 in the case of no mispredictions, and 100 in the case of all mispredictions. For the policies shown in Figure 2d and Figure 2e, we obtain M = 15.51 for Viper and M = 11.78 for MOËT h policies. We also show differences in mispredictions between Viper and MOËT h (Figure 2f) Translating MOËT to SMT. We now show the translation of MOËT policy to SMT constraints for verifying policy properties. We present an example translation of MOËT policy on CartPole environment with the same property specification that was proposed for verifying Viper policies . The goal in CartPole is to keep the pole upright, which can be encoded as a formula: where s i represents state after i steps, φ is the deviation of pole from the upright position. In order to encode this formula it is necessary to encode the transition function f t (s, a) which models environment dynamics: given a state and action it returns the next state of the environment. Also, it is necessary to encode the policy function π(s) that for a given state returns action to perform. There are two issues with verifying ψ: infinite time horizon; and the nonlinear transition function f t. To solve this problem, use a finite time horizon T max = 10 and linear approximation of the dynamics and we make the same assumptions. To encode π(s) we need to translate both the gating function and DT experts to logical formulas. Since the gating function in MOËT h uses exponential function, it is difficult to encode the function directly in Z3 as SMT solvers do not have efficient decision procedures to solve non-linear arithmetic. The direct encoding of exponentiation therefore leads to prohibitively complex Z3 formulas. We exploit the following simplification of gating function that is sound when hard prediction is used: First simplification is possible since the denominators for gatings of all experts are same, and second simplification is due to the monotonicity of the exponential function. For encoding DTs we use the same encoding as in Viper. To verify that ψ holds we need to show that ¬ψ is unsatisfiable. 
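The importance-weighted misprediction measure M described above is straightforward to compute once the teacher's Q-values and both policies' actions are available on the sampled grid. A small NumPy sketch follows, with random stand-ins for the sampled states.

```python
import numpy as np

def importance_weighted_mispredictions(q_values, teacher_actions, student_actions):
    """Compute M: importance-weighted misprediction percentage over sampled states.

    q_values        : (n_states, n_actions) Q-values from the DRL teacher
    teacher_actions : teacher's chosen action per sampled state
    student_actions : student's (Viper / MOET_h) action per sampled state
    """
    # Importance score I(s) = max_a Q(s,a) - min_a Q(s,a), normalized to [0, 1].
    i = q_values.max(axis=1) - q_values.min(axis=1)
    i = (i - i.min()) / (i.max() - i.min() + 1e-12)
    # Binary misprediction indicator z and weighted mispredictions m = z * i.
    z = (teacher_actions != student_actions).astype(float)
    m = z * i
    return 100.0 * m.sum() / (i.sum() + 1e-12)

# Toy usage with random stand-ins for the sampled grid of states.
rng = np.random.default_rng(0)
q = rng.normal(size=(400, 3))
teacher = q.argmax(axis=1)
student = np.where(rng.random(400) < 0.1, (teacher + 1) % 3, teacher)   # 10% disagreements
print(importance_weighted_mispredictions(q, teacher, student))
```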
We run the verification with our MOËT h policies and show that ¬ψ is indeed unsatisfiable. To better understand the scalability of our verification procedure, we report on the verification times needed to verify policies for different number of experts and different expert depths in Figure 3. We observe that while MOËT h policies with 2 experts take from 2.6s to 8s for verification, the verification times for 8 experts can go up to as much as 319s. This directly corresponds to the complexity of the logical formula obtained with an increase in the number of experts. We introduced MOËT, a technique based on MOE with expert decision trees and presented a learning algorithm to train MOËT models. We then used MOËT models for interpreting DRL agent policies, where different local DTs specialize on different regions of input space and are combined into a global policy using a gating function. We showed that MOËT models lead to smaller, more faithful and performant representation of DRL agents compared to previous state-of-the-art approaches like Viper while still maintaining interpretability and verifiability. Algorithm 2 Viper training 1: procedure VIPER (MDP e, TEACHER π t, Q-FUNCTION Q πt, ITERATIONS N) 2: Initialize dataset and student: D ← ∅, π s0 ← π t 3: Sample trajectories and aggregate: Sample dataset using Q values: Train decision tree: return Best policy π s ∈ {π s1, ..., π s N}. Viper algorithm is shown in Algorithm 2. Here we present parameters we used to train DRL agents for different environments. For CartPole, we use policy gradient model as used in Viper. While we use the same model, we had to retrain it from scratch as the trained Viper agent was not available. For Pong, we use a deep Q-network (DQN) network , and we use the same model as in Viper, which originates from OpenAI baselines (OpenAI Baselines). For Acrobot and Mountaincar, we implement our own version of dueling DQN network following . We use 3 hidden layers with 15 neurons in each layer. We set the learning rate to 0.001, batch size to 30, step size to 10000 and number of epochs to 80000. We checkpoint a model every 5000 steps and pick the best performing one in terms of achieved reward. In this section we provide a brief description of environments we used in our experiments. We used four environments from OpenAI Gym: CartPole, Pong, Acrobot and Mountaincar. This environment consists of a cart and a rigid pole hinged to the cart, based on the system presented by Barto et al. . At the beginning pole is upright, and the goal is to prevent it from falling over. Cart is allowed to move horizontally within predefined bounds, and controller chooses to apply either left or right force to the cart. State is defined with four variables: x (cart position),ẋ (cart velocity), θ (pole angle), andθ (pole angular velocity). Game is terminated when the absolute value of pole angle exceeds 12 •, cart position is more than 2.4 units away from the center, or after 200 successful steps; whichever comes first. In each step reward of +1 is given, and the game is considered solved when the average reward is over 195 in over 100 consecutive trials. This is a classical Atari game of table tennis with two players. Minimum possible score is −21 and maximum is 21. This environment is analogous to a gymnast swinging on a horizontal bar, and consists of a two links and two joins, where the joint between the links is actuated. The environment is based on the system presented by Sutton . 
Initially both links are pointing downwards, and the goal is to swing the end-point (feet) above the bar for at least the length of one link. The state consists of six variables, four variables consisting of sin and cos values of the joint angles, and two variables for angular velocities of the joints. The action is either applying negative, neutral, or positive torque on the joint. At each time step reward of −1 is received, and episode is terminated upon successful reaching the height, or after 200 steps, whichever comes first. Acrobot is an unsolved environment in that there is no reward limit under which is considered solved, but the goal is to achieve high reward. This environment consists of a car positioned between two hills, with a goal of reaching the hill in front of the car. The environment is based on the system presented by Moore . Car can move in a one-dimensional track, but does not have enough power to reach the hill in one go, thus it needs to build momentum going back and forth to finally reach the hill. Controller can choose left, right or neutral action to apply left, right or no force to the car. State is defined by two variables, describing car position and car velocity. In each step reward of −1 is received, and episode is terminated upon reaching the hill, or after 200 steps, whichever comes first. The game is considered solved if average reward over 100 consecutive trials is no less than −110. In this section we provide visualization of a gating function. Figure 4 shows how gating function partitions the state space for which different experts specialize. Gatings of MOËT h policy with 4 experts and depth 1 are shown. E ADDITIONAL TABLES Table 2 shows similar to Table 1, but here in addition to averaging across multiple trained models, it averages across multiple MOËT configurations that have the same effective depth. Table 3 shows the of best performing DRL, MOËT and MOËT h models on the evaluation subjects.
Explainable reinforcement learning model using a novel combination of a mixture of experts with non-differentiable decision tree experts.
1,762
scitldr
We consider the problem of unconstrained minimization of a smooth objective function in $\mathbb{R}^d$ in setting where only function evaluations are possible. We propose and analyze stochastic zeroth-order method with heavy ball momentum. In particular, we propose, SMTP, a momentum version of the stochastic three-point method (STP). We show new complexity for non-convex, convex and strongly convex functions. We test our method on a collection of learning to continuous control tasks on several environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms and against policy gradient methods. SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments. Our second contribution is SMTP with importance sampling which we call SMTP_IS. We provide convergence analysis of this method for non-convex, convex and strongly convex objectives. In this paper, we consider the following minimization problem where f: R d → R is "smooth" but not necessarily a convex function in a Derivative-Free Optimization (DFO) setting where only function evaluations are possible. The function f is bounded from below by f (x *) where x * is a minimizer. Lastly and throughout the paper, we assume that f is L-smooth. DFO. In DFO setting; , the derivatives of the objective function f are not accessible. That is they are either impractical to evaluate, noisy (function f is noisy) or they are simply not available at all. In standard applications of DFO, evaluations of f are only accessible through simulations of black-box engine or software as in reinforcement learning and continuous control environments. This setting of optimization problems appears also in applications from computational medicine and fluid dynamics; Haslinger & Mäckinen; to localization Marsden et al. (2004; 2007) and continuous control; to name a few. The literature on DFO for solving is long and rich. The first approaches were based on deterministic direct search (DDS) and they span half a century of work;;. However, for DDS methods complexity bounds have only been established recently by the work of Vicente and coauthors;. In particular, the work of showed the first complexity on non-convex f and the were extended to better complexities when f is convex. However, there have been several variants of DDS, including randomized approaches; Karmanov (1974a; b);;;. Only very recently, complexity bounds have also been derived for randomized methods;;;;. For instance, the work of; imposes a decrease condition on whether to accept or reject a step of a set of random directions. derived new complexity bounds when the random directions are normally distributed vectors for both smooth and non-smooth f. They proposed both accelerated and non-accelerated zero-order (ZO) methods. Accelerated derivative-free methods in the case of inexact oracle information was proposed in. An extension of for non-Euclidean proximal setup was proposed by for the smooth stochastic convex optimization with inexact oracle. More recently and closely related to our work, proposed a new randomized direct search method called Stochastic Three Points (STP). At each iteration k STP generates a random search direction s k according to a certain probability law and compares the objective function at three points: current iterate x k, a point in the direction of s k and a point in the direction of −s k with a certain step size α k. 
The method then chooses the best of these three points as the new iterate: The key properties of STP are its simplicity, generality and practicality. Indeed, the update rule for STP makes it extremely simple to implement, the proofs of convergence for STP are short and clear and assumptions on random search directions cover a lot of strategies of choosing decent direction and even some of first-order methods fit the STP scheme which makes it a very flexible in comparison with other zeroth-order methods (e.g. two-point evaluations methods like in , , , that try to approximate directional derivatives along random direction at each iteration). Motivated by these properties of STP we focus on further developing of this method. 1 is a special technique introduced by Polyak in 1964 Polyak (1964 to get faster convergence to the optimum for the first-order methods. In the original paper, Polyak proved that his method converges locally with O L /µ log 1 /ε rate for twice continuously differentiable µ-strongly convex and L-smooth functions. Despite the long history of this approach, there is still an open question whether heavy ball method converges to the optimum globally with accelerated rate when the objective function is twice continuous differentiable, L-smooth and µ-strongly convex. For this class of functions, only non-accelerated global convergence was proved and for the special case of quadratic strongly convex and L-smooth functions Lessard et. al. recently proved asymptotic accelerated global convergence. However, heavy ball method performs well in practice and, therefore, is widely used. One can find more detailed survey of the literature about heavy ball momentum in Loizou & Richtárik. Importance Sampling. Importance sampling has been celebrated and extensively studied in stochastic gradient based methods or in coordinate based methods Richtárik & Takáč. Only very recently, proposed, STP_IS, the first DFO algorithm with importance sampling. In particular, under coordinate-wise smooth function, they show that sampling coordinate directions, can be generalized to arbitrary directions, with probabilities proportional to the function coordinate smoothness constants, improves the leading constant by the same factor typically gained in gradient based methods. Contributions. Our contributions can be summarized into three folds. • First ZO method with heavy ball momentum. Motivated by practical effectiveness of first-order momentum heavy ball method, we introduce momentum into STP method and propose new DFO algorithm with heavy ball momentum (SMTP). We summarized the method in Algorithm 1, with theoretical guarantees for non-convex, convex and strongly convex functions under generic sampling directions D. We emphasize that the SMTP with momentum is not a straightforward generalization of STP and Polyak's method and requires insights from virtual iterates analysis from. To the best of our knowledge it is the first analysis of derivative-free method with heavy ball momentum, i.e. we show that the same momentum trick that works for the first order method could be applied for zeroth-order methods as well. • First ZO method with both heavy ball momentum and importance sampling. In order to get more gain from momentum in the case when the sampling directions are coordinate directions and the objective function is coordinate-wise L-smooth (see Assumption 4.1), we consider importance sampling to the above method. 
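The update rule sketched above — draw a random direction, evaluate the objective at the current point and at two symmetric candidate steps, keep the best, and add a heavy-ball term — can be written compactly. The snippet below is an illustrative simplification: the precise bookkeeping of the iterates x^k, v^k and z^k in Algorithm 1, and the theoretical step-size choices, are not reproduced here.

```python
import numpy as np

def smtp_sketch(f, x0, n_iters=2000, gamma=0.1, beta=0.5, seed=0):
    """Illustrative derivative-free heavy-ball three-point method.

    Each iteration draws a random unit direction s, evaluates f at two candidate
    heavy-ball steps along +s and -s, and keeps the best candidate only if it
    improves on the current point.  This mirrors the spirit of STP/SMTP but is
    not the exact update rule of Algorithm 1.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        s = rng.normal(size=x.shape)
        s /= np.linalg.norm(s)                       # direction sampled from the unit sphere
        best_step, best_val = None, f(x)             # option to reject the move
        for step in (gamma * s, -gamma * s):
            val = f(x + beta * v + step)
            if val < best_val:
                best_step, best_val = step, val
        if best_step is not None:                    # accept only if f decreases
            v = beta * v + best_step
            x = x + v
    return x

print(smtp_sketch(lambda z: np.sum(z ** 2), x0=np.ones(5)))
```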
In fact, we propose the first zeroth-order Algorithm 1 SMTP: Stochastic Momentum Three Points Require: learning rates {γ where we propose an importance sampling that improves the leading constant marked in red. Note that r 0 = f (x 0) − f (x *) and that all assumptions listed are in addition to Assumption 2.1. Complexity means number of iterations in order to guarantee E ∇f (z K) D ≤ ε for the non-convex case, E f (z K) − f (x *) ≤ ε for convex and strongly convex cases. R 0 < ∞ is the radius in · * D -norm of a bounded level set where the exact definition is given in Assumption 3.2. We notice that for STP_IS · D = · 1 and · * D = · ∞ in non-convex and convex cases and · D = · 2 in the strongly convex case. momentum method with importance sampling (SMTP_IS) summarized in Algorithm 2 with theoretical guarantees for non-convex, convex and strongly convex functions. The details and proofs are left for Section 4 and Appendix E. • Practicality. We conduct extensive experiments on continuous control tasks from the MuJoCo suite following recent success of DFO compared to modelfree reinforcement learning;. We achieve with SMTP_IS the state-of-the-art on across all tested environments on the continuous control outperforming and policy gradient methods;. We provide more detailed comparison of SMTP and SMTP_IS in Section E.4 of the Appendix. We use · p to define p -norm of the vector for p ≥ 1 and As we mention in the introduction we assume throughout the paper 2 that the objective function f is L-smooth. Assumption 2.1. (L-smoothness) We say that f is L-smooth if From this definition one can obtain and if additionally f is convex, i.e. f (y) ≥ f (x) + ∇f (x), y − x, we have Our analysis of SMTP is based on the following key assumption. Assumption 3.1. The probability distribution D on R d satisfies the following properties: Some examples of distributions that meet above assumption are described in Lemma 3.4 from. For convenience we provide the statement of the lemma in the Appendix (see Lemma F.1). Recall that one possible view on is as following. If we substitute gradient ∇f (x k) in the update rule for the gradient descent by ±s k where s k is sampled from distribution D satisfied Assumption 3.1 and then select x k+1 as the best point in terms of functional value among we will get exactly STP method. However, gradient descent is not the best algorithm to solve unconstrained smooth minimization problems and the natural idea is to try to perform the same substitution-trick with more efficient first-order methods than gradient descent. We put our attention on Polyak's heavy ball method where the update rule could be written in the following form: By definition of z k+1, we get that the sequence {f (z k)} k≥0 is monotone: Now, we establish the key which will be used to prove the main complexity and remaining theorems in this section. Lemma 3.1. Assume that f is L-smooth and D satisfies Assumption 3.1. Then for the iterates of SMTP the following inequalities hold: and 3.1 NON-CONVEX CASE In this section, we show our complexity for Algorithm 1 in the case when f is allowed to be non-convex. In particular, we show that SMTP in Algorithm 1 guarantees complexity bounds with the same order as classical bounds, i.e. 1/ √ K where K is the number of iterations, in the literature. We notice that query complexity (i.e. number of oracle calls) of SMTP coincides with its iteration complexity up to numerical constant factor. For clarity and completeness, proofs are left for the appendix. Theorem 3.1. 
Let Assumptions 2.1 and 3.1 be satisfied. Let SMTP with γ k ≡ γ > 0 produce points {z 0, z 1, . . ., z K−1} and z K is chosen uniformly at random among them. Then Moreover, if we choose γ = γ0 √ K the complexity reduces to minimizes the right-hand side of and for this choice we have In other words, the above theorem states that SMTP converges no worse than STP for non-convex problems to the stationary point. In the next sections we also show that theoretical convergence guarantees for SMTP are not worse than for STP for convex and strongly convex problems. However, in practice SMTP significantly outperforms STP. So, the relationship between SMTP and STP correlates with the known in the literature relationship between Polyak's heavy ball method and gradient descent. In this section, we present our complexity for Algorithm 1 when f is convex. In particular, we show that this method guarantees complexity bounds with the same order as classical bounds, i.e. 1/K, in the literature. We will need the following additional assumption in the sequel. Assumption 3.2. We assume that f is convex, has a minimizer x * and has bounded level set at x 0: where From the above assumption and Cauchy-Schwartz inequality we get the following implication: Theorem 3.2 (Constant stepsize). Let Assumptions 2.1, 3.1 and 3.2 be satisfied. If we set γ k ≡ γ <, then for the iterates of SMTP method the following inequality holds: If we choose γ = and run SMTP for k = K iterations where In order to get rid of factor ln in the complexity we consider decreasing stepsizes. (1−β)R0 and θ ≥ 2 α, then for the iterates of SMTP method the following inequality holds: where (1−β)R0 and run SMTP for k = K iterations where We notice that if we choose β sufficiently close to 1, we will obtain from the formula that In this section we present our complexity for Algorithm 1 when f is µ-strongly convex. Assumption 3.3. We assume that f is µ-strongly convex with respect to the norm · * D: It is well known that strong convexity implies Theorem 3.4 (Solution-dependent stepsizes). Let Assumptions 2.1, 3.1 and 3.3 be satisfied. If we set, then for the iterates of SMTP, the following inequality holds: Then, If we run SMTP for k = K iterations where where is the condition number of the objective, we will get E f (z Note that the previous uses stepsizes that depends on the optimal solution f (x *) which is often not known in practice. The next theorem removes this drawback without spoiling the convergence rate. However, we need an additional assumption on the distribution D and one extra function evaluation. Assumption 3.4. We assume that for all s ∼ D we have s 2 = 1. Theorem 3.5 (Solution-free stepsizes). Let Assumptions 2.1, 3.1, 3.3 and 3.4 be satisfied. If additionally we compute, then for the iterates of SMTP the following inequality holds: Moreover, for any ε > 0 if we set t such that and run SMTP for k = K iterations where where In this section we consider another assumption, in a similar spirit to , on the objective. Assumption 4.1 (Coordinate-wise L-smoothness). We assume that the objective f has coordinatewise Lipschitz gradient, with Lipschitz constants where ∇ i f (x) is i-th partial derivative of f at the point x. For this kind of problems we modify SMTP and present STMP_IS method in Algorithm 2. In general, the idea behind methods with importance sampling and, in particular, behind SMTP_IS is to adjust probabilities of sampling in such a way that gives better convergence guarantees. 
In the case when f satisfies coordinate-wise L-smoothness and Lipschitz constants L i are known it is natural to sample direction s k = e i with probability depending on L i (e.g. proportional to L i). One can find more detailed discussion of the importance sampling in and Richtárik & Takáč. Now, we establish the key which will be used to prove the main complexity of STMP_IS. Lemma 4.1. Assume that f satisfies Assumption 4.1. Then for the iterates of SMTP_IS the following inequalities hold: and Due to the page limitation, we provide the complexity of SMTP_IS in the Appendix. Require: stepsize parameters w 1,..., w n > 0, probabilities p 1,..., p n > 0 summing to 1, starting point Select i k = i with probability p i > 0 Choose stepsize γ k i proportional to Experimental Setup. We conduct extensive experiments 3 on challenging non-convex problems on the continuous control task from the MuJoCO suit. In particular, we address the problem of model-free control of a dynamical system. Policy gradient methods for model-free reinforcement learning algorithms provide an off-the-shelf model-free approach to learn how to control a dynamical system and are often benchmarked in a simulator. We compare our proposed momentum stochastic three points method SMTP and the momentum with importance sampling version SMTP_IS against state-of-art DFO based methods as STP_IS and. Moreover, we also compare against classical policy gradient methods as and. We conduct experiments on several environments with varying difficulty Swimmer-v1, Hopper-v1, HalfCheetah-v1, Ant-v1, and Humanoid-v1. Note that due to the stochastic nature of problem where f is stochastic, we use the mean of the function values of f (, see Algorithm 1, over K observations. Similar to the work in , we use K = 2 for Swimmer-v1, K = 4 for both Hopper-v1 and HalfCheetah-v1, K = 40 for Ant-v1 and Humanoid-v1. Similar to , these values were chosen based on the validation performance over the grid that is K ∈ {1, 2, 4, 8, 16} for the smaller dimensional problems Swimmer-v1, Hopper-v1, HalfCheetah-v1 and K ∈ {20, 40, 80, 120} for larger dimensional problems Ant-v1, and Humanoid-v1. As for the momentum term, for SMTP we set β = 0.5. For SMTP_IS, as the smoothness constants are not available for continuous control, we use the coordinate smoothness constants of a θ parameterized smooth functionf θ (multi-layer perceptron) that estimates f. In particular, consider running any DFO for n steps; with the queried sampled 2. for further implementation details as we follow the same experimental procedure. In contrast to STP_IS, our method (SMTP) does not required sampling from directions in the canonical basis; hence, we use directions from standard Normal distribution in each iteration. For SMTP_IS, we follow a similar procedure as and sample from columns of a random matrix B. Similar to the standard practice, we perform all experiments with 5 different initialization and measure the average reward, in continuous control we are maximizing the reward function f, and best and worst run per iteration. We compare algorithms in terms of reward vs. sample complexity. Comparison Against STP. Our method improves sample complexity of STP and STP_IS significantly. Especially for high dimensional problems like Ant-v1 and Humanoid-v1, sample efficiency of SMTP is at least as twice as the STP. Moreover, SMTP_IS helps in some experiments by Table 2 to demonstrate complexity of each method. improving over SMTP. However, this is not consistent in all environments. 
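A minimal sketch of one SMTP_IS step under the w_i = L_i choice discussed above: coordinate directions are sampled with probability proportional to their smoothness constants and stepped with size proportional to 1/L_i. The momentum handling and the constants are simplifications rather than a faithful transcription of Algorithm 2.

```python
import numpy as np

def smtp_is_step(f, x, v, L, gamma=1.0, beta=0.5, rng=None):
    """One illustrative SMTP_IS step: sample coordinate e_i with probability
    proportional to L_i and use a step size proportional to 1/L_i."""
    rng = rng or np.random.default_rng()
    p = L / L.sum()                                  # importance-sampling probabilities p_i
    i = rng.choice(len(x), p=p)
    step = gamma / L[i]                              # larger steps along smoother coordinates
    e = np.zeros_like(x)
    e[i] = 1.0
    best_x, best_v, best_val = x, v, f(x)            # option to reject the move
    for d in (step, -step):
        trial_v = beta * v + d * e                   # heavy-ball update along +/- e_i
        trial_x = x + trial_v
        val = f(trial_x)
        if val < best_val:
            best_x, best_v, best_val = trial_x, trial_v, val
    return best_x, best_v

# Usage on a separable quadratic with very different coordinate smoothness.
L = np.array([100.0, 1.0, 1.0])
f = lambda z: 0.5 * np.dot(L, z ** 2)
x, v = np.ones(3), np.zeros(3)
for _ in range(500):
    x, v = smtp_is_step(f, x, v, L, gamma=0.5, beta=0.5)
print(x)
```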
We believe this is largely due to the fact that SMTP_IS can only handle sampling from canonical basis similar to STP_IS. Comparison Against State-of-The-Art. We compare our method with state-of-the-art DFO and policy gradient algorithms. For the environments, Swimmer-v1, Hopper-v1, HalfCheetah-v1 and Ant-v1, our method outperforms the state-of-the-art . Whereas for Humanoid-v1, our methods in a comparable sample complexity. We have proposed, SMTP, the first heavy ball momentum DFO based algorithm with convergence rates for non-convex, convex and strongly convex functions under generic sampling direction. We specialize the sampling to the set of coordinate bases and further improve rates by proposing a momentum and importance sampling version SMPT_IS with new convergence rates for non-convex, convex and strongly convex functions too. We conduct large number of experiments on the task of controlling dynamical systems. We outperform two different policy gradient methods and achieve comparable or better performance to the best DFO algorithm (ARS) on the respective environments. Assumption A.2. The probability distribution D on R d satisfies the following properties: 2 is positive and finite. 2. There is a constant µ D > 0 and norm We establish the key lemma which will be used to prove the theorems stated in the paper. Lemma A.1. Assume that f is L-smooth and D satisfies Assumption A.2. Then for the iterates of SMTP the following inequalities hold: and Proof. By induction one can show that That is, for k = 0 this recurrence holds and update rules for z k, x k and v k−1 do not brake it. From this we get Similarly, Unifying these two inequalities we get which proves. Finally, taking the expectation E s k ∼D of both sides of the previous inequality and invoking Assumption A.2, we obtain Theorem B.1. Let Assumptions A.1 and A.2 be satisfied. Let SMTP with γ k ≡ γ > 0 produce points {z 0, z 1, . . ., z K−1} and z K is chosen uniformly at random among them. Then Moreover, if we choose γ = γ0 √ K the complexity reduces to minimizes the right-hand side of and for this choice we have Proof. Taking full expectation from both sides of inequality we get Further, summing up the for k = 0, 1,..., K −1, dividing both sides of the obtained inequality by K and using tower property of the mathematical expectation we get The last part where γ = γ0 √ K is straightforward. Assumption C.1. We assume that f is convex, has a minimizer x * and has bounded level set at x 0: where Theorem C.1 (Constant stepsize). Let Assumptions A.1, A.2 and C.1 be satisfied. If we set γ k ≡ γ <, then for the iterates of SMTP method the following inequality holds: If we choose γ = and run SMTP for k = K iterations where then we will get E f (z Proof. From the and monotonicity of {f (z k)} k≥0 we have Taking full expectation, subtracting f (x *) from the both sides of the previous inequality and using the tower property of mathematical expectation we get Since γ < (1−β)R0 is positive and we can unroll the recurrence: Lastly, putting γ = in we have Next we use technical lemma from. We provide the original proof for completeness. Lemma C.1 (Lemma 6 from). Let a sequence {a k} k≥0 satisfy inequality and take C such that N ≤ αθ 4 C and a 0 ≤ C. Then, it holds Proof. We will show the inequality for a k by induction. Since inequality a 0 ≤ C is one of our assumptions, we have the initial step of the induction. 
To prove the inductive step, consider To show that the right-hand side is upper bounded by θC α(k+1)+θ, one needs to have, after multiplying both sides by (αk + θ)(αk + α + θ)(θC) which is equivalent to The last inequality is trivially satisfied for all k ≥ 0. (1−β)R0 and θ ≥ 2 α, then for the iterates of SMTP method the following inequality holds: where (1−β)R0 and run SMTP for k = K iterations where Proof. In we proved that Having that, we can apply Lemma C.1 to the sequence E f (z k) − f (x *). The constants for the lemma are: α 2 k+2 is equivalent to the choice θ = 2 α. In this case, we have αθ = 2,. Putting these parameters and K from in the we get the . Assumption D.1. We assume that f is µ-strongly convex with respect to the norm · * D: It is well known that strong convexity implies Theorem D.1 (Solution-dependent stepsizes). Let Assumptions A.1, A.2 and D.1 be satisfied. If we set, then for the iterates of SMTP the following inequality holds: If we run SMTP for k = K iterations where where κ def = L µ is the condition number of the objective, we will get E f (z Proof. From and Using θ = inf and taking the full expectation from the previous inequality we get Lastly, from we have ≤ ε., then for the iterates of SMTP the following inequality holds: Moreover, for any ε > 0 if we set t such that and run SMTP for k = K iterations where where Proof. Recall that from we have If we minimize the right hand side of the previous inequality as a function of γ k, we will get that the optimal choice in this sense is γ. However, this stepsize is impractical for derivative-free optimization, since it requires to know ∇f (z k). The natural way to handle this is to and that is what we do. We choose Next we estimate |δ k |: It implies that + Lt 2 8 and after taking full expectation from the both sides of the obtained inequality we get Note that from the tower property of mathematical expectation and Jensen's inequality we have Putting all together we get Lastly, from we have Again by definition of z k+1 we get that the sequence {f (z k)} k≥0 is monotone: Lemma E.1. Assume that f satisfies Assumption 4.1. Then for the iterates of SMTP_IS the following inequalities hold: and Proof. In the similar way as in Lemma A.1 one can show that and It implies that Unifying these two inequalities we get which proves. Finally, taking the expectation E[· | z k] conditioned on z k from the both sides of the previous inequality we obtain Theorem E.1. Assume that f satisfies Assumption 4.1. Let SMTP_IS with γ k i = γ wi k for some γ > 0 produce points {z 0, z 1, . . ., z K−1} and z K is chosen uniformly at random among them. Then Moreover, if we choose γ = in order to minimize right-hand side of, we will get Note that for Li with w i = L i we have that the rates improves to Proof. Recall that from we have Putting it in and taking full expectation from the both sides of obtained inequality we get Summing up previous inequality for k = 0, 1,..., K − 1 and dividing both sides of the by K, we get It remains to notice that As for SMTP to tackle convex problems by SMTP_IS we use Assumption 3.2 with Theorem E.2 (Constant stepsize). Let Assumptions 3.2 and 4.1 be satisfied. If we set γ, then for the iterates of SMTP_IS method the following inequality holds: and run SMTP_IS for k = K iterations where we will get E f (z Li with w i = L i, the rate improves to Proof. 
Recall that from we have and Putting it in and taking full expectation from the both sides of obtained inequality we get pi wi we have that the factor 1 − pi wi is nonnegative and, therefore, pi wi and k = K from in we have (1−β)R0 and θ ≥ 2 α, then for the iterates of SMTP_IS method the following inequality holds: where (1−β)R0 and run SMTP_IS for k = K iterations where Proof. In we proved that Under review as a conference paper at ICLR 2020 Having that, we can apply Lemma C.1 to the sequence E f (z k) − f (x *). The constants for the lemma are: α. In this case we have αθ = 2 and C = max f (x 0) − f (x *),. Putting these parameters and K from in the we get the . Theorem E.4 (Solution-dependent stepsizes). Let Assumptions 3.3 (with · D = · 1) and 4.1 be satisfied. If we set γ, then for the iterates of SMTP_IS method the following inequality holds: If we run SMTP_IS for k = K iterations where we will get E f (z Proof. Recall that from we have ≤ ε. The previous based on the choice of γ k which depends on the f (z k) − f (x *) which is often unknown in practice. The next theorem does not have this drawback and makes it possible to obtain the same rate of convergence as in the previous theorem using one extra function evaluation. Theorem E.5 (Solution-free stepsizes). Let Assumptions 3.3 (with · D = · 2) and 4.1 be satisfied. Li k t for t > 0, then for the iterates of SMTP_IS method the following inequality holds: Moreover, for any ε > 0 if we set t such that and run SMTP_IS for k = K iterations where we will get E f (z Proof. Recall that from we have 2(1 − β) 2. If we minimize the right hand side of the previous inequality as a function of γ k i, we will get that the optimal choice in this sense is γ. However, this stepsize is impractical for derivative-free optimization, since it requires to know ∇ i k f (z k). The natural way to handle this is to and that is what we do. We choose γ. From this we get Next we estimate |δ k i |: It implies that 8 and after taking expectation E · | z k conditioned on z k from the both sides of the obtained inequality we get Note that, and Putting all together we get Taking full expectation from the previous inequality we get Since µ ≤ L i for all i = 1,..., d we have Lastly, from we have ≤ ε 2 + ε 2 = ε. Here we compare SMTP when D is normal distribution with zero mean and Table 3 summarizes complexities in this case. We notice that for SMTP we have · D = · 2. That is why one needs to compare SMTP with SMTP_IS accurately. At the first glance, Table 3 says that for non-convex and convex cases we get an extra d factor in the complexity of SMTP_IS when L 1 =... = L d = L. However, it is natural since we use different norms for SMTP and SMTP_IS. In the non-convex case for SMTP we give number of iterations in order to guarantee E ∇f (z K) 2 ≤ ε while for SMTP_IS we provide number of iterations in order to guarantee E ∇f (z K) 1 ≤ ε. From Holder's inequality · 1 ≤ √ d · 2 and, therefore, in order to have E ∇f (z K) 1 ≤ ε for SMTP we need to ensure. That is, to guarantee E ∇f (z K) 1 ≤ ε SMTP for aforementioned distribution needs to perform Analogously, in the convex case using Cauchy-Schwartz inequality · 2 ≤ √ d · ∞ we have that R 0, 2 ≤ √ dR 0, ∞. Typically this inequality is tight and if we assume that R 0, ∞ ≥ C
We develop and analyze a new derivative-free optimization algorithm with momentum and importance sampling, with applications to continuous control.
1,763
scitldr
Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values. This similarity does not model the full rich knowledge of semantic relations that may be present between data points. In this work we build upon the idea of using semantic hierarchies to form distance metrics between all available sample labels; for example cat to dog has a smaller distance than cat to guitar. We combine this type of semantic distance into a loss function to promote similar distances between the deep neural network embeddings. We also introduce an empirical Kullback-Leibler divergence loss term to promote binarization and uniformity of the embeddings. We test the ing SHREWD method and demonstrate improvements in hierarchical retrieval scores using compact, binary hash codes instead of real valued ones, and show that in a weakly supervised hashing setting we are able to learn competitively without explicitly relying on class labels, but instead on similarities between labels. Content-Based Image Retrieval (CBIR) on very large datasets typically relies on hashing for efficient approximate nearest neighbor search; see e.g. BID12 for a review. Early methods such as (LSH) BID5 were data-independent, but Data-dependent methods (either supervised or unsupervised) have shown better performance. Recently, Deep hashing methods using CNNs have had much success over traditional methods, see e.g. Hashnet BID1, DADH. Most supervised hashing techniques rely on a pairwise binary similarity matrix S = {s ij}, whereby s ij = 1 for images i and j taken from the same class, and 0 otherwise. A richer set of affinity is possible using semantic relations, for example in the form of class hierarchies. BID13 consider the semantic hierarchy for non-deep hashing, minimizing inner product distance of hash codes from the distance in the semantic hierarchy. In the SHDH method, the pairwise similarity matrix is defined from such a hierarchy according to a weighted sum of weighted Hamming distances. In Unsupervised Semantic Deep Hashing , semantic relations are obtained by looking at embeddings on a pre-trained VGG model on Imagenet. The goal of the semantic loss here is simply to minimize the distance between binarized hash codes and their pre-trained embeddings, i.e. neighbors in hashing space are neighbors in pre-trained feature space. This is somewhat similar to our notion of semantic similarity except for using a pre-trained embedding instead of a pre-labeled semantic hierarchy of relations. BID14 consider class-wise Deep hashing, in which a clustering-like operation is used to form a loss between samples both from the same class and different levels from the hierarchy. Recently BID0 explored image retrieval using semantic hierarchies to design an embedding space, in a two step process. Firstly they directly find embedding vectors of the class labels on a unit hypersphere, using a linear algebra based approach, such that the distances of these embeddings are similar to the supplied hierarchical similarity. In the second stage, they train a standard CNN encoder model to regress images towards these embedding vectors. They do not consider hashing in their work. We also make use of hierarchical relational distances in a similar way to constrain our embeddings. 
However compared to our work, BID0 consider continuous representations and require the embedding dimension to equal the number of classes, whereas we learn compact quantized hash codes of arbitrary length, which are more practical for real world retrieval performance. Moreover, we do not directly find fixed target embeddings for the classes, but instead require that the neural network embeddings will be learned in conjunction with the network weights, to best match the similarities derived from the labels. And unlike BID14, in our weakly supervised SHREWD method, we do not require explicit class membership, only relative semantic distances to be supplied. Let (x, y) denote a training example pair consisting of an image and some (possibly weakly) supervised target y, which can be a label, tags, captions etc. The embeddings are defined asẑ = f θ (x) for a deep neural network f parameterized by weights θ. Instead of learning to predict the target y, we assume that there exists an estimate of similarity between targets, d(y, y). The task of the network is then to learn this similarity by attempting to match ẑ −ẑ with d(y, y) under some predefined norm in the embedding space. While in this work we use class hierarchies to implicitly inform our loss function via the similarity metric d, in general our formulation is weakly supervised in the sense that these labels themselves are not directly required as targets. We could equally well replace this target metric space with any other metric based on for instance web-mined noisy tag distances in a word embedding space such as GloVe or word2vec, as in BID4, or ranked image similarities according to recorded user preferences. In addition to learning similarities between images, it is important to try to fully utilize the available hashing space in order to facilitate efficient retrieval by using the Hamming distance to rank most similar images to a given query image. Consider for example a perfect ImageNet classifier. We could trivially map all 1000 class predictions to a 10-bit hash code, which would yield a perfect mAP score. The retrieval performance of such a "mAP-miner" model would however be poor, because the model is unable to rank examples both within a given class and between different classes BID3. We therefore introduce an empirical Kullback-Leibler (KL) divergence term between the embedding distribution and a (near-)binary target distribution, which we add as an additional loss term. The KL loss serves an additional purpose in driving the embeddings close to binary values in order to reduce the information loss due to binarizing the embeddings. We next describe the loss function, L(θ), that we minimize in order to train our CNN model. We break down our approach into the following 3 parts: DISPLAYFORM0 L cls represents a traditional categorical cross-entropy loss on top of a linear layer with softmax placed on the non-binarized latent codes. The meaning and use of each of the other two terms are described in more detail below. Similar to BID0 we consider variants with and without the L cls, giving variants of the algorithm we term SHREWD (weakly supervised, no explicit class labels needed) and SHRED (fully supervised). In order to weakly supervise using a semantic similarity metric, we seek to find affinity between the normalized distances in the learned embedding space and normalized distances in the semantic space. Therefore we define DISPLAYFORM0 where B is a minibatch size,... 
M denotes Manhattan distance (because in the end we will measure similarity in the binary space by Hamming distance), d (y b, y b) is the given ground truth similarity and w bb is an additional weight, which is used to give more weight to similar example pairs (e.g. cat-dog) than distant ones (e.g. cat-moon). τ z and τ y are normalizing scale factors estimated Note that while L cls performs best on supervised classification, L sim allows for better retrieval performance, however this is degraded unless L KL is also included to regularize towards binary embeddings. For measuring classification accuracy on methods that don't include L cls, we measure using a linear classifier with the same structure as in L cls trained on the output of the first network.from the current batch. We use a slowly decaying form for the weight, DISPLAYFORM1 with parameter values γ = 0.1 and ρ = 2. Our empirical loss for minimizing the KL divergence KL(p||q). = dzp(z) log(p(z)/q(z)) between the sample embedding distribution p(z) and a target distribution q(z) is based on the Kozachenko-Leonenko estimator of entropy BID7, and can be defined as DISPLAYFORM0 where ν(ẑ b ; z) denotes the distance ofẑ b to a nearest vector z b, where z is a sample (of e.g. size B) of vectors from a target distribution. We employ the beta distribution with parameters α = β = 0.1 as this target distribution, which is thus moderately concentrated to binary values in the embedding space. The is that our embedding vectors will be regularized towards uniform binary values, whilst still enabling continuous backpropagation though the network and giving some flexibility in allowing the distance matching loss to perform its job. When quantized, the ing embeddings are likely to be similar to their continuous values, meaning that the binary codes will have distances more similar to their corresponding semantic distances. Metrics As discussed in section 2, the mAP score can be a misleading metric for retrieval performance when using class information only. Similarly to other works such as BID2 BID0, we focus on measuring the retrieval performance taking semantic hierarchical relations into account by the mean Average Hierarchical Precision (mAHP). However more in line with other hashing works, we use the hamming distance of the binary codes for ranking the retrieved . We first test on CIFAR-100 BID8 using the same semantic hierarchy and Resnet-110w architecture as in BID0, where only the top fully connected layer is replaced to return embeddings at the size of the desired hash length. See TAB2 for comparisons with previous methods, an ablation study, and effects of hash code length. ILSVRC 2012 We also evaluate on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset. For similarity labels, we use the same tree-structured WordNet hierarchy as in BID0. We use a standard Resnet-50 architecture with a fully connected hashing layer as before. Retrieval are summarized in TAB3. We compare the ing Hierarchical Precision scores with and without L KL, for binarized and continuous values in FIG1. We see that our improve on the previously reported hierarchical retrieval whilst using quantized embeddings, enabling efficient retrieval. We see a substantial drop in the precision after binarization when not using the KL loss. Also binarization does not cause as severe a drop in precision when using the KL loss. 
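A PyTorch sketch of the two regularizers described above, written for a single minibatch, is given below. Because the exact decaying weight w and the precise Kozachenko-Leonenko-based loss are not reproduced in the text, the sketch substitutes a simple decaying weight 1/(γ + d^ρ) and a single nearest-neighbor log-distance term toward Beta(0.1, 0.1) target samples; both substitutions are assumptions made only for illustration.

```python
import torch

def shrewd_losses(z, d_sem, gamma=0.1, rho=2.0):
    """Sketch of the SHREWD similarity and empirical-KL losses for one minibatch.

    z     : (B, K) real-valued embeddings from the hashing layer
    d_sem : (B, B) pairwise semantic (hierarchical) distances between targets
    """
    B, K = z.shape
    dz = torch.cdist(z, z, p=1)                      # Manhattan distances in embedding space
    tau_z = dz.mean().detach() + 1e-8                # batch-estimated normalizers
    tau_y = d_sem.mean() + 1e-8
    w = 1.0 / (gamma + d_sem.pow(rho))               # emphasize semantically close pairs
    l_sim = (w * (dz / tau_z - d_sem / tau_y).abs()).mean()

    # Empirical KL regularizer: pull embeddings toward samples from a near-binary
    # target distribution (Beta(0.1, 0.1)) via nearest-neighbor distances.
    target = torch.distributions.Beta(0.1, 0.1).sample((B, K)).to(z.device)
    nu = torch.cdist(z, target, p=1).min(dim=1).values
    l_kl = torch.log(nu + 1e-8).mean()               # simplified single-term estimate

    return l_sim, l_kl

# Toy usage: random embeddings and a random symmetric semantic-distance matrix.
B, K = 8, 16
z = torch.rand(B, K, requires_grad=True)
d_sem = torch.rand(B, B)
d_sem = (d_sem + d_sem.t()) / 2
d_sem.fill_diagonal_(0)
l_sim, l_kl = shrewd_losses(z, d_sem)
(l_sim + 0.1 * l_kl).backward()
```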
We approached deep hashing for retrieval, introducing novel combined loss functions that balance code binarization against matching embedding distances to distances derived from hierarchical semantic relations. We have demonstrated new state-of-the-art results for semantic-hierarchy-based image retrieval (mAHP scores) on CIFAR and ImageNet with both our fully supervised (SHRED) and weakly supervised (SHREWD) methods.
We propose a new method for training deep hashing for image retrieval using only a relational distance metric between samples.
1,764
scitldr
In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: (i) inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; (ii) segmentation with a low-distortion region-growing process based on flattening the segmented parts; (iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and (iv) aggregation of the mappings from segments to update the surface mapping. In addition, we propose a new method to deform the mesh in order to meet constraints (in our case, the landmark alignment of phase (iii)). We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion. Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other low-distortion deformation methods. The approach is general, and we tested it by improving the mappings from different existing surface mapping methods. We also tested its effectiveness by editing the mappings for a variety of 3D objects. C OMPUTING a cross-surface mapping between two surfaces (cross-parameterization) is a fundamental problem in digital geometric processing. A wide range of methods have been developed to find such mappings,,, but no single method in a perfect mapping in every case. Quite often, the mapping may be good overall, but some specific, sometimes subtle, semantic features, such as articulations and facial features, may remain misaligned, as illustrated in Figure 1. These imperfections of the final are often unacceptable in a production setting where the artist needs a high degree of control over the final , and will often sacrifice automation of a method for higher control. Typically, improving using surface mapping methods requires the user to iteratively insert some landmarks and solve for the mapping globally. However, since the imperfections are typically localized to a specific region, a local solution that does not change the mapping globally would be preferred in order to ensure that the method does not introduce artifacts elsewhere on the map. This paper proposes a surface mapping editing approach providing local and precise control over the map adjustments. The process begins with the inspection of an existing vertex-topoint surface mapping between two meshes. In regions where the mapping exhibits some discrepancy, the user sets landmarks positioned at corresponding locations on both meshes. For each such region, we extract a patch on both meshes in order to localize the changes in the mapping, and we flatten them on a common planar domain. The mapping is improved based on a 2D deformation optimization that steers the landmarks toward correspondence while limiting distortion and having theoretical guarantees to maintain the local injectivity of the map. We developed a new 2D deformation approach denoted Iterative Least Squares Conformal Maps (ILSCM), which iteratively minimizes a conformal energy, each iteration ensuring that flips do not occur, and in practice, ensuring progress toward satisfying the constraints. We chose to work with a conformal energy as we want to be able to improve mappings where the deformation between the pair of meshes is not isometric. 
Our editing approach can successfully align the mapping around landmarks without any degradation of the overall mapping. The local surface maps are extracted from their respective deformed segments and parameterization domains, and are then combined to form an improved global surface mapping. Our approach solves an important practical problem and offers three novel scientific contributions. The first is a practical approach for local surface map editing, which we show, using both qualitative and quantitative metrics, provides better results than other state-of-the-art methods. The second involves a compact segmentation which results in a compromise between a low-distortion flattening and a low-distortion deformation when aligning the landmarks. The third is a new deformation approach, ILSCM, which preserves conformal energy better than other state-of-the-art methods, and that has theoretical guarantees preventing the introduction of foldovers. While a lot of research has been done on creating correspondences between 3D objects, comparatively fewer methods have been proposed on correspondence editing. Nguyen et al. measure and optimize the consistency of sets of maps between pairs belonging to collections of surfaces. They compute a score for the map, and then apply an optimization to iteratively improve the consistency. The limitation of their method lies in its requirement of having multiple maps instead of a single map for a pair of surfaces. Ovsjanikov et al. propose the functional maps representation to establish correspondences between surfaces based on Laplacian eigenfunctions rather than points. Working in that smooth basis function space makes it easy and efficient to generate smooth mappings, but significant modifications to the underlying method would be required to allow local adjustments of the mapping guided by the user. Another limitation is that their method is limited to near-isometric surfaces since non-isometric deformation overrides the assumption that the change of basis matrix is sparse. Ezuz and Ben-Chen remove the isometric restriction in their proposed method for the deblurring and denoising of functional maps. They smooth a reconstructed map by mapping the eigenfunctions of the Laplacian of the target surface in the span of the source eigenfunctions. Their technique can be incorporated into existing functional mapping methods, but selecting the right number of eigenfunctions to perform the denoising is difficult. Compared with a ground truth mapping, increasing the number of eigenfunctions decreases the error until a minimum is reached, but adding more eigenfunctions beyond this point increases the error. While this can be observed on a ground truth mapping, there are no methods to achieve the minimum error for an arbitrary mapping. Gehre et al. incorporate curve constraints into the functional map optimization to update the mapping between non-isometric surfaces. They provide an interactive process by proposing a numerical method which optimizes the map with immediate feedback. While their method is not limited to isometric surfaces, it does however need several curve constraints to obtain a meaningful functional map. Vestner et al. improve dense mappings even in the case of non-isometric deformations. Their method is an iterative filtering scheme based on the use of geodesic Gaussian kernels.
An important restriction of their method is that it requires both surfaces to be discretized with the same number of vertices and vertex densities. Panozzo et al. propose the weighted averages (WA) on surfaces framework. They use WA with landmarks in order to define mappings and then the user can improve the mapping by adjusting the landmarks. Although WA generates good mapping results, its improved mapping application cannot use a mapping as input. Since most state-of-the-art methods improve the mapping globally, this makes it hard for the user to fine-tune the mapping without risking modifying areas that should not be affected. Furthermore, some methods face significant limitations such as being constrained to isometric deformations and requiring compatible meshes on both surfaces. An alternative way to frame the correspondence editing problem is as a deformation method in a planar parameterization space. Least Squares Conformal Maps (LSCM) apply a deformation energy which contains a term for preserving surface features and a term for position constraints. In contrast, Jacobson et al. provide a smooth and shape-preserving deformation using biharmonic weights. The main drawback of the LSCM and biharmonic weights methods is that they can introduce fold-overs while deforming the mesh. Injectivity is a key property we want to achieve in our mapping editing, but extracting a mapping from meshes with fold-overs breaks this property. The method of Chen and Weber avoids fold-overs by applying inequality constraints solely to the boundaries. Locally Injective Mappings (LIM) and Scalable Locally Injective Mappings (SLIM) propose a strategy where the energy of a triangle tends to infinity as the triangles become degenerate, and thus, any locally minimal solution will be by construction exempt of fold-overs. While this is an elegant approach, the problem is that in some cases where, due to the user constraints, some triangles come close to becoming degenerate, these triangles carry a disproportionately high share of the total energy as compared to the rest of the triangles. For our mapping editing application, such cases occur frequently, and we observed that in their presence, LIM and SLIM often produce an inferior result both qualitatively and quantitatively. Golla et al. outperform LIM and SLIM by modifying the Newton iteration for the optimization of nonlinear energies on triangle meshes. They analytically project the per-element Hessians to positive semidefinite matrices for efficient Newton iteration and apply global scaling to the initialization. In this work, we propose to edit a surface mapping by locally adjusting the mapping in a bid to align landmarks set by the user. To move the landmarks toward their expected positions, we deform a local segmented patch of the mesh. We found that current deformation methods had drawbacks (flipped triangles and high distortion) forbidding their use in our mapping editing framework. We thus derived a new deformation approach that iteratively minimizes a conformal energy, making sure that in each iteration we have no flipped triangles. More specifically, our ILSCM approach optimizes the quadratic LSCM energy, but it relaxes the user constraints to avoid flips. Therefore, after each iteration, the user constraints may not be satisfied, but by repeating the process, we reach a configuration that has low conformal energy (lower than LIM or SLIM), and the user constraints are guaranteed to be better satisfied than initially.
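The iteration logic can be summarized in a short sketch. This is a minimal illustration rather than the authors' implementation: the LSCM solver, the constraint container, and the flip test are assumed helper components, and the relaxation mechanism (reducing the soft-constraint weight whenever a step would introduce a flip) follows the scheme detailed later in the text.

```python
def has_flipped_triangles(uv, faces):
    """Hypothetical helper: a triangle is flipped if its signed 2D area is non-positive."""
    for i, j, k in faces:
        ax, ay = uv[i]
        bx, by = uv[j]
        cx, cy = uv[k]
        if (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) <= 0.0:
            return True
    return False

def ilscm(uv0, faces, solve_weighted_lscm, constraints, lam=1000.0, eps=1e-3, max_iters=1000):
    """Iterative LSCM sketch: repeat soft-constrained LSCM solves, relaxing the
    constraint weight lam whenever a solve would introduce a flipped triangle."""
    uv = [list(p) for p in uv0]
    for _ in range(max_iters):
        candidate = solve_weighted_lscm(uv, faces, constraints, lam)  # one LSCM step
        while has_flipped_triangles(candidate, faces):
            lam *= 0.5                                                 # relax the user constraints
            candidate = solve_weighted_lscm(uv, faces, constraints, lam)
        uv = candidate                                                 # accept the flip-free step
        if constraints.max_residual(uv) < eps:                         # landmarks close enough
            break
    return uv
```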
In practice, the user constraints are always satisfied up to a user-provided epsilon. Our approach opens the door to a new family of deformation methods that optimize the deformation by effectively finding an optimal flow of the vertices based on conformal energy minimization. As explained in the introduction, mappings computed by state-of-the-art (automatic) methods are often globally good, but locally wrong in a few areas. We provide an approach to locally improve the surface mapping. The user typically inspects the mapping visually through texture transfer. In local regions where the mapping should be improved, the user sets landmarks at locations that should correspond on the pair of surfaces (Fig. 2a). We edit the mapping by deforming parts of the meshes with respect to each other (Fig. 2c) to improve the alignment of the user-provided landmarks, and then we rebuild the mapping from the deformed parts (Fig. 2d). Our main goal is to obtain a low distortion of the meshes at each phase of our approach. To deform the mapping with a good control on distortion, we conduct a planar parameterization of the meshes. Since the planar parameterization of smaller segments of a mesh leads to less distortion versus when computed for the entire mesh, we do a segmentation based on user-identified local regions where the mapping needs to be updated. Afterwards, we deform one segment with respect to the other in the planar parameterization space. The deformation is aided by our new ILSCM, ensuring that the deformation causes limited distortion on the meshes. Finally, the mapping is extracted based on how segments overlap each other. Our approach has four key phases: landmark creation, segmentation, deformation, and mapping extraction. Our method works with input meshes A and B, along with a vertex-to-point surface mapping between the meshes. The mapping links each vertex of a mesh to a barycentric coordinate on the other mesh. It should be noted that our approach works regardless of the initial method used to establish the surface mapping, as long as a dense mapping is provided. Given a mapping, the user will visualize it using texture maps to identify mismatching regions. These correspond to isolated zones of the meshes where the mapping is incorrect. For each region, the user sets corresponding landmarks on both meshes at locations that should match each other. The landmarks for each region i, L A (i) and L B (i), are expressed as barycentric coordinates on A and B, respectively. The user provides hints where the mapping needs to be modified by setting pairs of landmarks on both meshes. In order to keep the map editing local, a segment is identified on both meshes where the map editing will be performed. Computing such a segment is not trivial as there are a number of requirements: the segment should be a single connected component with disk topology, should be compact, and should contain all the landmarks of the region i. The size of the segment is also important. If the segment is too large we may lose locality, but if it is too small, we may introduce further distortion if the vertices need to move over a long distance. We assume that outside these segments, the mapping is satisfactory, and it can be used to set boundary conditions when deforming a segment with respect to the other to align the landmarks. Our segmentation method has three steps. In the first step (Fig.
3a), we grow an initial patch on the 3D surface from the landmarks, ensuring that it is one connected component, that it encloses all of the landmarks, as well as the positions corresponding to the landmarks from the other mesh. We flatten this patch in 2D (Fig. 3b), where we have more tools available to control the size and shape of the patch. In the second step, we compute a compact 2D patch from the convex hull of the landmarks in the 2D space (Fig. 3c), and ensure that we fill any artificial internal boundaries. In the third step, we grow the compact patch from the previous step to allow enough room between the boundary and the landmarks (Fig. 3d), preparing the ground for a low-distortion deformation phase (Sec. 3.3). This segmentation is applied to each region of meshes A and B independently. We now explain in more detail the process as applied to one region i of mesh A, but the same is also conducted for mesh B and other regions. Based on the mapping from mesh B to A, corresponding landmark positions CP B→A (i) are calculated on A for each landmark of L B (i). The goal of the first step is to extract an initial patch and to flatten it. To meet the two conditions of having a single connected component and containing all of the landmarks from L A (i) and CP B→A (i), we compute the union of face groups obtained by identifying faces around each landmark of L A (i) and around each corresponding position of CP B→A (i). For each, we iteratively add rings of faces until the group of faces contains at least half plus one of the landmarks from L A (i) and CP B→A (i). The requirement to include half plus one of the landmarks ensures that when we combine the groups of faces, this initial patch meets the two conditions. This procedure results in a "disk with holes" topology, which is sufficient to flatten the patch using ABF++. One disadvantage of the initial patch is that it can contain concavities, and even internal "holes" with polygons from the full mesh missing. From the initial patch of step one, the second step extracts a compact patch that surrounds the landmarks. To this end, we identify the convex hull of the landmarks in the 2D parameterization space. Then, we only consider the faces which have at least one of their vertices within the convex hull (faces identified in black in Fig. 3c). The use of the convex hull results in a patch exempt of large concavities in its boundary. Nevertheless, depending on the meshes and the arrangement of landmarks, some of the initial patches have "holes" with polygons from the full mesh missing, creating artificial internal boundaries in the patch (Fig. 4). We add the missing faces by analyzing the inner boundaries (holes). Filling the hole by adding the missing faces from the full mesh has the advantage of preventing unwanted deformation that would result from such artificial boundaries, and it ensures that there are no internal areas within the region where the mapping would not be adjusted. The third step tries to balance the conflicting goals of having a small versus a large patch. As can be seen in Fig. 5, the larger the patch, the greater the distortion between the patch in 3D and after flattening to 2D. The distortion would be even higher if flattening the whole mesh (Fig. 6). Conversely, a smaller patch means that the landmarks are closer to the boundary, and the deformation that aligns the landmarks will induce more distortion to the triangles between the boundary and the landmarks.
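Before turning to the third step (growing the compact patch, discussed next), a rough sketch of the first two segmentation steps may help fix the ideas. All helpers here (one_ring_faces, count_covered_landmarks, convex_hull, point_in_hull, fill_internal_holes, and the flatten routine standing in for ABF++) are hypothetical placeholders, not functions from the authors' code:

```python
def initial_patch(mesh, landmark_faces, corresponding_faces):
    """Step one (sketch): around each landmark and each corresponding position,
    grow rings of faces until the group covers at least half plus one of all
    landmarks, then take the union of the groups."""
    targets = landmark_faces + corresponding_faces
    needed = len(targets) // 2 + 1
    patch = set()
    for seed in targets:
        group = {seed}
        while count_covered_landmarks(mesh, group, targets) < needed:
            group |= one_ring_faces(mesh, group)      # add one more ring of faces
        patch |= group
    return patch

def compact_patch(mesh, patch, landmarks_2d, flatten):
    """Step two (sketch): flatten the initial patch, keep the faces with at least
    one vertex inside the 2D convex hull of the landmarks, then fill any
    artificial internal holes."""
    uv = flatten(mesh, patch)                          # planar parameterization
    hull = convex_hull(landmarks_2d)
    kept = {f for f in patch
            if any(point_in_hull(uv[v], hull) for v in mesh.face_vertices(f))}
    return fill_internal_holes(mesh, kept)             # avoid artificial boundaries
```

The third step then balances the small-versus-large patch trade-off just discussed.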
We thus want to grow the compact patch to ensure that there is enough room around each landmark and corresponding landmark position pair to diffuse the distortion from the deformation phase. We also know that as we get closer to the landmarks, we are getting closer to the areas where the mapping is wrong, and as such, extending outwards is necessary in order to have a good boundary condition for the deformation. Regarding how far away the patch should be extended, we use a distance proportional to the geodesic distance (on the 3D mesh) between the landmark and its corresponding landmark position, adding faces that are within that distance from each landmark of the pair. We compared the distortion between the 3D patch and the 2D patch for 1 to 10 times the geodesic distance. Fig. 7 shows a pattern where very small patches do not have enough room to move the landmarks without high distortion. At the same time, patches that are too large also exhibit large distortion because of the flattening from 3D to 2D. A good compromise between the two sources of distortion is around two times the geodesic distance. It is necessary to apply steps one and two, because step three alone could lead to disconnected components or artificial internal boundaries that do not exist in the mesh, but that exist in the patch because of the patch growth process (Fig. 4). The sequence of steps one to three provides the final segments from A, {A 1, A 2, . . ., A n}, and the same for mesh B, yielding {B 1, B 2, . . ., B n}. These final segments are flattened through ABF++ and we refer to them as A 1, A 2,..., A n and B 1, B 2,..., B n. We selected ABF++ to flatten our segments as we can assume it will yield a parameterization exempt of flipped triangles (injective). Our main observation is that the initial surface mapping is globally adequate, but wrong in localized regions. With this assumption, we line up the boundary of the regions by relying on the surface mapping. We will then deform the interior to align the landmarks while keeping a low distortion. As we have two segmented meshes, there are two ways to align the landmarks: deform A i on B i or B i on A i. We select the one with the lower L 2 distortion (between the segment in 3D and 2D), keeping it fixed and deforming the other. Here, we will explain the deformation of B i with respect to A i, but the deformation of A i with respect to B i proceeds in the same way. We deform B i in order to implicitly adjust the mapping by applying an energy minimization. This is achieved by using positional user constraints (E L, aligning the landmarks, and E B, aligning the boundary) coupled with a distortion-preventing regularization (E D, globally deforming B i with low distortion), leading to the minimization of an energy E(V) that combines the distortion term E D with the λ-weighted constraint terms E L and E B (Eq. 1). The user constraints are imposed as soft constraints (Eq. 2), where k(i) are the landmarks of the segment i; l a j are the landmarks on A i; vertices v (j,1), v (j,2), and v (j,3) correspond to the three vertices of the triangle on B i containing the related landmark l b j; and β (j,1), β (j,2), and β (j,3) are the barycentric coordinates. We use Ω(B i) to denote the set of vertices on the boundary of B i, and map(v j) to denote the corresponding position of v j on A i based on the mapping. The energy E B pulls the vertices of the boundary of B i to the positions on A i where they correspond given the mapping. When λ is small, the map will be injective, but the constraints are generally not satisfied (ultimately, if λ = 0, B i stays the same).
Conversely, when λ is large, the user constraints are satisfied, but flips may be introduced (λ = ∞ corresponds to using hard constraints). An ideal deformation energy E D (V) must meet three criteria: preserving the shape, maintaining the injectivity of the mapping (i.e., no flipped triangles), and satisfying the user constraints as much as possible. (Fig. 8 shows an example where we want to adjust the mapping by moving the red landmarks to the positions of the corresponding green landmarks; all deformations were done with the same patch boundary, the one from the ABF parameterization.) For shape preservation, we experimented with several E D: LSCM, LIM, SLIM, and KP-Newton. Each of the energies has a number of pros and cons. LSCM preserves the shape the best, but tends to introduce flips, as illustrated in Fig. 8 (c). LIM, SLIM, and KP-Newton, on the other hand, guarantee injectivity (no flips), but introduce more distortion (between B i before and after deformation) than LSCM. The graph in Fig. 9 illustrates these observations: LSCM has the least distortion, but flipped triangles would destroy the injectivity of the mapping. LIM, SLIM, and KP-Newton have no flips, but overall, they have more distortion as compared to LSCM. LIM minimizes a joint energy where one term optimizes the distortion and the second term optimizes the flips. In such joint optimization frameworks, no one term may be close to a minimum of its own, as shown in Fig. 9, where the results from LIM are worse in terms of the distortion energy than LSCM. Fig. 10 compares results in terms of the residual error of the LSCM energy for fixed values of λ; the horizontal axis labels present λ and the number of iterations (λ, #iter), and the black bars highlight results with flipped triangles. Since all these methods have shortcomings, we propose an approach that bridges the gap between the shape preservation of the original LSCM formulation and the injectivity preservation of LIM, SLIM, and KP-Newton. Our approach, Iterative LSCM (ILSCM), is a different approach where we iteratively optimize to decrease E(V), while preventing flips from occurring. ILSCM performs iterative LSCM steps. The first iteration uses the cotangent weights from segment B i. The deformed segment from the first iteration is then used to set the weights for the second iteration, and so on. At each iteration, if a triangle flip is detected, we decrease the value of λ and redo the same iteration. This way, we are guaranteed to eventually find a λ that prevents flips from occurring. We will now explain how we adaptively adjust λ to guarantee that we have no flips, while making as much progress as possible toward achieving the user constraints. In order to measure if the constraints are satisfied, we consider the initial maximal distance between any landmark and corresponding landmark position pair, dist 0 = max j ‖l A (i) j − cp B→A (i) j‖, and iterate until the current maximal distance is below the threshold ε = dist 0 /250. Appendix A demonstrates that since the progression of landmarks is continuous with respect to λ, the approach will always find a λ that prevents having any flips and that enables progress toward the user constraints. The progress could asymptotically stop, but in all cases, we are guaranteed to prevent triangles from flipping and we limit the mesh distortion. For the mapping adjustment application, all of the examples we tested converged to meet the user constraints. A larger λ will converge faster, but increases the likelihood of flipped triangles (Fig.
10, black bars). A small λ decreases the probability of flipped triangles, but increases the number of iterations needed to satisfy the user constraints. As can be seen in Fig. 10, whether using a small or large λ, the conformal residual is almost the same and it plateaus for smaller values of λ. Consequently, even in the theoretical case where our approach would take very small incremental steps, the solution remains valid in the sense that it meets the constraints and the conformal residual remains close to the solution with larger steps. In our experiments, we start with λ = 1000. After each iteration, we automatically detect if there are flipped triangles, and if so, we redo the deformation of the iteration with λ = λ /2. Fig. 11a demonstrates that the movement of a landmark is continuous with respect to different values of λ. Fig. 11b further shows that even with small values of λ, we make progress toward satisfying the constraints. We see that we make progress toward satisfying the user constraints even for small values of λ. We also see that it is to our advantage to begin with a large value of λ to reduce the number of iterations before convergence. As the last phase of our approach, we update the surface mapping between A and B from the planar parameterizations. We first extract the mappings from each pair (B i, A i). Then, we aggregate and transfer them to A and B. With the mapping being expressed as barycentric coordinates on the other mesh, we can update it by simply getting the barycentric coordinates of vertices from B i to faces of A i and vice-versa. We validate our approach with various cases of faces, as well as with a wider range of objects with different morphologies from different data sets and artist contributions (see Table 1). Experiments are presented based on different initial mapping methods: orbifold tutte embeddings, Elastiface, deformation transfer, functional mapping method, WA method, and joint planar method. The number of landmarks and segments is proportional to the quality of the initial surface mapping and the complexity of the objects (see Table 1). We evaluate the capabilities of our approach based on a qualitative evaluation by visual inspection and a quantitative evaluation based on geodesic distance. This method prevents flipped triangles, and, essentially, it preserves the shape, while satisfying the user constraints. Furthermore, it distributes the deformation error more uniformly across the mesh surface. As can be seen in Fig. 9, our distortion energy is lower than SLIM, is often lower than LIM, and is only slightly greater than LSCM. We believe that this optimization strategy is more suited to this type of problem than a joint optimization strategy. Fig. 12 compares LSCM, LIM, SLIM, and KP-Newton to our ILSCM (iterated 257 times and final λ was 31.25) for the example from Fig. 8. ILSCM distributed errors more uniformly over the whole deformed mesh, as compared to LIM, SLIM, and KP-Newton. The accompanying video shows how our iterative approach progressively conducts the deformation, in comparison to LSCM, LIM, SLIM, and KP-Newton. The meshes we deform in the video are the same as some of the examples from the LIM paper. For a fair comparison, we perform SLIM, LIM, and KP-Newton, all using the LSCM energy. For LIM, we apply a 1E−12 barrier weight, which is sufficient to prevent flips. We experimented with barrier weights of LIM ranging from 1E−4 to 1E−20. 
Barrier weights smaller than 1E−12 had an imperceptible impact, while those equal to or lower than 1E−20 did not converge. For each deformation energy, we experimented with two different initial states: weights from the 3D triangles and weights from the flattened B i. The distortion between B i before and after deformation was lowest when deforming using the weights from the flattened B i. We thus used the weights from the flattened B i. Visual inspection of results is a common form of validation for mapping problems. We use a visualization method based on texture transfer. We copy texture coordinates from one mesh to the other using the mapping, setting the uv coordinates of a vertex to the barycentric interpolation of the uv coordinates of the other mesh. For this visualization, we used two different types of textures. The first type was a grid texture. Figs. 1, 20, and 13 qualitatively show that we obtain considerably better mappings using our editing approach. An important assumption of our approach is that we can edit the mapping locally. This implies that it is important to have a smooth transition at the boundary of the regions where we conduct local editing. Fig. 14 shows a typical example of the smoothness of our edited mapping across the boundary of the segmented region. The accompanying video also compares the transition of the mapping across the boundary by transferring texture from mesh A to mesh B using both initial mapping and edited mapping for the test cases of Fig. 13, Fig. 20 (top row), and Fig. 21. For the specific case of faces, we use realistic facial textures, making it easier to highlight important semantic facial features. These features are derived from three important considerations: modeling, texturing, and animation. A realistic facial texture is often enough to highlight modeling and texturing issues, such as problems around the nose and lips (Figs. 15 and 23). These examples show cases that are ideal for our approach: the initial mappings are globally good, with few local misalignments. Instead of solving for the mapping globally, our approach provides a local solution for these specific semantic regions. For facial animation, other features need to be identified in the textures. Accordingly, some of our texture visualizations use a set of curves that are positioned relative to the areas that deform during animation, based on the facial anatomy. Fig. 15 illustrates the improvement in the correspondence of these animation-related features as compared against the initial surface mapping. Our approach assumes that the segments can be flattened to 2D without any flipped triangles. While the hypothesis is essential to get injective mappings, our approach is still robust to cases where the flattened segments would contain flipped faces. Meshes used in the industry often exhibit small discrepancies such as cracks, holes, and handles. Fig. 16 presents such a case where one of the meshes is of a different genus (contains two handles in the ear region). Although it is not possible to get injective mappings when dealing with meshes of different genus, our approach behaves robustly: it can improve the mapping in the region with the differing genus and it does not degrade the mapping in the edited region nor in its vicinity. Furthermore, even if it is not possible to achieve injective mappings in such cases, our edited mappings have reasonable properties: the mapping from the lower-genus mesh (A → B) is injective, and the mapping from the higher-genus mesh (B → A) is surjective.
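As a concrete illustration of the texture-transfer visualization described above, the following sketch performs the barycentric interpolation of uv coordinates; the mapping and mesh containers are simplified, assumed data structures rather than the authors' actual ones:

```python
def transfer_uvs(mapping, src_uv, src_faces, dst_vertices):
    """Sketch of the texture-transfer visualization: each destination vertex gets
    the barycentric interpolation of the uv coordinates of the source triangle it
    maps to. mapping[v] is assumed to give (source_face_index, (b0, b1, b2))."""
    dst_uv = {}
    for v in dst_vertices:
        face, (b0, b1, b2) = mapping[v]            # barycentric coords on the source
        i, j, k = src_faces[face]
        dst_uv[v] = (b0 * src_uv[i][0] + b1 * src_uv[j][0] + b2 * src_uv[k][0],
                     b0 * src_uv[i][1] + b1 * src_uv[j][1] + b2 * src_uv[k][1])
    return dst_uv
```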
While the qualitative evaluations of Sec. 4.1 demonstrate that our approach results in clear improvements, we also quantitatively measure how our approach improves the mappings. We first use the same process as in the paper of Kim et al. in order to measure the accuracy of the surface mapping. Their method transfers vertex positions to the other mesh using the mapping under evaluation and a ground truth mapping. It then computes the geodesic distances from the corresponding positions. Fig. 17 shows the error of the initial mapping and of the mapping after our editing approach. The comparative evaluation shown here relies on the ground truth mapping from the SCAPE (Fig. 17 (a)) and TOSCA (Fig. 17 (b)) data sets. We can see that applying our approach improves the mapping in the related regions without causing a degradation of the overall mapping. Another way to measure the quality of a mapping is to morph a mesh into the shape of the other using the mapping. Then, we evaluate the mapping by computing L 2 and L ∞ between the mesh and the morphed mesh to estimate the distortion which occurs in the mapping-based morphing process. Fig. 18 shows the morphing of mesh A into mesh B using both the initial and new mappings. With our updated mapping (Fig. 18 (d)), the vertices of the head A are pulled back to the correct place. This has the advantage of mapping the right density of vertices where needed, which is very important for morphing and in any transfer related to animation attributes (e.g., bones, vertex weights, and blend shapes). Table 2 illustrates an evaluation of the quality of the edited mapping in comparison to the initial mapping between 3D shapes. It shows that our edited mapping is as good as or better than the initial mapping when considering the distortion of the morphed mesh. We can see that there is a single case where this measurement of distortion is slightly higher after the map is edited. Even in this case, while the distortion is slightly higher, the edited mapping is clearly superior, as can be seen in Fig. 13. We performed a qualitative comparison of the mapping editing versus the methods of Ezuz and Ben-Chen. We also did comparisons using LIM, SLIM, KP-Newton, and ILSCM to conduct the mapping editing. We finally compared local editing to global editing using the method of Panozzo et al. and the joint planar method. For the comparison to the method of Ezuz and Ben-Chen, we established an initial mapping using a state-of-the-art functional mapping method. Note that we use the raw functional map, without the high-dimensional iterative closest point post-process refinement. Fig. 19 compares the mappings improved using our approach and the method of Ezuz and Ben-Chen (which improves the mapping without any landmark). Note how the added control of the landmarks provides a significantly improved mapping, exactly where intended. Fig. 21 presents results when LIM, SLIM, KP-Newton, and ILSCM are used to conduct the mapping editing. The comparison through texture transfer visualization shows that ILSCM is superior in adjusting the mapping as compared to LIM, SLIM, and KP-Newton. The accompanying video also compares LIM, SLIM, KP-Newton, and ILSCM in editing the mapping for the test case of Fig. 13. Adjusting the mapping globally requires having the initial constraints, the initial parameters of the method, and the method itself, which is constraining.
In addition, some mapping methods, such as that of Nogneng and Ovsjanikov, do not let the user guide the process with landmarks, while others, such as OBTE, only support a fixed number of landmarks (three or four landmarks for OBTE ), which will be insufficient in many cases. Furthermore, we believe that it is advantageous to ensure that changes occur locally, avoiding unexpected changes elsewhere in the mapping. Fig. 20c (top row) shows that solving for the mapping globally is sometimes as effective as solving it locally. Conversely, Fig. 20c (bottom row) shows that improving the mapping globally introduced artifacts on the fish head as compared to our local refinement (Fig. 20d bottom row), which is exempt of such artifacts away from the edited region. Fig. 22 compares the mappings improved using our approach as compared to solving globally using the WA method of Panozzo et al.. We established an initial mapping using the WA method. Afterwards, with the WA method, we added two additional landmarks to improve the initial mapping. For our method, we only consider the two new landmarks in improving the mapping. It can be seen in Fig. 22 that editing the mapping locally was beneficial for this test case as well. Several applications rely on a mapping between surfaces: texture transfer, animation setup transfer, and deformation transfer. We use the methods of Sumner et al. and Avril et al. to illustrate how the proposed approach can significantly improve the of techniques relying on a mapping. Fig. 23 shows a facial transfer before and after editing. Results demonstrate several issues and unpleasant deformations for fine features, such as strange deformations on the corners of the mouth. With the corrected mapping, these problems disappear. Fig. 24 shows a skeleton transfer before and after the mapping is edited. Results demonstrate that the joint that was erroneously positioned outside of the thumb moves to the right place when improving the surface mapping locally in the thumb region instead of improving the mapping globally over the mesh. Our approach works even for surfaces with boundaries inside the segments. Such boundaries are commonly encountered with the ears, eyes, nostrils, and mouths of characters. While we constrain the segment boundaries to prevent them from moving, an initial mesh boundary lying inside a segment will be free to move. Leaving these inner boundaries completely free has a negative impact on the deformation. Fig. 25 shows the deformation of the mouth without (c) and with (d) inner boundary fillings. Note here the improvement of the mouth deformation when filling the inner boundary. Our approach carves a new path in between the more classical shape-preserving methods, which often lose local injectivity, and the more current methods, which formulate the injectivity constraint as part of the optimization. These latter approaches typically do not have a bound on the shape-preserving error. In our approach, we are minimizing only the shape-preserving term (i.e., LSCM energy) and iteratively improving the user constraints while maintaining a locally injective map in each iteration. We achieve this by carefully controlling the λ parameter in Eq. 1. At one extreme, if λ is very large (i.e., infinity), the formulation is equivalent to the LSCM formulation. If λ is very small, it takes many iterations for the user constraints to be satisfied, or in some cases, the user constraints may ultimately not be satisfied. Our iterative scheme relies on two important observations. 
If λ is 0, the solution is the same as the initial configuration. Therefore, if we start in a locally injective configuration, the final result will be a locally injective configuration. If the initial configuration is locally injective, there always exists a λ (however small) that will result in a locally injective configuration, where the user constraints are closer to the target. This scheme will converge to a locally injective configuration. Consequently, we iteratively repeat the optimization to fight against flipped faces, but convergence cannot be guaranteed. It is always possible to design a landmark configuration in which the constraints cannot be met without flipped faces. This is true for the other deformation methods as well. Appendix B demonstrates different failure cases using different deformation methods. In our experiments, the constraints are satisfied (up to numerical precision), even for extreme deformations. In our results, we improved mappings which were initially computed from a variety of methods. Even if these initial mappings minimize different deformation energies, the fact that we rely on the LSCM conformal energy to edit them did not prevent our approach from improving the mappings. One must keep in mind that the goal of the editing is not to strictly minimize a deformation energy, but to align important semantic features of the objects and maintain injectivity. We analyzed our results to verify the degree to which the deformation deteriorates the shape of the triangles. We checked 13 of the results found in this paper, and we considered that a detrimental deformation is one in which the angle becomes more than 20 times narrower after deformation. Eleven cases had no such triangles, while the two other cases had two and three, respectively. The worst triangle in our 13 test cases was 24 times narrower than before deformation. Any deformation method is prone to result in thin triangles, so we compared our approach to LIM, SLIM, and KP-Newton for six examples. When looking at the worst triangle found in the deformed meshes, ILSCM performed best for four of the test cases, while KP-Newton performed best for two of the test cases. SLIM and LIM were systematically in third and fourth place behind ILSCM and KP-Newton. Furthermore, our results were better than LIM, SLIM, and KP-Newton in terms of shape preservation and final triangulation, as can be seen in Fig. 12 and in the video. We ran our experiments on a 3.40 GHz Intel Core-i7-4770 CPU with 12 GB of memory. The presented approach was implemented with MATLAB, taking advantage of its sparse matrices and linear solvers. Table 1 shows computation times for the segmentation and the deformation (including mapping extraction) phases. Since our deformation phase is an iterative method, the time to edit a mapping depends on the size of the mismatching regions and the number of iterations. We have presented a novel approach for improving surface mappings locally. Our approach is based on a low-distortion region-growing segmentation followed by an independent planar parameterization of each segment. The mapping is then optimized based on an alignment of the user-prescribed landmarks in the parameterization space of each segment. Our joint planar parameterization deformation for the segments is robust, and results in low distortion. Our new iterative LSCM approach can be reused in several contexts where a deformation with low distortion is required.
From a practical perspective, our approach has several advantages. It can be used to improve the mapping resulting from any surface mapping method. It also provides a great deal of control, allowing the user to restrict editing to a specific region and to add as few or as many landmarks as necessary to achieve a desired result. Our local editing leads to interesting questions which open many avenues for future work. One such prospective area is higher-level landmarks such as lines. This will lead to challenges in terms of easing the interactive placement of these lines on both meshes, but will provide a better set of constraints for the deformation. Another avenue would be to extend the scope to editing deformation transfer. This will combine deformation with editing and enable the user to control animation retargeting. To ensure that there is always a single solution, even if λ is arbitrarily small, we add a new term E B to Eq. 1, where v old j denotes the position of vertex v j at the previous iteration. The energy E B of Eq. 1 pulls the vertices of the boundary to where they correspond given the mapping; the new term instead pulls the vertices on the boundary of B i to their position at the previous iteration. Eq. 3b is weighted by a small constant ξ = 0.001 such that in practice the vertices will converge to map(v j). The previous position v old j is initialized with the position on the boundary of the ABF of B i. Our deformation method proceeds iteratively by finding a sequence of 2D embeddings V i of a given patch. We show that if the initial embedding of the mesh V 0 has no fold-overs, then the resulting embedding at every iteration V i also has no fold-overs. We prove this by induction. The base case for i = 0 is given by the hypothesis, and thus, we are showing that if V i has no fold-overs, our procedure will yield a configuration V i+1 that also has no fold-overs. At every iteration, the new set of vertex positions V i+1 is obtained by minimizing Eq. 1, where V i is the embedding in the current iteration, V i+1 is the new embedding we are computing, and λ̄ > 0 is a parameter of the algorithm, constant w.r.t. this minimization. We select λ̄ as follows: we create a monotonically decreasing positive sequence λ j > 0 such that lim j→∞ (λ j) = 0; we solve the optimization problem for the λ j in the sequence and stop at the first element in the sequence, λ̄ = λ k, that yields a fold-over free configuration, and we now show that such a λ k always exists. Let B(x) ∈ R n×n, x ∈ R, such that B i j (x) is continuous in x ∀i, j. Lemma.1. det(B(x)) is a continuous function in x. Proof: We prove by induction on n. If n = 1, B(x) is a continuous real function, so det(B(x)) = B 1 1 (x) is also continuous. We assume that the statement is true for n − 1 and we prove it for n. We write det(B(x)) using the Laplace formula: det(B(x)) = ∑ j (−1) i+j B i j (x) M i j, where M i j is the minor of the entry (i, j) defined as the determinant of the sub-matrix obtained by removing row i and column j from B.
As each element of this matrix is also continuous in x and this reduced matrix is n − 1 × n − 1, it follows from the inductive hypothesis that M i j is also continuous in x. As det(B(x)) is obtained by using addition and multiplication of continuous functions, it follows that det(B(x)) is continuous in x. Proof: If det(B(x)) ≠ 0 ∀x, then B(x) is invertible ∀x, and the inverse of a matrix has the following analytic expression: B −1 (x) = (1/det(B(x))) · C T (x), where C is the matrix of co-factors: C i j = (−1) i+j · M i j and M i j is the minor of the entry (i, j). M i j (x) is continuous in x from Lemma.1. It follows trivially that C i j is continuous in x. Since det(B(x)) ≠ 0 ∀x, it follows from Corollary 1 that 1/det(B(x)) is continuous in x. Since B −1 (x) is obtained by multiplying a scalar function continuous in x by a matrix whose entries are all continuous in x, it follows that B −1 i j (x) is continuous in x ∀i, j. V i+1 (λ) is the minimizer of a quadratic energy function, and therefore, it has the standard least squares analytical solution V i+1 (λ) = (A t (λ) A(λ)) −1 A t (λ) b(λ), where the matrix A and vector b are computed from Eq. 1 in a standard way for a least squares solution. The matrix A(λ) is the vertical stacking of λ·A 1, λ·A 2, ξ·A 3, and A 4, where: 1) A 1 corresponds to E L and is a k × n matrix that encodes the landmark constraints of the patch. These constraints are weighted by λ (Eq. 2a). 2) A 2 corresponds to E B and is a b × n matrix that encodes the boundary constraints of the patch. These constraints are weighted by λ (Eq. 2b). 3) A 3 corresponds to the new term E B, and is a 2×n matrix that constrains two boundary vertices to their positions in the previous iteration. These constraints are weighted by a small constant ξ independent of λ (Eq. 2c). 4) A 4 corresponds to E D and is the m × n matrix from the original LSCM formulation, where n is the number of vertices in the patch and m > n. Lemma. det(A t (λ) · A(λ)) ≠ 0 ∀λ. Proof: The LSCM paper shows that if we constrain exactly 2 vertices, we obtain a unique solution, which means that the matrix A 3 4, the stacking of A 3 and A 4, has rank n. Since the matrix A 3 4 does not depend on λ, it follows that rank(A 3 4) = n ∀λ. Since the rank of A 3 4 cannot be larger than n, and the rank of the resulting matrix does not decrease by adding rows to the matrix, it follows that when stacking A 1 and A 2 to A 3 4 to form the final matrix A, rank(A(λ)) = n ∀λ. Since the rank of the Gram matrix is the same as that of the matrix, it follows that rank(A t (λ) · A(λ)) = n ∀λ. Since A t (λ) · A(λ) is an n × n matrix, this means that it has full rank, and therefore det(A t (λ) · A(λ)) ≠ 0 ∀λ. Lemma.5. V i+1 (λ) is continuous in λ. Lemma.6. If λ = 0, then V i+1 has no fold-overs. Proof: λ = 0 reduces the linear system to only A 3 and A 4. Theorem.7. ∃k > 0 s.t. V i+1 (λ k) has no fold-overs. Proof: From Lemma.6, if λ = 0, then V i+1 has no fold-overs. Since V i+1 is continuous in λ (Lemma .5), it follows that for all vertex positions ∃ λ̄ > 0 s.t. ∀λ, 0 < λ < λ̄, V i+1 (λ) has no fold-overs. Since the sequence λ j is monotonically decreasing and lim j→∞ (λ j) = 0, it follows that ∃k s.t. 0 < λ k < λ̄. It follows that V i+1 (λ k) has no fold-overs. By proving Theorem.7, we show that at every iteration, our embedding V i+1 has no fold-overs and thus yields an injective map. Our deformation method guarantees progress toward meeting the landmark constraints, free of flipped faces, but it cannot guarantee that the user constraints will be satisfied. In fact, there are cases where it is impossible to meet these constraints, such as the example in Fig. 26. There are also "hard" cases (Fig.
27) where, while it might be possible to find a deformation that meets the constraints, deformation methods, such as LIM, SLIM, KP-Newton and our approach, are not able to find it.
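To make the stacked least-squares structure of Appendix A concrete, here is a minimal numerical sketch. It assumes the blocks A 1 to A 4 and the corresponding right-hand sides have already been assembled from Eqs. 1 and 2; it illustrates the normal-equations solve and is not the paper's implementation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_iteration(A1, A2, A3, A4, b1, b2, b3, b4, lam, xi=1e-3):
    """One ILSCM solve (sketch): A(lam) stacks lam*A1 (landmarks), lam*A2 (boundary),
    xi*A3 (two anchored vertices) and A4 (LSCM energy); the solution is the usual
    least-squares minimizer V_{i+1}(lam) = (A^T A)^{-1} A^T b."""
    A = sp.vstack([lam * A1, lam * A2, xi * A3, A4]).tocsr()
    b = np.concatenate([lam * b1, lam * b2, xi * b3, b4])
    return spsolve((A.T @ A).tocsc(), A.T @ b)
```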
We propose a novel approach to improve a given cross-surface mapping through local refinement with a new iterative method to deform the mesh in order to meet user constraints.
1,765
scitldr
Understanding the groundbreaking performance of Deep Neural Networks is one of the greatest challenges to the scientific community today. In this work, we introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities. We do so by studying the Information Plane: the plane of the mutual information values that each hidden layer preserves on the input variable and on the desired label. Specifically, we show that the training of the network is characterized by a rapid increase in the mutual information (MI) between the layers and the target label, followed by a longer decrease in the MI between the layers and the input variable. Further, we explicitly show that these two fundamental information-theoretic quantities correspond to the generalization error of the network, as a result of introducing a new generalization bound that is exponential in the representation compression. The analysis focuses on typical patterns of large-scale problems. For this purpose, we introduce a novel analytic bound on the mutual information between consecutive layers in the network. An important consequence of our analysis is a super-linear boost in training time with the number of non-degenerate hidden layers, demonstrating the computational benefit of the hidden layers. Deep Neural Networks (DNNs) heralded a new era in predictive modeling and machine learning. Their ability to learn and generalize has set a new bar on performance, compared to state-of-the-art methods. This improvement is evident across almost every application domain, and especially in areas that involve complicated dependencies between the input variable and the target label BID19. However, despite their great empirical success, there is still no comprehensive understanding of their optimization process and its relationship to their (remarkable) generalization abilities. This work examines DNNs from an information-theoretic viewpoint. For this purpose we utilize the Information Bottleneck principle BID37. The Information Bottleneck (IB) is a computational framework for extracting the most compact, yet informative, representation of the input variable (X), with respect to a target label variable (Y). The IB bound defines the optimal trade-off between representation complexity and its predictive power. Specifically, it is achieved by minimizing the mutual information (MI) between the representation and the input, subject to the level of MI between the representation and the target label. Recent results BID35 demonstrated that the layers of DNNs tend to converge to the IB optimal bound. The results pointed to a distinction between two phases of the training process. The first phase is characterized by an increase in the MI with the label (i.e. fitting the training data), whereas in the second and most important phase, the training error was slowly reduced with a decrease in mutual information between the layers and the input (i.e. representation compression). These two phases appear to correspond to fast convergence to a flat minimum (drift) followed by a random walk, or diffusion, in the vicinity of the training error's flat minimum, as reported in other studies (e.g. BID39). These observations raised several interesting questions: (a) which properties of the SGD optimization cause these two training phases? (b) how can the diffusion phase improve generalization performance? (c) can the representation compression explain the convergence of the layers to the optimal IB bound?
(d) can this diffusion phase explain the benefit of many hidden layers? In this work we attempt to answer these questions. Specifically, we draw important connections between recent results inspired by statistical mechanics and information-theoretic principles. We show that the layers of a DNN indeed follow the behavior described by BID35. We claim that the reason may be found in the Stochastic Gradient Descent (SGD) optimization mechanism. We show that the first phase of the SGD is characterized by a rapid decrease in the training error, which corresponds to an increase in the MI with the labels. Then, the SGD behaves like non-homogeneous Brownian motion in the weights space, in the proximity of a flat error minimum. This non-homogeneous diffusion corresponds to a decrease in MI between the layers and the input variable, in "directions" that are irrelevant to the target label. One of the main challenges in applying information theoretic measures to real-world data is a reasonable estimation of high dimensional joint distributions. This problem has been extensively studied over the years (e.g. BID28), and has led to the conclusion that there is no "efficient" solution when the dimension of the problem is large. Recently, a number of studies have focused on calculating the MI in DNNs using Statistical Mechanics. These methods have generated promising results in a variety of special cases BID8, which support many of the observations made by BID35. In this work we provide an analytic bound on the MI between consecutive layers, which is valid for any non-linearity of the units, and directly demonstrates the compression of the representation during the diffusion phase. Specifically, we derive a Gaussian bound that only depends on the linear part of the layers. This bound gives a super-linear dependence of the convergence time of the layers, which in turn enables us to prove the super-linear computational benefit of the hidden layers. Further, the Gaussian bound allows us to study mutual information values in DNNs on real-world data without estimating them directly. Let X ∈ X and Y ∈ Y be a pair of random variables of the input patterns and their target label (respectively). Throughout this work, we consider the practical setting where X and Y are continuous random variables that are represented in a finite precision machine. This means that both X and Y are practically binned (quantized) into a finite number of discrete values. Alternatively, X, Y may be considered as continuous random variables that are measured in the presence of small independent additive (Gaussian) noise, corresponding to their numerical precision. We use these two interpretations interchangeably, at the limit of infinite precision, where the limit is applied at the final stage of our analysis. We denote the joint probability of X and Y as p(x, y), whereas their corresponding MI is defined as I(X; Y) = ∑ x,y p(x, y) log [p(x, y)/(p(x)p(y))]. We use the standard notation D[p||q] for the Kullback-Leibler (KL) divergence between the probability distributions p and q. Let f W K (x) denote a DNN, with K hidden layers, where each layer consists of d k neurons, each with some activation function σ k (x), for k = 1,..., K. We denote the values of the k th layer by the random vector T k, so that the DNN mapping between two consecutive layers is T k = σ k (W k T k−1). Note that we consider both the weights, W k, and the layer representations, T k, as stochastic entities, because they depend on the stochastic training rule of the network and the random input pattern (as described in Section 2.1).
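To make the finite-precision (binned) interpretation of the mutual information concrete, the following sketch estimates I(X; T) from a joint histogram of two scalar variables. It is only an illustration of the definition; such direct estimators are exactly what becomes unreliable in high dimension, which motivates the analytic bound developed in this work:

```python
import numpy as np

def binned_mutual_information(x, t, n_bins=30):
    """Quantize two 1-D variables into bins and compute I(X;T) (in bits)
    from the joint histogram, following the finite-precision view above."""
    joint, _, _ = np.histogram2d(x, t, bins=n_bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over bins of x
    pt = pxy.sum(axis=0, keepdims=True)   # marginal over bins of t
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ pt)[nz])))
```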
However, when the network weights are given, the weights are fixed realizations of the random training process (i.e. they are "quenched"). Note that given the weights, the layers form a Markov chain of successive internal representations of the input variable X: Y → X → T 1 →... → T K, and their MI values obey a chain of Data Processing Inequalities (DPI), as discussed by BID35.We denote the set of all K layers weight matrices as W K = {W 1, . . ., W K}. Let the training sample, S n = {(x 1, y 1),..., (x n, y n)} be a collection of n independent samples from p(x, y). Let W K (x i, y i) be a (differentiable) loss function that measures the discrepancy between a prediction of the network f W K (x i) and the corresponding true target value y i, for a given set of weights W K. Then, the empirical error is defined as DISPLAYFORM2 The corresponding error gradients (with respect to the weights) are denoted as DISPLAYFORM3 2.1 TRAINING THE NETWORK -THE SGD ALGORITHM Training a DNN corresponds to the process of setting the values of weights W K from a given set of samples S n. This is typically done by minimizing the empirical error, which approximates the expected loss. The SGD algorithm is a common optimization method for this purpose BID30. Let S (m) be a random set of m samples drawn (uniformly, with replacement) from S n, where m < n. We refer to S (m) as a mini-batch of S n. Define the corresponding empirical error and gradient of the mini-batch as DISPLAYFORM0 respectively. Then, the SGD algorithm is defined by the update rule: DISPLAYFORM1, where W K (l) are the weights after l iterations of the SGD algorithm and η ∈ R + is the learning rate. The SGD algorithm plays a key role in the astonishing performance of DNNs. As a , it has been extensively studied in recent years, especially in the context of flexibility and generalization BID4. Here, we examine the SGD as a stochastic process, that can be decomposed into two separate phases. This idea has been studied in several works BID22 BID16 BID9. Murata argued that stochastic iterative procedures are initiated at some starting state and then move through a fast transient phase towards a stationary phase, where the distribution of the weights becomes time-independent. However, this may not be the case when the SGD induces non-isotropic state dependent noise, as argued, for example, by BID2.In contrast, BID35 described the transient phase of the SGD as having two very distinct dynamic phases. The first is a drift phase, where the means of the error gradients in every layer are large compared to their batch-to-batch fluctuations. This behaviour is indicative of small variations in the gradient directions, or high-SNR gradients. In the second part of the transient phase, which they refer to as diffusion, the gradient means become significantly smaller than their batch-to-batch fluctuations, or low-SNR gradients. The transition between the two phases occurs when the training error saturates and weights growth is dominated by the gradient batch-to-batch fluctuations. Typically, most SGD updates are expended in the diffusion phase before reaching Murata's stationary phase. In this work we rigorously argue that this diffusion phase causes the representation compression; the observed reduction in I(T k ; X), for most hidden layers. It is well known that the discrete time SGD (2.1) can be considered as an approximation of a continuous time stochastic gradient flow if the discrete-time iteration parameter l is replaced by a continuous parameter τ. 
BID20 recently showed that when the mini-batch gradients are unbiased with bounded variance, the discrete-time SGD is an approximation of a continuous-time Langevin dynamics, dW K (τ) = −∇L(W K (τ)) dτ + √(2 β −1 C W K (τ)) dB(τ), where C W K (τ) is the sample covariance matrix of the weights, B(τ) is a standard Brownian motion (Wiener process), and β is the Langevin temperature constant. The first term is called the gradient flow or drift component, and the second term corresponds to random diffusion. Although these stochastic dynamics hold for the entire SGD training process, the first term dominates the process during the high SNR gradient phase, while the second term becomes dominant when the gradients are small, due to saturation of the training error in the low SNR gradient phase. Hence, these two SGD phases are referred to as drift and diffusion. The mean L 2 displacement (MSD) measures the Euclidean distance from a reference position over time, which is used to characterize a diffusion process. Normal diffusion processes are known to exhibit a power-law MSD in time, MSD(τ) ∼ γ τ^α, where γ is related to the diffusion coefficient, and 0 < α ≤ 0.5 is the diffusion exponent. For a standard flat space diffusion, the MSD increases as a square root of time (α = 0.5). BID14 showed (empirically) that the weights' MSD, in DNNs trained with SGD, indeed behaves (asymptotically) like a normal diffusion, where the diffusion coefficient γ depends on the batch size and learning rate. In contrast, BID13 showed that the weights' MSD demonstrates a much slower logarithmic increase. This type of dynamics is also called "ultra-slow" diffusion. Following BID36 and BID35, we study the layer representation dynamics in the two-dimensional (I(X; T k), I(T k ; Y)) plane. Specifically, for any input and target variables, X, Y, let T = T(X) denote a representation, or an encoding (not necessarily deterministic), of X. Clearly, T is fully characterized by its encoder, the conditional distribution p(t|x). Similarly, let p(y|t) denote any (possibly stochastic) decoder of Y from T. Given a joint probability function p(x, y), the Information Plane is defined as the set of all possible pairs I(X; T) and I(T ; Y) for any possible representation, p(T |X). It is evident that not all points on the plane are feasible (achievable), as there is clearly a tradeoff between these quantities; the more we compress X (reduce I(X; T)), the less information can be maintained about the target, I(T ; Y). Our analysis is based on the fundamental role of these two MI quantities. We argue that for large scale (high dimensional X) learning, for almost all (typical) input patterns, with mild assumptions (ergodic Markovian input patterns): (i) the MI values concentrate with the input dimension; (ii) the minimal sample complexity for a given generalization gap is controlled by I(X; T); and (iii) the accuracy (the generalization error) is governed by I(T ; Y), with the Bayes optimal decoder representation. Here, we argue that the sample-size versus accuracy trade-off of all large scale representation learning is characterized by these two MI quantities. For DNNs, this amounts to a dramatic reduction in the complexity of the analysis of the problem. We discuss these ideas in the following sections and prove the connection between the input representation compression I(T ; X), the generalization gap (the difference between training and generalization errors), and the minimal sample complexity (Theorem 1 below).
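A minimal sketch of how the diffusion exponent α could be estimated from saved weight trajectories, assuming the power-law form MSD(τ) ∼ γ τ^α discussed above; the snapshot array is assumed to be collected by the user during training and is not part of the paper's setup:

```python
import numpy as np

def diffusion_exponent(weight_snapshots):
    """Estimate alpha and gamma by a log-log fit of the mean L2 displacement of
    the weights, given a (num_steps, num_weights) array of flattened snapshots."""
    w0 = weight_snapshots[0]
    msd = np.linalg.norm(weight_snapshots[1:] - w0, axis=1)        # L2 displacement
    taus = np.arange(1, len(weight_snapshots))
    alpha, log_gamma = np.polyfit(np.log(taus), np.log(msd), 1)    # slope = alpha
    return alpha, np.exp(log_gamma)
```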
Optimizing mutual information quantities is by no means new in either supervised and unsupervised learning BID7 BID21 BID26. This is not surprising, as it can be shown that I(T ; Y) corresponds to the irreducible error when minimizing the logarithmic loss BID25 BID10. Here, we emphasize that I(T ; Y), for the optimal decoder of the representation T, governs all reasonable generalization errors (under the mild assumption that label y is not completely deterministic; p(y|x) is in the interior of the simplex, ∆(Y), for all typical x ∈ X). First, note that with the Markov chain DISPLAYFORM0. By using the Pinsker inequality BID6 ) the variation distance between the optimal and the representation decoders can be bound by their KL divergence, DISPLAYFORM1 Hence, by maximizing I(T ; Y) we minimize the expected variation risk between the representation decoder p(y|t) and p(y|x). For more similar bounds on the error measures see BID24. The Minimum Description Length (MDL) principle BID29 suggests that the best representation for a given set of data is the one that leads to the minimal code-length needed to represent of the data. This idea has inspired the use of I(X; T) as a regularization term in many learning problems (e.g. BID5 ; BID27). Here, we argue that I(X; T) plays a much more fundamental role; we show that for large scale (high dimensional X) learning and for typical input patterns, I(X; T) controls the sample complexity of the problem, given a generalization error gap. Theorem 1 (Input Compression bound). Let X be a d-dimensional random variable that obeys an ergodic Markov random field probability distribution, asymptotically in d. Let T T (X) be a representation of X and denote by T m = {(t 1, y 1),..., (t m, y m)} an m-sample vector of T and Y, generated with m independent samples of x i, with p(y|x i) and p(t|x i). Assume that p(x, y) is bounded away from 0 and 1 (strictly inside the simplex interior). Then, for large enough d, with probability 1 − δ, the typical expected squared generalization gap satisfies DISPLAYFORM0 where the typicality follows the standard Asympthotic Equipartition Property (AEP) BID6.A proof of this Theorem is given in Appendix A. This Theorem is also related to the bound proved by BID33, with the typical representation cardinality, |T (X)| ≈ 2 I(T ;X). The ergodic Markovian assumption is common in many large scale learning problems. It means that p(DISPLAYFORM1 The consequences of this input-compression bound are quite striking: the generalization error decreases exponentially with I(X; T), once I(T ; X) becomes smaller than log 2m -the query samplecomplexity. Moreover, it means that M bits of representation compression, beyond log 2m, are equivalent to a factor of 2 M training examples. The tightest bound on the generalization bound is obtained for the most compressed representation, or the last hidden layer of the DNN. The inputcompression bound can yield a tighter and more realistic sample complexity than any of the worstcase PAC bounds with any reasonable estimate of the DNN class dimensionality, as typically the final hidden layers are compressed to a few bits. Nevertheless, two important caveats are in order. First, the layer representation in Deep Learning are learned from the training data; hence, the encoder, the partition of the typical patterns X, and the effective "hypothesis class", depend on the training data. This can lead to considerable over-fitting. 
Training with SGD avoids this potential over-fitting because of the way the diffusion phase works. Second, for low I(T ; Y) there are exponentially (in d) many random encoders (or soft partitions of X) with the same value of I(T ; X). This seems to suggest that there is a missing exponential factor in our estimate of the hypothesis class cardinality. However, note that the vast majority (almost all) of these possible encoders are never encountered during typical SGD optimization. In other words, they act like a "dark hypothesis space" which is never observed and does not affect the generalization bound. Moreover, as I(T ; Y) increases, the number of such random encoders rapidly collapses all the way to O when I(T ; Y) approaches the optimal IB limit, as we show next. As presented above, we are interested in the boundary of the achievable region in the information plane, or in encoder-decoder pairs that minimize the sample complexity (minimize I(X; T)) and generalize well (maximize I(T ; Y)).These optimal encoder-decoder pairs are given precisely by the Information Bottleneck framework BID37, which is formulated by the following optimization problem: min p(t|x) I (X; T) − βI (T ; Y), over all possible encoders-decoders pairs that satisfy the Markov condition Y − X − T. Here β is a positive Lagrange multiplier associated with the decoder information on I(T ; Y), which also determines the complexity of the representation. The Information Bottleneck limit defines the set of optimal encoder-decoder pairs, for the joint distribution p(x, y). Furthermore, it characterizes the achievable region in the Information Plane, similar to Shannon's Rate Distortion Theory (RDT) BID6. By our previous analysis it also determines the optimal tradeoff between sample complexity and generalization error. The IB can only be solved analytically in very special cases (e.g., jointly Gaussian X, Y BID3). In general, a (locally optimal) solution can be found by iterating the self-consistent equations, similar to the Arimoto-Blahut algorithm in RDT BID37. For general distributions, no efficient algorithm for solving the IB is known, though there are several approximation schemes BID1 BID23. The self-consistent equations are exactly satisfied along the IB limit, aka the Information Curve. By applying the DPI to the Markov chain of the DNN layers we obtain the following chains: I(X; T 1) ≥ I(X; T 2) ≥ · · · ≥ I(X; T k) ≥ I(X;Ŷ) and I(X; Y) ≥ I(T 1 ; Y) ≥ · · · ≥ I(T k ; Y) ≥ I(Ŷ ; Y) whereŶ is the output of the network. The pairs (I(X; T k), I(T k, Y)), for each SGD update, form a unique concentrated Information Path for each layer of a DNN, as demonstrated by BID35.For any fixed realization of the weights, the network is, in principle, a deterministic map. This does not imply that information is not lost between the layers; the inherent finite precision of the layers, with possible saturation of the nonlinear activation functions σ k, can in non-invariable mapping between the layers. Moreover, we argue below that for large networks this mapping becomes effectively stochastic due to the diffusion phase of the SGD.On the other hand, the Information Plane layer paths are invariant to invertible transformations of the representations T k. Thus the same paths are shared by very different weights and architectures, and possibly different encoder-decoder pairs. This freedom is drastically reduced when the target information, I(T k, Y), increases and the layers approach the IB limit. 
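For small discrete problems, the IB self-consistent equations can be iterated directly, in the spirit of the Arimoto-Blahut-style procedure mentioned above. The sketch below is a minimal, generic implementation for a given joint table p(x, y) and trade-off parameter β; it is not the procedure used for the DNN experiments in this paper (those only estimate the information quantities), and all variable names are illustrative.

```python
import numpy as np

def ib_iterate(p_xy, beta, n_t, iters=300, seed=0):
    """Iterate the IB self-consistent equations for a discrete joint p(x, y):
       p(t|x) ∝ p(t) * exp(-beta * KL(p(y|x) || p(y|t)))
       p(t)   = sum_x p(x) p(t|x)
       p(y|t) = sum_x p(y|x) p(x|t)
    """
    eps = 1e-12
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / (p_x[:, None] + eps)
    rng = np.random.default_rng(seed)
    p_t_given_x = rng.random((p_xy.shape[0], n_t))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(iters):
        p_t = p_x @ p_t_given_x                                    # marginal over T
        p_xt = p_t_given_x * p_x[:, None]                          # joint p(x, t)
        p_x_given_t = p_xt / (p_t[None, :] + eps)
        p_y_given_t = p_x_given_t.T @ p_y_given_x                  # decoder p(y|t)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        kl = np.einsum('xy,xty->xt', p_y_given_x,
                       np.log((p_y_given_x[:, None, :] + eps) /
                              (p_y_given_t[None, :, :] + eps)))
        p_t_given_x = p_t[None, :] * np.exp(-beta * kl)            # encoder update
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True) + eps
    return p_t_given_x, p_y_given_t
```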
Minimizing the training error (ERM), together with standard uniform convergence arguments clearly increase I(T ; Y), but what in the SGD dynamics can lead to the observed representation compression which further improves generalization? Moreover, can the SGD dynamics push the layer representations all the way to the IB limit, as claimed in?We provide affirmative answers to both questions, using the properties of the drift and diffusion phases of the SGD dynamics. In this section we quantify the roles of the drift and diffusion SGD phases and their influence on the MI between consecutive layers. Specifically, we show that the drift phase corresponds to an increase in information with the target label I(T k ; Y), whereas the diffusion phase corresponds to representation compression, or reduction of the I(X; T k). The representation compression is accompanied by further improvement in the generalization. The general idea is as follows: the drift phase increases I(T k ; Y) as it reduces the cross-entropy empirical error. On the other hand, the diffusion phase in high dimensional weight space effectively adds an independent non-uniform random component to the weights, mostly in the directions that do not influence the loss -i.e, irrelevant directions. This in a reduction of the SNR of the irrelevant features of the patterns, which leads to a reduction in I(X; T k), or representation compression. We further argue that different layers filter out different irrelevant features, ing in their convergence to different locations on the Information Plane. First, we notice that the DPI implies that I(X; T k) ≤ I(T k−1 ; T k). We focus on the second term during the diffusion phase and prove an asymptotic upper bound for I(T k−1 ; T k), which reduces sub-linearly with the number of SGD updates. For clarity, we describe the case where T k ∈ R d k is a vector and T k+1 ∈ R is a scalar. The generalization to higher d k+1 is straightforward. We examine the network during the diffusion phase, after τ iterations of the SGD beyond the drift-diffusion transition. For each layer, k, the weights matrix, W k (τ) can be decomposed as follows, DISPLAYFORM0 The first term, W k, denotes the weights at the end of the drift phase (τ 0 = 0) and remains constant with increasing τ. As we assume that the weights converge to a (local, flat) optimum during the drift phase, W k is close to the weights at this local optimum. The second term, δW k (τ), is the accumulated Brownian motion in τ steps due to the batch-to-batch fluctuations of the gradients near the optimum. For large τ we know that δW DISPLAYFORM1 ) where τ 0 is the time that the diffusion phase began. Note that for any given τ, we can treat the weights as a fixed (quenched) realization, w k (τ), of the random Brownian process W k (τ). We can now model the mapping between the layers T k and T k+1 at that time as DISPLAYFORM2 where w * ∈ R d k is the SGD's empirical minimizer, and δw ∈ R d k is a realization from a Gaussian vector δw ∼ N (0, C δw), of the Brownian process discussed in Section 2.3. In addition, we consider Z ∼ N (0, σ 2 z) to be the small Gaussian measurement noise, or quantization, independent of δw k and T k. This standard additive noise allows us to treat all the random variables as continuous. For simplicity we first assume that the d k components of T k have zero mean and are asymptotically independent for d k → ∞.Proposition 2. 
Under mild technical conditions which are met with probability 1 in standard deep learning (see Appendix B), we have that DISPLAYFORM3 almost surely, where σ 2 T k is the variance of the components of T k.A proof for this CLT proposition is given in Appendix B.Proposition 2 shows that under standard conditions, w * T T k and δw T T k are asymptotically jointly Gaussian and independent, almost surely. We stress that the components of T k do not have to be identically distributed to satisfy this property; Proposition 2 may be adjusted for this case with different normalization factors. Similarly, the independence assumption on T k can easily be relaxed to Markovian ergodic, as we assume the input patterns. Finally, it is easy to verify that Proposition 2 can be extended to the general case where w *, δw ∈ R d k ×d k+1, under similar technical conditions. We can now bound the mutual information between T k+1 and the linear projection of the previous layer W * T k, during the diffusion phase, for sufficiently high dimensions d k, d k+1, under the conditions above. Note that in this case, behaves like an additive Gaussian channel where w * T T k is the signal and δw T T k + Z is an independent additive Gaussian noise (i.e., independent of signal and normally distributed). Hence, for sufficiently large d k and d k+1, we have DISPLAYFORM4 almost surely, where the first inequality is due to DPI for the Markov chain T k − w * T T k − T k+1. Finally, we apply an orthogonal eigenvalue decomposition to this multivariate Gaussian channel. Let δw T δw = QΛQ T where QQ T = I and Λ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues, λ i, of δw T δw. Then, we have that DISPLAYFORM5 The last inequality is due to the Hadamard inequality. Plugging into yields that for sufficiently large d k and d k+1, DISPLAYFORM6 As previously established, δw is a Brownian motion along the SGD iterations during the diffusion phase. This process is characterized by a low (and fixed) variance of the informative gradients (relevant dimensions), whereas the remaining irrelevant directions suffer from increasing variances as the diffusion proceeds (see, for example, BID31 ; BID15). In other words, we expect the "informative" λ i to remain fixed, while the irrelevant consistently grow, sub-linearly with time. Denote the set of "informative/relevant" directions as Λ * and the set of "non-informative" as Λ N I. Then our final limit, as the number of SGD steps grow, is DISPLAYFORM7. Note that the directions that are compressed and the ones that are preserved depend on the required compression level. This is the reason that different layers converge to different values of I(T k ; X). The analysis above suggests that the SGD compresses during the diffusion phase in many directions of the gradients. We argue that these directions are the ones in which the variance of the gradients is increasing (non-informative) whereas the information is preserved in the directions where the variance of the gradients remain small. This statement is consistent with recent (independent) work on the statistical properties of gradients and generalization. BID31;; BID40 showed that typically, the covariance matrix of the gradients is highly non-isotropic and that this is crucial for generalization by SGD. They suggested that the reason lies in the proximity of the gradients' covariance matrix to the Hessian of the loss approximation. Furthermore, it was argued by BID40; BID17 BID15 that SGD tends to converge to flat minima. 
These flat minima often correspond to a better generalization. BID40 emphasized that SGD converges to flat minima values characterized by high entropy due to the non-isotropic nature of the gradients' covariance and its alignment with the error Hessian at the minima. In other words, all of the finding above suggest that good generalization performance is typically characterized by non-isotropic gradients and Hessian, that are in orthogonal directions to the flat minimum of the training error objective. Our Gaussian bound on the representation compression allows us to relate the convergence time of the layer representation information, I(T k ; X), to the diffusion exponent α, defined in section 2.3.Denote the representation information at the diffusion time τ as I(X; T k)(τ). It follows from that I(X; T k)(τ) ≤ C + 1 2 DISPLAYFORM0 where C depends on the informative information for this layer, but not on τ.Notice that λ i (τ) are the singular values of the weights of a diffusion process, which grow as τ α where α is the diffusion exponent. Hence, DISPLAYFORM1 The change of weights, the SNR of the gradients, the MI and the Gaussian bound during the training for one layer. In log-log scale. Note that R depends solely on the problem, f (x) or p(y, x), and not on the architecture. The idea behind this argument is as follows -one can expand the function in any orthogonal basis (e.g. Fourier transform). The expansion coefficients determine both the dimensionality of the relevant/informative dimensions and the total trace of the irrelevant directions. Since these traces are invariant to the specific function basis, these traces remain the same when expanding the function in the network functions using the weights. Now, with K hidden layers, where each layer only needs to compress from the previous (compressed) layer, by ∆I k and the total compression is ∆I X = k ∆I k. Under these assumptions, even if the layers compress one after the other, the total compression time breaks down into K smaller steps, as at DISPLAYFORM2 if the ∆I k are similar, we obtain a super-linear boost in the computational time by a factor K 1 α. Since α ≤ 0.5 this is at least a quadratic boost in K. For ultra-slow diffusion we obtain an exponential boost (in K) in the convergence time to a good generalization. This is consistent with the observations reported by BID35. We now illustrate our in a series of experiments. We examine several different setups. MNIST dataset-In the first experiment, we evaluate the MNIST handwritten digit recognition task BID18. For this data set, we use a fully-connected network with 5 hidden layers of width 500 − 250 − 100 − 50 − 20, with an hyperbolic tangent (tanh) activation function. The relative low dimension of the network and the bounded activation function allow us to empirically measure the MI in the network. The MI is estimated by binning the neurons' output into the interval [−1, 1].The discretized values are then used to estimate the joint distributions and the corresponding MI, as described by BID35. Figure 1a depicts the norms of the weights, the signal-to-noise ratio (the ratio between the means of the gradients and their standard deviations), the compression rate I(X; T) and the Gaussian upper bound on I(X; T), as defined in. As expected, the two distinct phases correspond to the drift and diffusion phases. Further, these two phases are evident by independently observing the SNR, the change of the weights ||W (l) − W||, the MI and the upper bound. 
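The binned MI estimate used for the MNIST network above can be reproduced, in outline, with a few lines of NumPy. The sketch assumes the layer activations lie in [-1, 1] (tanh units), treats every distinct binned activation pattern as one discrete value of T, and estimates I(X;T) and I(T;Y) from empirical counts; the bin count, the names, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def entropy(labels):
    """Empirical Shannon entropy (bits) of a 1-D array of discrete labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def binned_information(activations, y, n_bins=30):
    """Estimate (I(X;T), I(T;Y)) by binning activations into [-1, 1].

    activations: (n_samples, n_neurons) outputs of one hidden layer (tanh range).
    y:           (n_samples,) discrete labels.
    """
    bins = np.linspace(-1, 1, n_bins + 1)
    digitized = np.digitize(activations, bins)
    # Each distinct binned activation pattern is one discrete value t of T.
    _, t_ids = np.unique(digitized, axis=0, return_inverse=True)
    h_t = entropy(t_ids)
    # T is a deterministic map of X and the inputs are distinct, so I(X;T) = H(T).
    i_xt = h_t
    # I(T;Y) = H(T) - H(T|Y), with H(T|Y) averaged over the label classes.
    h_t_given_y = sum(np.mean(y == c) * entropy(t_ids[y == c]) for c in np.unique(y))
    i_ty = h_t - h_t_given_y
    return i_xt, i_ty

# Illustrative usage on synthetic data (assumed shapes only):
rng = np.random.default_rng(2)
acts = np.tanh(rng.normal(size=(1000, 20)))
labels = rng.integers(0, 10, size=1000)
print(binned_information(acts, labels))
```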
In the first phase, the weights grow almost linearly with the iterations, the SNR of the gradients is high, and there is almost no change in the MI. Then, after the transition point (that accrues almost at the same iteration for all (a) CIFAR-10 (b) CIFAR-100Figure 2: Change in the SNR of the gradients and the Gaussian bound on the MI during the training of the network for one layer on ResNet-32, in log-log scale.the measures above), the weights behave as a diffusion process, and the SNR and the MI decrease remarkably. In this phase, there is also a clear-cut reduction of the bound. CIFAR-10 and CIFAR-100 -Next, we validate our theory on large-scale modern networks. In the second experiment we consider two large-scale data sets, CIFAR-10 and CIFAR-100. Here, we train a ResNet-32 network, using a standard architecture (including ReLU activation functions as described in BID11 . In this experiment we do not estimate the MI directly, due to the large scale of the problem. Figure 2 shows the SNR of the gradients and the Gaussian bound for one layer in CIFAR-10 and CIFAR-100 on the ResNet-32 network, averaged over 50 runs. Here, we observed similar behavior as reported in the MNIST experiment. Specifically, there is a clear distinction between the two phases and a reduction of the MI bound along the diffusion phase. Note that the same behavior was observed in most of the 32 layers in the network. Recently there have been several attempts to characterize the correspondence between the diffusion rate of the SGD and the size of the mini-batch BID14 BID13). In these articles, the authors claimed that a larger mini-batch size corresponds to a lower diffusion rate. Here, we examined the effect of the mini-batch size on the transition phase in the Information Plane. For each mini-batch size, we found both the starting point of the information compression and the gradient phase transition (the iteration where the derivative of the SNR is maximal). Figure 1b illustrates the . The X-axis is the iteration where the compression started, and the Y -axis is the iteration where the phase transition in the gradients accrued for different mini-batch sizes. There is a clear linear trend between the two. This further justifies our suggested model, since that the two measures are strongly related. Next, we validate our on the computational benefit of the layers. We train networks with a different number of layers (1-5 layers) and examine the iteration for which the network converge. Then, we find the α which fits the best trend K 1 α where K is the number of layers. FIG1 shows the for two data-sets -MNIST and the symmetric dataset from BID35. As our theory suggest, as we increase the number of layers, the convergence time decreases with a factor of k 1 α for different values of α. In this work we study DNNs using information-theoretic principles. We describe the training process of the network as two separate phases, as has been previously done by others. In the first phase (drift) we show that I(T k ; Y) increases, corresponding to improved generalization with ERM. In the second phase (diffusion), the representation information, I(X; T k) slowly decreases, while I(T K ; Y) continues to increase. We rigorously prove that the representation compression is a direct consequence of the diffusion phase, independent of the non-linearity of the activation function. We provide a new Gaussian bound on the representation compression and then relate the diffusion exponent to the compression time. 
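One plausible way to perform the power-law fit described above is to regress the logarithm of the measured convergence iteration on the logarithm of the number of hidden layers K and read the diffusion exponent off the slope, assuming convergence time scales as K^(-1/α). The numbers below are made up for illustration and do not come from the paper's experiments.

```python
import numpy as np

def fit_layer_boost(num_layers, convergence_iters):
    """Fit convergence_iters ≈ c * K**(-1/alpha) and return the implied alpha."""
    slope, _ = np.polyfit(np.log(num_layers), np.log(convergence_iters), deg=1)
    return -1.0 / slope

# Hypothetical measurements (illustrative only): deeper nets converge in fewer iterations.
layers = np.array([1, 2, 3, 4, 5])
iters = 20000 * layers ** (-1 / 0.3)   # synthetic data generated with alpha = 0.3
print("recovered alpha ≈", round(fit_layer_boost(layers, iters), 3))
```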
One key outcome of this analysis is a novel proof of the computational benefit of the hidden layers, where we show that they boost the overall convergence time of the network by at least a factor of K 2, where K is the number of non-degenerate hidden layers. This boost can be exponential in the number of hidden layers if the diffusion is "ultra slow", as recently reported.1 m m i=1 h (x i, y i) be the empirical error. Hoeffding's inequality BID12 shows that for every h ∈ H, DISPLAYFORM0 Then, we can apply the union bound and conclude that DISPLAYFORM1 We want to control the above probability with a confidence level of δ. Therefore, we ask that 2 H exp −2 2 m ≤ δ. This leads to a PAC bound, which states that for a fixed m and for every h ∈ H, we have with probability 1 − δ that DISPLAYFORM2 Note that under the definitions stated in Section 1.1, we have that |H| ≤ 2 X. However, the PAC bound above also holds for a infinite hypotheses class, where log |H| is replaced with the VC dimension of the problem, with several additional constants BID38 BID34 BID32.Let us now assume that X is a d-dimensional random vector which follows a Markov random field structure. As stated above, this means that p(x i) = i p(x i |P a(x i)) where P a(X i) is a set of components in the vector X that are adjacent to X i. Assuming that the Markov random field is ergodic, we can define a typical set of realizations from X as a set that satisfies the Asymptotic Equipartition Property (AEP) BID6. Therefore, for every > 0, the probability of a sequence drawn from X to be in the typical set A is greater than 1 − and |A | ≤ 2 H(X)+. Hence, if we only consider a typical realization of X (as opposed to every possible realization), we have that asymptotically H ≤ 2 H(X). Finally, let T be a mapping of X. Then, 2 H(X|T) is the number of typical realizations of X that are mapped to T. This means that the size of the typical set of T is bounded from above by 2 H(X) 2 H(X|T) = 2 I(X;T). Plugging this into the PAC bound above yields that with probability 1 − δ, the typical squared generalization error of T,
Introduce an information-theoretic viewpoint on the behavior of deep network optimization processes and their generalization abilities.
1,766
scitldr
We propose and study a method for learning interpretable representations for the task of regression. Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions. Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation. The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation. We compare several stochastic optimization approaches within this framework. We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches. Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method (gradient boosting). We also report a negative in which attempts to directly optimize the disentanglement of the representation in more highly correlated features. The performance of a machine learning (ML) model depends primarily on the data representation used in training BID3, and for this reason the representational capacity of neural networks (NN) is considered a central factor in their success in many applications BID19. To date, there does not seem to be a consensus on how the architecture should be designed. As problems grow in complexity, the networks proposed to tackle them grow as well, leading to an intractable design space. One design approach is to tune network architectures through network hyperparameters using grid search or randomized search BID4 with cross validation. Often some combination of hyperparameter tuning and manual design by expertise/intuition is done BID19. Many approaches to network architecture search exist, including weight sharing BID53 and reinforcement learning BID70. Another potential solution explored in this work (and others) is to use population-based stochastic optimization (SO) methods, also known as metaheuristics BID44. In SO, several candidate solutions are evaluated and varied over several iterations, and heuristics are used to select and update the candidate networks until the population produces a desirable architecture. The general approach has been studied at least since the late 80s in various forms BID45 BID69 BID60 for NN design, with several recent applications BID55 BID28 BID9 BID54.In practice, the adequacy of the architecture is often dependent on conflicting objectives. For example, interpretability may be a central concern, because many researchers in the scientific community rely on ML models not only to provide predictions that match data from various processes, but to provide insight into the nature of the processes themselves. Approaches to interpretability can be roughly grouped into semantic and syntactic approaches. Semantic approaches encompass methods that attempt to elucidate the behavior of a model under various input conditions as a way of explanation (e.g. BID56). Syntactic methods instead focus on the development of concise models that offer insight by virtue of their simplicity, in a similar vein to models built from first-principles (e.g. BID63 BID57). Akin to the latter group, our goal is to discover the simplest description of a process whose predictions generalize as well as possible. 
Good representations should also disentangle the factors of variation BID3 in the data, in order to ease model interpretation. Disentanglement implies functional modularity; i.e., sub-components of the network should encapsulate behaviors that model a sub-process of the task. In this sense, stochastic methods such as evolutionary computation (EC) appear well-motivated, as they are premised on the identification and propagation of building blocks of solutions BID23. Experiments with EC applied to networks suggest it pressures networks to be modular BID24 BID29. Although the identification functional building blocks of solutions sounds ideal, we have no way of knowing a priori whether a given problem will admit the identification of building blocks of solutions via heuristic search BID49. Our goal in this paper is thus to empirically assess the performance of several SO approaches in a system designed to produce intelligible representations from NN building blocks for regression. In Section 2, we introduce a new method for optimizing representations that we call the feature engineering automation tool (FEAT) 1. The purpose of this method is to optimize an archive of representations that characterize the trade-off between conciseness and accuracy among representations. Algorithmically, two aspects of the method distinguish FEAT from previous work. First, it represents the internal structure of each NN as a set of syntax trees, with the goal of improving the transparency of the ant representations. Second, it uses weights learned via gradient descent to provide feedback to the variation process at a more granular level. We compare several multi-objective variants of this approach using EC and non-EC methods with different sets of objectives. We discuss related work in more detail in Section 3. In section 4 and 5, we describe and conduct an experiment that benchmarks FEAT against state-of-the-art ML methods on 100 open-source regression problems. Future work based on this analysis is discussed in Section 6, and additional detailed are provided in the Appendix. We are interested in the task of regression, for which the goal is to build a predictive modelŷ(x) using DISPLAYFORM0. The regression modelŷ(x) associates the inputs x ∈ R d with a real-valued output y ∈ R. The goal of feature engineering / representation learning is to find a new representation of x via a m-dimensional feature mapping φ(x): R d → R m, such that a model y(φ(x)) outperforms the modelŷ(x). We will assume that each predictor in x is scaled to zero mean, unit-variance. When applying a NN to a traditional ML task like regression or classification, a fixed NN architecture φ(x, θ), parameterized by θ, is chosen and used to fit a model DISPLAYFORM1 T is a NN representation with m nodes in the final hidden layer and a linear output layer with estimated coefficientsβ = [β 1 . . .β m]T. Typically the problem is then cast as a parameter optimization problem that minimizes a loss function via gradient descent. In order to tune the structure of the representation, we instead wish to solve the joint optimization problem DISPLAYFORM2 whereφ(x,θ) is chosen to minimize a cost function L, with global optimum φ * (x, θ *). (L may depend on θ and β in the case of regularization.) S is the space of possible representations realizable by the search procedure, and φ * is the true structure of the process underlying the data. The assumption of SO approaches such as evolutionary computation (EC) and simulated annealing (SA) BID11. 
FEAT uses a typical µ + λ evolutionary updating scheme, where µ = λ = P. The method optimizes a population of potential representations, N = {n 1 . . . n P}, where n is an "individual" in the population, iterating through these steps:1. Fit a linear modelŷ = x Tβ. Create an initial population N consisting of this initial representation, φ = x, along with P − 1 randomly generated representations that sample x proportionally toβ. (a) Select parents P ⊆ N using a selection algorithm. (b) Apply variation operators to parents to generate P offspring O; N = N ∪ O (c) Reduce N to P individuals using a survival algorithm.3. Select and return n ∈ N with the lowest error on a hold-out validation set. Individuals are evaluated using an initial forward pass, after which each representation is used to fit a linear model (Eqn. 1) using ridge regression BID22. The weights of the differentiable features in the representation are then updated using stochastic gradient descent. The salient aspect of the proposed method is its use of syntax trees to represent the internal architecture of the network, as shown in FIG0. FEAT constructs these trees from elementary boolean-and continuousvalued functions and literals (see TAB0). This scheme is inspired by symbolic regression (SR) BID34. In contrast to typical SR, each individual n is a set of such trees, the output of which is interpreted as a candidate representation, i.e. φ(x) = [φ 1 . . . φ m] for an individual with m trees. The second difference from traditional SR is that the weights of differentiable nodes are encoded in the edges of the graph, rather than as independent nodes. We include instructions typically used as activation functions used in NN, e.g. tanh, sigmoid, logit and relu nodes, elementary arithmetic and boolean operators. Although a fully connected feedforward NN could be represented by this construction, representations in FEAT are biased to be thinly connected. Our hypothesis is that by doing so, we will improve the representation's legibility without sacrificing its capacity for modelling nonlinear relationships. During variation, the representations are perturbed using a set of mutation and crossover methods. FEAT chooses among 6 variation operators that are as follows. Point mutation changes a node type to a random one with matching output type and arity. Insert mutation replaces a node with a randomly generated depth 1 subtree. Delete mutation removes a feature or replaces a sub-program with an input node, with equal probability. Insert/Delete dimension adds or removes a new feature. Sub-tree crossover replaces a sub-tree from one parent with the sub-tree of another parent. Dimension crossover swaps two features between parents. The exact probabilities of each variation operator will affect the performance of the algorithm, and others have proposed methods for adjusting these probabilities, e.g. BID26. For the purposes of our study, we use each operator with uniform probability. Feedback The use of an ML model to assess the fitness of each representation can be used to provide information about the elements of the representation that should be changed. In particular, we assume that programs in the representation with small coefficients are the best candidates for mutation and crossover. 
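As a minimal sketch of the evaluation step, the code below applies a candidate representation (here just a list of Python callables standing in for the expression trees) to the input matrix, fits the ridge regression model of Eqn. 1, and returns the coefficient magnitudes that drive the feedback described next. The callables, names, and data are illustrative assumptions; FEAT's actual trees also carry trainable edge weights updated by gradient descent, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import Ridge

def evaluate_representation(feature_programs, X, y, alpha=1e-3):
    """Fit y ≈ phi(X) @ beta with ridge regression for one candidate representation.

    feature_programs: list of callables, each mapping an (n, d) matrix to an (n,) feature.
    Returns (training MSE, |beta|): the fitness and the feedback signal.
    """
    Phi = np.column_stack([f(X) for f in feature_programs])
    model = Ridge(alpha=alpha).fit(Phi, y)
    mse = np.mean((model.predict(Phi) - y) ** 2)
    return mse, np.abs(model.coef_)

# Illustrative candidate: three small "trees" over a toy dataset.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=200)
programs = [lambda X: np.tanh(X[:, 0]),
            lambda X: X[:, 1] * X[:, 2],
            lambda X: np.maximum(X[:, 3], 0.0)]   # relu-style feature
mse, coef_mags = evaluate_representation(programs, X, y)
print("training MSE:", round(mse, 4), " |beta|:", np.round(coef_mags, 3))
```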
With this in mind, let n be an m-dimensional candidate representation with associated coefficients DISPLAYFORM0 The probability of mutation for tree i in n is denoted P M i (n), and defined as follows: DISPLAYFORM1 The normalized coefficient magnitudesβ ∈ are used to define softmax-normalized probabilities, s in Eqn. 3. The smaller the coefficient, the higher the probability of mutation. The parameter f is used to control the amount of feedback used to weight the probabilities; 1 m in this case represents uniform probability. Among nodes in tree m, mutation occurs with uniform probability. This weighting could be extended for differentiable nodes by weighting the within-tree probabilities by the magnitude of the weights associated with each node. However we expect this would yield diminishing returns. The selection step selects among P parents those representations that will be used to generate offspring. Following variation, the population consists of 2P representations of parents and offspring. The survival step is used to reduce the population back to size P, at which point the generation is finished. In our initial study, we empirically compared five algorithms for selection and survival: 1) -lexicase selection (Lex) BID39, 2) non-dominated sorting genetic algorithm (NSGA2) BID11, 3) a novel hybrid algorithm using Lex for selection and NSGA2 for survival, 4) simulated annealing BID31, and 5) random search. These comparisons are described in Appendix Section A.2. We found that the hybrid algorithm performed the best; it is described below. Parents are selected using Lex. Lex was proposed for regression problems BID39 BID40 as an adaption of lexicase selection BID58 for continuous domains. Under -lexicase selection, parents are chosen by filtering the population according to randomized orderings of training samples with the threshold defined relative to the sample loss among the selection pool. This filtering strategy scales probability of selection for an individual based on the difficulty of the training cases the individual performs well on. Lex has shown strong performance among SR methods in recent tests, motivating our interest in studying it BID50. The survival step for Lex just preserves offspring plus the best individual in the population. Survival is conducted using the survival sub-routine of NSGA2, a popular strategy for multi-objective optimization BID11. NSGA2 applies preference for survival using Pareto dominance relations. An individual (n i) is said to dominate another (n j) if, for all objectives, n i performs at least as well as n j, and for at least one objective, n i strictly outperforms n j. The Pareto front is the set of individuals in N that are non-dominated in the population and thus represent optimal trade-offs between objectives found during search. Individuals are assigned a Pareto ranking that specifies the number of individuals that dominate them, thereby determining their proximity to the front. The survival step of NSGA2 begins by sorting the population according to their Pareto front ranking and choosing the lowest ranked individuals for survival. To break rank ties, NSGA2 assigns each individual a crowding distance measure, which quantifies an individual's distance to its two adjacent neighbors in objective space. If a rank level does not completely fit in the survivor pool, individuals of that rank are sorted by highest crowding distance and added in order until P individuals are chosen. 
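A compact sketch of ε-lexicase parent selection as described above: training cases are shuffled, and the candidate pool is filtered case by case, keeping only individuals within ε of the best error on that case, where ε is taken here as the median absolute deviation of the pool's errors on that case. The exact ε variant and tie-handling may differ from FEAT's implementation; names and data are illustrative.

```python
import numpy as np

def epsilon_lexicase_select(error_matrix, rng):
    """Select one parent index. error_matrix[i, j] = error of individual i on training case j."""
    pool = np.arange(error_matrix.shape[0])
    cases = rng.permutation(error_matrix.shape[1])
    for j in cases:
        errs = error_matrix[pool, j]
        eps = np.median(np.abs(errs - np.median(errs)))   # MAD-based threshold
        pool = pool[errs <= errs.min() + eps]
        if len(pool) == 1:
            break
    return rng.choice(pool)

rng = np.random.default_rng(4)
errors = rng.random((20, 100))   # 20 individuals, 100 training cases (illustrative)
parents = [epsilon_lexicase_select(errors, rng) for _ in range(5)]
print("selected parent indices:", parents)
```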
We consider three objectives in our study corresponding to three goals: first, to reduce model error; second, to minimize complexity of the representation; and third, to minimize the entanglement of the representation. We test the third objective using two different metrics: the correlation of the transformation matrix φ(x) and its condition number. These metrics are defined below. The first objective always corresponds to the mean squared loss function for individual n, and the second corresponds to the complexity of the representation. There are many ways to define complexity of an expression; one could simply look at the number of operations in a representation, or look at the behavioral complexity of the representation using a polynomial order BID66 ). The one we use, which is similar to that used by BID33, is to assign a complexity weight to each operator (see TAB0), with higher weights assigned to operators considered more complex. If the weight of operator o is c o, then the complexity of an expression tree beginning at node o is defined recursively as DISPLAYFORM0 where o has k arguments, and C(a) is the complexity of argument a. The complexity of a representation is then defined as the sum of the complexities of its output nodes. The goal of defining complexity in such a way is to discourage deep sub-expressions within complex nodes, which are often hard to interpret. It's important to note that the choice of operator weights is bound to be subjective, since we lack an objective notion of interpretability. For this reason, although we use Eqn. 4 to drive search, our experimental comparisons with other algorithms rely on the node counts of the final models for benchmarking interpretability of different methods. Disentanglement is a term used to describe the notion of a representation's ability to separate factors of variation in the underlying process BID3. Although a thorough review is beyond the scope of this section, there is a growing body of literature addressing disentanglement, primarily with unsupervised learning and/or image analysis BID46 BID67 BID21 BID17 BID20 BID36. There are various ways to quantify disentanglement. For instance, Brahma et al. FORMULA2 proposed measuring disentanglement as the difference between geodesic and Euclidean distances among points on a manifold (i.e. training instances). If the latent structure is known, the information-theoretic metrics proposed by BID13 may be used. In the case of regression, a disentangled representation ideally contains a minimal set of features, each corresponding to a separate latent factor of variation, and each orthogonal to each other. In this regard, we attempt to minimize the collinearity between features in φ as a way to promote disentanglement. We tested two measurements of collinearity (a.k.a. multicollinearity) in the derived feature space. The first is the average squared Pearson's correlation among features of φ, i.e., DISPLAYFORM1 Eqn. 5 is relatively inexpensive to compute but only captures bivariate correlations in φ. As a we also test the condition number (CN). Consider the N × m representation matrix Φ. The CN of Φ is defined as DISPLAYFORM2 where µ max and µ min are the largest and smallest singular values of Φ. Unlike Corr, CN can capture higher-order dependencies in the representation. CN is also related directly to the sensitivity of Φ to perturbations in the training data BID2 BID8, and thus captures a notion of network invariance explored in previous work by BID18. 
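The two entanglement measures of Eqns. 5-6 are straightforward to compute; the sketch below evaluates the mean squared pairwise Pearson correlation among the m features of Φ and its condition number (ratio of largest to smallest singular value). Function names and the toy data are illustrative.

```python
import numpy as np

def mean_squared_correlation(Phi):
    """Average squared Pearson correlation over all pairs of columns of Phi (Eqn. 5)."""
    m = Phi.shape[1]
    corr = np.corrcoef(Phi, rowvar=False)
    iu = np.triu_indices(m, k=1)
    return np.mean(corr[iu] ** 2)

def condition_number(Phi):
    """Ratio of largest to smallest singular value of the representation matrix (Eqn. 6)."""
    singular_values = np.linalg.svd(Phi, compute_uv=False)
    return singular_values.max() / singular_values.min()

# Illustrative usage: a nearly collinear representation scores badly on both measures.
rng = np.random.default_rng(5)
base = rng.normal(size=(500, 1))
Phi = np.hstack([base, base + 0.01 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
print("Corr:", round(mean_squared_correlation(Phi), 3), " CN:", round(condition_number(Phi), 1))
```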
We consider another common measure of multicollinearity, the variance inflation factor , to be too expensive for our purposes. The idea to evolve NN architectures is well established in literature, and is known as neuroevolution. Popular methods of neuroevolution include neuroevolution of augmenting topologies (NEAT BID60 and Hyper-NEAT BID61, and compositional pattern producing networks BID59 . The aforementioned approaches eschew the parameter learning step common in other NN paradigms, although others have developed integrations BID14 . In addition, they have been developed predominantly for other task domains such as robotics and control BID16, image classification BID55 BID54, and reinforcement learning BID25 BID9 . Reviews of these methods are available BID15 BID62 .Most neuroevolution strategies do not have interpretability as a core focus, and thus do not attempt to use multi-objective methods to update the networks. An exception is the work of BID68, in which a template NN was optimized using a multi-objective EC method with size as an objective. In this case, the goal was to reduce computational complexity in face detection. Neuroevolution is a part of a broader research field of neural architecture search (NAS) BID70 BID42 BID43. NAS methods vary in approach, including for example parameter sharing BID53, sequential model-based optimization BID43, reinforcement learning BID70, and greedy heuristic strategies BID10.FEAT is also related to SR approaches to feature engineering BID35 BID0 BID41 BID47 ) that use EC to search for possible representations and couple with an ML model to handle the parametrization of the representations. SR methods have been successful in developing intelligible models of physical systems BID57 BID38. FEAT differs from these methods in the following ways. A key challenge in SR is understanding functional modularity within representations/programs that can be exploited for search. FEAT is designed with the insight that ML weights can be leveraged during variation to promote functional building blocks, an exploit not used in previous methods. Second, FEAT uses multiple type representations, and thus can learn continuous and rule-based features within a single representation, unlike previous methods. This is made possible using a stack-based encoding with strongly-typed operators. Finally, FEAT incorporates two elements of NN learning to improve its representational capacity: activation functions commonly used in NN and edge-based encoding of weights. Traditionally, SR operates with standard mathematical operators, and treats constants as leaves in the expression trees rather than edge weights. An exception is MRGP BID0, which encodes weights at each node but updates them via Lasso instead of using gradient descent with backpropagation. SR methods have also been paired with various parameter learning strategies, including those based on backpropagation BID64 BID32 BID27. It should be noted that non-stochastic methods for SR exist, such as mixed integer non-linear programming, which has been demonstrated for small search spaces BID1. Our goals with the experiment are to 1) robustly compare FEAT to state-of-the-art regression methods, including hyperparameter optimization of feedforward NNs; 2) characterize the complexity of the models; and 3) assess whether disentanglement objectives lead to less correlated representations. For the regression datasets, we use 100 real-world and simulated datasets available from OpenML (BID65 . 
The datasets are characterized in terms of number of features and sample sizes in FIG3 . We use the standardized versions of the datasets available in the Penn Machine Learning Benchmark repository BID48 . We compare the FEAT configurations to multi-layer perceptron (MLP), random forest (RF) regression, kernel ridge (KernelRidge) regression, and elastic net (ElasticNet) regression, using implementations from scikit-learn BID52. In addition, we compare to XGBoost (XGB), a gradient boosting method that has performed well in recent competitions BID6. Code to reproduce these experiments is available online. Each method's hyperparameters are tuned according to TAB1 in Appendix A.1. For FEAT, we limit optimization to 200 iterations or 60 minutes, whichever comes first. We also stop sooner if the median validation fitness stops improving. For each method, we use grid search to tune the hyperparameters with 10-fold cross validation (CV). We use the R 2 CV score for assessing performance. In our we report the CV scores for each method using its best hyperparameters. The algorithms are ranked on each dataset using their median CV score over 5 randomized shuffles of the dataset. For comparing complexity, we count the number of nodes in the final model produced by each method for each trial on each dataset. To quantify the "entanglement" of the feature spaces, we report Eqn. 5 in the raw data and in the final hidden layer of FEAT and MLP models. We also test two additional versions of Feat, denoted FeatCorr and FeatCN, that include a third objective corresponding to Eqn. 5 and 6, respectively. Finally, we examine the FEAT in detail for one of the benchmark datasets. For this dataset we plot the final population of models, illustrate model selection and compare the ant features to from linear and ensemble tree-based . This gives practical insight into the method and provides a sense of the intelligibility of an example representation. The score statistics for each method are shown in Fig. 3 across all datasets. Full statistical comparisons are reported in Appendix A.3. Over all, FEAT and XGBoost produce the best predictive performance across datasets without significant differences between the two (p=1.0). FEAT significantly outperforms MLP, RF, KernelRidge and ElasticNet (p ≤1.18e-4), as does XGBoost (p ≤1.6e-3).As measured by the number of nodes in the final solutions, the models produced by FEAT are significantly less complex than XGBoost, RF, and MLP, as shown in FIG0 ). FEAT's final models tend to be within 1 order of magnitude of the linear models (ElasticNet), and 2-4 orders of magnitude smaller than the other non-linear methods. A comparison of wall-clock time is given in FIG5 in the appendix. FEAT and MLP take approximately the same time to run, followed by XGBoost, RF, KernelRidge, and ElasticNet, in that order. FIG4 shows the average pairwise correlations of the representations produced by Feat variants and MLP in comparison to the correlation structure of the original data. In general, MLP and FEAT tend to produce correlated feature spaces, and Feat's representations tend to contain more bivariate correlations than MLP. Furthermore, the suggest that explicitly minimizing collinearity DISPLAYFORM0 64.8 Figure 6: (Left) Representation archive for the visualizing galaxies dataset. (Right) Selected model and its weights. Internal weights omitted.(FeatCorr and FeatCN) tends to produce representations that exhibit equivalent or higher levels of correlation. 
This conflicts with our hypothesis, and is discussed more in Section 6. This paper proposes a feature engineering archive tool that optimizes neural network architectures by representing them as syntax trees. FEAT uses model weights as feedback to guide network variation in an EC optimization algorithm. We conduct a thorough analysis of this method applied to the task of regression in comparison to state-of-the-art methods. The suggest that FEAT achieves state-of-the-art performance on regression tasks while producing representations that are significantly less complex than those ing from similarly performing methods. This improvement comes at an additional computational cost, limited in this study to 60 minutes per training instance. We expect this limitation to be reasonable for many applications where intelligibility is the prime motivation. Future work should consider the issue of representation disentanglement in more depth. Our attempts to include additional search objectives that explicitly minimize multicollinearity were not successful. Although more analysis is needed to confirm this, we suspect that the model selection procedure (Section 2.1, step 3) permits highly collinear representations to be chosen. This is because multicollinearity primarily affects the standard errors ofβ BID2, and is not necessarily detrimental to validation error. Therefore it could be incorrect to expect the model selection procedure to effectively choose more disentangled representations. Besides improving the model selection procedure, it may be fruitful to pressure disentanglement at other stages of the search process. For example, the variation process could prune highly correlated features, or the disentanglement metric could be combined with error into a single loss function with a tunable parameter. We hope to pursue these ideas in future studies. This work was supported by NIH grants AI116794 and LM012601.A APPENDIX A.1 ADDITIONAL EXPERIMENT INFORMATION TAB1 details the hyperparameters for each method used in the experimental described in Sections 4 and 5.Runs are conducted in a heterogenous computing environment, with one core assigned to each CV training per dataset. As such, wall-clock times are a flawed measure of computational complexity. With this caveat in mind, we report the wall-clock run times for each method in FIG5. The Feat variants are terminated at 200 generations or 60 minutes, which explains their uniformity. Note that methods are unable to take advantage of parallelization in this experiment. Our initial analysis sought to determine how different SO approaches performed within this framework. We tested five methods: 1) NSGA2, 2) Lex, 3) LexNSGA2, 4) Simulated annealing, and 5) random search. The simulated annealing and random search approaches are described below. Simulated annealing Simulated annealing (SimAnn) is a non-evolutionary technique that instead models the optimization process on the metallurgical process of annealing. In our implementation, offspring compete with their parents; in the case of multiple parents, offspring compete with the program with which they share more nodes. The probability of an offspring replacing its parent in the population is given by the equation DISPLAYFORM0 The probability of offspring replacing its parent is a function of its fitness, F, in our case the mean squared loss of the candidate model. In Eqn. 7, t is a scheduling parameter that controls the rate of "cooling", i.e. 
the rate at which steps in the search space that are worse are tolerated by the update rule. In accordance with BID31, we use an exponential schedule for t, defined as t g = (0.9) g t 0, where g is the current generation and t0 is the starting temperature. t0 is set to 10 in our experiments. Random search We compare the selection and survival methods to random search, in which no assumptions are made about the structure of the search space. To conduct random search, we randomly sample S using the initialization procedure. Since FEAT begins with a linear model of the process, random search will produce a representation at least as good as this initial model on the internal validation set. A note on archiving When FEAT is used without a complexity-aware survival method (i.e., with Lex, SimAnn, Random), a separate population is maintained that acts as an archive. The archive maintains a Pareto front according to minimum loss and complexity (Eqn 4). At the end of optimization, the archive is tested on a small hold-out validation set. The individual with the lowest validation loss is the final selected model. Maintaining this archive helps protect against overfitting ing from overly complex / high capacity representations, and also can be interpreted directly to help understand the process being modelled. We benchmarked these approaches in a separate experiment on 88 datasets from PMLB BID48. The are shown in FIG0. FIG6, we see that LexNSGA2 achieves the best average R 2 value while producing small solutions in comparison to Lex. NSGA2, SimAnneal, and Random search all produce less accurate models. The runtime comparisons of the methods in FIG0 show that they are mostly within an order of magnitude, with NSGA2 being the fastest (due to its maintenance of small representations) and Random search being the slowest, suggesting that it maintains large representations during search. The computational behavior of Random search suggests the variation operators tend to increase the average size of solutions over many iterations. We perform pairwise comparisons of methods according to the procedure recommended by BID12 for comparing multiple estimators. In TAB3, the CV R 2 rankings are compared. In 1.00e+00 KernelRidge 1.16e-09* 1.18e-04* 4.37e-03* 1.14e-03* MLP 5.24e-09* 3.80e-04* 2.08e-02* 1.28e-03* 1.00e+00 RF 1.08e-09* 2.09e-07* 2.19e-05* 1.30e-06* 1.00e+00 1.00e+00 XGB 1.47e-13* 1.00e+001.00e+00 1.00e+00 3.41e-04* 1.60e-03* 8.49e-13* 1.58e-07* MLP 6.24e-18* 4.26e-17* 3.09e-17* 3.98e-17* RF 9.28e-20* 2.05e-17* 5.61e-18* 2.37e-17* 3.54e-17* XGB 9.14e-18* 4.05e-17* 2.46e-17* 3.71e-17* 1.00e+00 3.94e-18*
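Since the display for Eqn. 7 is garbled above, the sketch below uses a standard Metropolis-style acceptance rule — always accept an improving offspring and accept a worse one with probability exp(-(F_offspring - F_parent)/t) — together with the exponential schedule t_g = 0.9^g · t_0 stated in the text. The precise form of FEAT's Eqn. 7 may differ, so treat the acceptance rule as an assumption.

```python
import numpy as np

def accept_offspring(f_offspring, f_parent, t, rng):
    """Metropolis-style acceptance: better offspring always replace the parent;
    worse offspring replace it with probability exp(-(f_offspring - f_parent) / t)."""
    if f_offspring <= f_parent:
        return True
    return rng.random() < np.exp(-(f_offspring - f_parent) / t)

def temperature(generation, t0=10.0):
    """Exponential cooling schedule t_g = 0.9**g * t0 used in the experiments."""
    return (0.9 ** generation) * t0

rng = np.random.default_rng(6)
for g in [0, 10, 50]:
    t = temperature(g)
    p = np.exp(-0.5 / t)   # chance of accepting an offspring that is worse by 0.5
    print(f"generation {g:2d}: t = {t:7.4f}, acceptance prob. for +0.5 loss ≈ {p:.3f}")
```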
Representing the network architecture as a set of syntax trees and optimizing their structure leads to accurate and concise regression models.
1,767
scitldr
Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication. In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale. Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments. In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates. Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature. The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate of O(1/\sqrt{T}). With the advent of big data and complex models, there is a growing body of works on scaling machine learning under synchronous and non-synchronous 1 distributed execution BID8 BID11 BID29. These works, however, point to seemingly contradictory on whether non-synchronous execution outperforms synchronous counterparts in terms of absolute convergence, which is measured by the wall clock time to reach the desired model quality. For deep neural networks, BID2; BID8 show that fully asynchronous systems achieve high scalability and model quality, but others argue that synchronous training converges faster BID1 BID5. The disagreement goes beyond deep learning models:; BID49; BID26; BID31; BID41 empirically and theoretically show that many algorithms scale effectively under non-synchronous settings, but BID36;; demonstrate significant penalties from asynchrony. The crux of the disagreement lies in the trade-off between two factors contributing to the absolute convergence: statistical efficiency and system throughput. Statistical efficiency measures convergence per algorithmic step (e.g., a mini-batch), while system throughput captures the performance of the underlying implementation and hardware. Non-synchronous execution can improve system throughput due to lower synchronization overheads, which is well understood BID1 BID4 BID2. However, by allowing various workers to use stale versions of the model that do not always reflect the latest updates, non-synchronous systems can exhibit lower statistical efficiency BID1 BID5. How statistical efficiency and system throughput trade off in distributed systems, however, is far from clear. The difficulties in understanding the trade-off arise because statistical efficiency and system throughput are coupled during execution in distributed environments. Non-synchronous executions are in general non-deterministic, which can be difficult to profile. Furthermore, large-scale experiments 2 RELATED WORK Staleness is reported to help absolute convergence for distributed deep learning in BID2; BID8; and has minimal impact on convergence BID31 BID6 BID51 BID32. But BID1; BID5 show significant negative effects of staleness. LDA training is generally insensitive to staleness BID44 BID47 BID7, and so is MF training BID48 BID33 BID4 BID49. However, none of their evaluations quantifies the level of staleness in the systems. By explicitly controlling the staleness, we decouple the distributed execution, which is hard to control, from ML convergence outcomes. 
We focus on algorithms that are commonly used in large-scale optimization BID11 BID1 BID8, instead of methods specifically designed to minimize synchronization BID39 BID43 BID20. Non-synchronous execution has theoretical underpinnings BID30 BID49 BID31 BID41. Here we study algorithms that do not necessarily satisfy the assumptions in those analyses. We study six ML models and focus on algorithms that lend themselves to data parallelism, which is a primary approach for distributed ML. Our algorithms span optimization, sampling, and black box variational inference. TAB1 summarizes the studied models, algorithms BID40 BID9 BID23 BID16 BID12, and datasets BID24 BID34 BID27 BID14 BID42 in our study; there, η denotes the learning rate, which, if not specified, is tuned empirically for each algorithm and staleness level, β1, β2 are optimization hyperparameters (using common default values), and α, β in LDA are the Dirichlet priors for the document-topic and word-topic random variables, respectively. Simulation Model. Each update generated by worker p needs to be propagated both to worker p's own model cache and to the other workers' model caches. We apply a uniformly random delay model to these updates while they are in transit: for each update and for each receiving worker (including the sender itself), a random delay is applied before the update reaches that worker's model cache. Convolutional Neural Networks (CNNs) have been a strong focus of large-scale training, both under synchronous BID11 BID5 BID3 and non-synchronous BID2 BID8 BID1 training. We consider residual networks with 6n + 2 weight layers BID15. The networks consist of 3 groups of n residual blocks, with 16, 32, and 64 feature maps in each group, respectively, followed by a global pooling layer and a softmax layer. The residual blocks have the same construction as in BID15. We measure the model quality using test accuracy. For simplicity, we omit data augmentation in our experiments. Deep Neural Networks (DNNs) are neural networks composed of fully connected layers. Our DNNs have 1 to 6 hidden layers, with 256 neurons in each layer, followed by a softmax layer. We use rectified linear units (ReLU) for nonlinearity after each hidden layer BID38. Multiclass Logistic Regression (MLR) is the special case of a DNN with 0 hidden layers. We measure the model quality using test accuracy. Matrix factorization (MF) is commonly used in recommender systems and has been implemented at scale BID48 BID33 BID4 BID49 BID22. Let D ∈ R^{M×N} be a partially filled matrix; MF factorizes D into two factor matrices L ∈ R^{M×r} and R ∈ R^{N×r} (r ≪ min(M, N) is the user-defined rank). The ℓ2-penalized optimization problem is: DISPLAYFORM0 where ‖·‖_F is the Frobenius norm and λ is the regularization parameter. We partition the observations in D to workers while treating L, R as shared model parameters. We optimize MF via SGD, and measure model quality by the training loss defined by the objective function above. Latent Dirichlet Allocation (LDA) is an unsupervised method to uncover hidden semantics ("topics") from a group of documents, each represented as a bag of tokens. LDA has been scaled under non-synchronous execution BID0 BID33 BID47 with great success. Further details are provided in the Appendix. Variational Autoencoder (VAE) is commonly optimized by black box variational inference, which can be considered a hybrid of optimization and sampling methods. The inputs to VAE training include two sources of stochasticity: the sampling of the data x and the sampling of the latent random variable. We measure the model quality by test loss.
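As an illustration of the two sources of stochasticity in VAE training just mentioned (mini-batch sampling of x and sampling of the latent variable via the reparameterization trick), here is a minimal numpy sketch; the linear encoder/decoder and Gaussian likelihood are stand-ins for the DNN encoders and decoders described next, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_x, d_z = 1000, 20, 4
X = rng.normal(size=(n, d_x))                       # toy dataset

# Linear "encoder"/"decoder" weights, stand-ins for the DNN encoders/decoders.
W_mu = 0.1 * rng.normal(size=(d_x, d_z))
W_logvar = 0.1 * rng.normal(size=(d_x, d_z))
W_dec = 0.1 * rng.normal(size=(d_z, d_x))

def elbo_estimate(batch):
    mu, logvar = batch @ W_mu, batch @ W_logvar
    eps = rng.normal(size=mu.shape)                 # stochasticity source 2: sampling z
    z = mu + np.exp(0.5 * logvar) * eps             # reparameterization trick
    recon = z @ W_dec
    recon_ll = -0.5 * np.sum((batch - recon) ** 2, axis=1)              # Gaussian likelihood
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)  # KL to N(0, I)
    return float(np.mean(recon_ll - kl))

batch = X[rng.integers(0, n, size=32)]              # stochasticity source 1: sampling x
print("ELBO estimate on one mini-batch:", elbo_estimate(batch))
```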
We use DNNs with 1∼3 layers as the encoders and decoders in VAE, in which each layer has 256 units furnished with rectified linear function for non-linearity. The model quality is measured by the training objective value, assuming continuous input x and isotropic Gaussian prior p(z) ∼ N (0, I). We use batch size 32 for CNNs, DNNs, MLR, and VAEs 34. For MF, we use batch size of 25000 samples, which is 2.5% of the MovieLens dataset (1M samples). We study staleness up to s = 50 on 8 workers, which means model caches can miss updates up to 8.75 data passes. For LDA we use DISPLAYFORM0 as the batch size, where D is the number of documents and P is the number of workers. We study staleness up to s = 20, which means model caches can miss updates up to 2 data passes. We measure time in terms of the amount of work performed, such as the number of batches processed. Convergence Slowdown. Perhaps the most prominent effect of staleness on ML algorithms is the slowdown in convergence, evident throughout the experiments. FIG1 shows the number of batches needed to reach the desired model quality for CNNs and DNNs/MLR with varying network depths and different staleness (s = 0, ..., 16). FIG1 (d) show that convergence under higher level of staleness requires more batches to be processed in order to reach the same model quality. This additional work can potentially be quite substantial, such as in FIG1 where it takes up to 6x more batches compared with settings without staleness (s = 0). It is also worth pointing out that while there can be a substantial slowdown in convergence, the optimization still reaches desirable models under most cases in our experiments. When staleness is geometrically distributed FIG3 ), we observe similar patterns of convergence slowdown. We are not aware of any prior work reporting slowdown as high as observed here. This finding has important ramifications for distributed ML. Usually, the moderate amount of workload increases due to parallelization errors can be compensated by the additional computation resources and higher system throughput in the distributed execution. However, it may be difficult to justify spending large 3 Non-synchronous execution allows us to use small batch sizes, eschewing the potential generalization problem with large batch SGD BID21 BID35. 4 We present RNN in the Appendix.amount of resources for a distributed implementation if the statistical penalty is too high, which should be avoided (e.g., by staleness minimization system designs or synchronous execution).Model Complexity. FIG1 also reveals that the impact of staleness can depend on ML parameters, such as the depths of the networks. Overall we observe that staleness impacts deeper networks more than shallower ones. This holds true for SGD, Adam, Momentum, RMSProp, Adagrad FIG1, and other optimization schemes, and generalizes to other numbers of workers (see Appendix) 5.This is perhaps not surprising, given the fact that deeper models pose more optimization challenges even under the sequential settings BID10 BID15, though we point out that existing literature does not explicitly consider model complexity as a factor in distributed ML BID31 BID11. Our suggest that the staleness level acceptable in distributed training can depend strongly on the complexity of the model. For sufficiently complex models it may be more advantageous to eliminate staleness altogether and use synchronous training. Algorithms' Sensitivity to Staleness. Staleness has uneven impacts on different SGD variants. Fig. 
2 shows the amount of work (measured in the number of batches) to reach the desired model quality for five SGD variants. Fig. 2 (d)(e)(f) reveals that while staleness generally increases the number of batches needed to reach the target test accuracy, the increase can be higher for certain algorithms, such as Momentum. On the other hand, Adagrad appear to be robust to staleness 6. Our finding is consistent with the fact that, to our knowledge, all existing successful cases applying non-synchronous training to deep neural networks use SGD BID8 BID2. In contrast, works reporting subpar performance from non-synchronous training often use momentum, such as RMSProp with momentum BID1 and momentum BID5. Our suggest that these different outcomes may be partly driven by the choice of optimization algorithms, leading to the seemingly contradictory reports of whether non-synchronous execution is advantageous over synchronous ones. Effects of More Workers. The impact of staleness is amplified by the number of workers. In the case of MF, Fig. 3(b) shows that the convergence slowdown in terms of the number of batches (normalized by the convergence for s = 0) on 8 workers is more than twice of the slowdown on 4 workers. For example, in Fig. 3 (b) the slowdown at s = 15 is ∼3.4, but the slowdown at the same staleness level on 8 workers is ∼8.2. Similar observations can be made for CNNs (Fig. 3). This can be explained by the fact that additional workers amplifies the effect of staleness by generating updates that will be subject to delays, and missing updates from other workers that are subject to delays. Fig. 3 (c)(d) show the convergence curves of LDA with different staleness levels for two settings varying on the number of workers and topics. Unlike the convergence curves for SGD-based algorithms (see Appendix), the convergence curves of Gibbs sampling are highly smooth, even under high staleness and a large number of workers. This can be attributed to the structure of log likelihood objective function BID12. Since in each sampling step we only update the count statistics based on a portion of the corpus, the objective value will generally change smoothly. Staleness levels under a certain threshold (s ≤ 10) lead to convergence, following indistinguishable log likelihood trajectories, regardless of the number of topics (K = 10, 100) or the number of workers (2-16 workers, see Appendix). Also, there is very minimal variance in those trajectories. However, for staleness beyond a certain level (s ≥ 15), Gibbs sampling does not converge to a fixed point. The convergence trajectories are distinct and are sensitive to the number of topics and the number of workers. There appears to be a "phase transition" at a certain staleness level that creates two distinct phases of convergence behaviors 7. We believe this is the first report of a staleness-induced failure case for LDA Gibbs sampling. Fig. 3 (e)(f), VAEs exhibit a much higher sensitivity to staleness compared with DNNs (FIG1). This is the case even considering that VAE with depth 3 has 6 weight layers, which has a comparable number of model parameters and network architecture to DNNs with 6 layers. We hypothesize that this is caused by the additional source of stochasticity from the sampling procedure, in addition to the data sampling process. We now provide theoretical insight into the effect of staleness on the observed convergence slowdown. 
We focus on the challenging asynchronous SGD (Async-SGD) case, which characterizes the neural network models, among others. Consider the following nonconvex optimization problem DISPLAYFORM0 where f i corresponds to the loss on the i-th data sample, and the objective function is assumed to satisfy the following standard conditions:Assumption 1. The objective function F in the problem (P) satisfies:1. Function F is continuously differentiable and bounded below, i.e., inf x∈R d F (x) > −∞; 2. The gradient of F is L-Lipschitz continuous. Notice that we allow F to be nonconvex. We apply the Async-SGD to solve the problem (P). Let ξ(k) be the mini-batch of data indices sampled from {1, . . ., n} uniformly at random by the algorithm at iteration k, and |ξ(k)| is the mini-batch size. Denote mini-batch gradient as ∇f ξ(k) (x k):= i∈ξ(k) ∇f i (x k). Then, the update rule of Async-SGD can be written as DISPLAYFORM1 where η k corresponds to the stepsize, τ k denotes the delayed clock and the maximum staleness is assumed to be bounded by s. This implies that DISPLAYFORM2 The optimization dynamics of Async-SGD is complex due to the nonconvexity and the uncertainty of the delayed updates. Interestingly, we find that the following notion of gradient coherence provides insights toward understanding the convergence property of Async-SGD.Definition 1 (Gradient coherence). The gradient coherence at iteration k is defined as DISPLAYFORM3 Parameter µ k captures the minimum coherence between the current gradient ∇F (x k) and the gradients along the past s iterations 8. Intuitively, if µ k is positive, then the direction of the current gradient is well aligned to those of the past gradients. In this case, the convergence property induced by using delayed stochastic gradients is close to that induced by using synchronous stochastic gradients. Note that Definition 1 only requires the gradients to be positively correlated over a small number of iterations s, which is often very small (e.g. <10 in our experiments). Therefore, Definition 1 is not a global requirement on optimization path. Even though neural network's loss function is non-convex, recent studies showed strong evidences that SGD in practical neural network training encourage positive gradient coherence BID28 BID32. This is consistent with the findings that the loss surface of shallow networks and deep networks with skip connections are dominated by large, flat, nearly convex attractors around the critical points BID28 BID21, implying that the degree of non-convexity is mild around critical points. We show in the sequel that µ k > 0 through most of the optimization path, especially when the staleness is minimized in practice by system optimization FIG3. Our theory can be readily adapted to account for a limited amount of negative µ k (see Appendix), but our primary interest is to provide a quantity that is easy to compute empirically during the course of optimization 9, and informative for the impact of staleness and can potentially be used to control synchronization levels. We now characterize the convergence property of Async-SGD. Theorem 1. Let Assumption 1 hold. Suppose for some µ > 0, the gradient coherence satisfies µ k ≥ µ for all k and the variance of the stochastic gradients is bounded by σ 2 > 0. Choose stepsize DISPLAYFORM4. Then, the iterates generated by the Async-SGD satisfy DISPLAYFORM5 8 Our gradient coherence bears similarity with the sufficient direction assumption in BID19. 
However, sufficient direction is a layer-wise and fixed delay, whereas our staleness is a random variable that is subject to system level factors such as communication bandwidth 9 It can be approximated by storing a pre-selected batch of data on a worker. The worker just needs to compute gradient every T mini-batches to obtain approximate ∇F (x k), ∇F (xt) in Definition 1. TAB1. Shaded region is 1 standard deviation over 3 runs. For computational efficiency, we approximate the full gradient ∇F (x k) by gradients on a fixed set of 1000 training samples D f ixed and use ∇D f ixed F (x k). (c) The number of batches to reach 71% test accuracy on CIFAR10 for ResNet8-32 using 8 workers and SGD under geometric delay distribution (details in Appendix).We refer readers to Appendix for the the proof. Theorem 1 characterizes several theoretical aspects of Async-SGD. First, the choice of the stepsize η k = µ sL √ k is adapted to both the maximum staleness and the gradient coherence. Intuitively, if the system encounters a larger staleness, then a smaller stepsize should be used to compensate the negative effect. On the other hand, the stepsize can be accordingly enlarged if the gradient coherence along the iterates turns out to be high. In this case, the direction of the gradient barely changes along the past several iterations, and a more aggressive stepsize can be adopted. In summary, the choice of stepsize should trade-off between the effects caused by both the staleness and the gradient coherence. depths optimized by SGD using 8 workers. The x-axis m is defined in FIG3 Furthermore, Theorem 1 shows that the minimum gradient norm decays at the rate O(DISPLAYFORM6), implying that the Async-SGD converges to a stationary point provided a positive gradient coherence, which we observe empirically in the sequel. On the other hand, the bound in Eq. captures the trade-off between the maximum staleness s and the gradient coherence µ. Specifically, minimizing the right hand side of Eq. with regard to the maximum staleness s yields the optimal choice s * = σµ log T L(F (x0)−infx F (x)), i.e., a larger staleness is allowed if the gradients remain to be highly coherent along the past iterates. Empirical Observations. Theorem 1 suggests that more coherent gradients along the optimization paths can be advantageous under non-synchronous execution. FIG3 shows the cosine similarity sim(a, b):= a·b a b between gradients along the convergence path for CNNs and DNNs 10. We observe the followings: Cosine similarity improves over the course of convergence FIG3 ). Except the highest staleness during the early phase of convergence, cosine similarity remains positive 11. In practice the staleness experienced during run time can be limited to small staleness, which minimizes the likelihood of negative gradient coherence during the early phase. FIG4 shows that cosine similarity decreases with increasing CNN model complexity. Theorem 1 implies that lower gradient coherence amplifies the effect of staleness s through the factor s µ 2 in Eq.. This is consistent with the convergence difficulty encountered in deeper models FIG1. In this work, we study the convergence behaviors under delayed updates for a wide array of models and algorithms. Our extensive experiments reveal that staleness appears to be a key governing parameter in learning. Overall staleness slows down the convergence, and under high staleness levels the convergence can progress very slowly or fail. 
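In practice, the coherence of Definition 1 can be tracked during training roughly as suggested in footnote 9: keep gradients computed on a fixed probe batch over the last s steps and compare them with the current one. A minimal sketch follows; cosine similarity is used as the proxy reported in the experiments, since the exact normalization in Definition 1 is elided here, and the probe-gradient helper named in the comment is hypothetical.

```python
import numpy as np
from collections import deque

s = 8                        # maximum staleness under consideration
history = deque(maxlen=s)    # probe-batch gradients from the last s monitored steps

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def min_coherence(grad_now):
    """Minimum cosine similarity between the current probe-batch gradient and the
    probe-batch gradients from the last s monitored steps (a proxy for mu_k)."""
    sims = [cosine(grad_now, g_past) for g_past in history]
    history.append(np.asarray(grad_now, dtype=float).copy())
    return min(sims) if sims else None

# During training, every T mini-batches:
#   g_probe = gradient_of_loss_on_fixed_probe_batch(params)   # hypothetical helper
#   mu_hat  = min_coherence(g_probe)
```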
The effects of staleness are highly problem 10 Cosine similarity is closely related to the coherence measure in Definition 1. 11 Low gradient coherence during the early part of optimization is consistent with the common heuristics to use fewer workers at the beginning in asynchronous training. BID31 also requires the number of workers to follow DISPLAYFORM0 where K is the iteration number.dependent, influenced by model complexity, choice of the algorithms, the number of workers, and the model itself, among others. Our empirical findings inspire new analyses of non-convex optimization under asynchrony based on gradient coherence, matching the existing rate of O(1/ √ T).Our findings have clear implications for distributed ML. To achieve actual speed-up in absolute convergence, any distributed ML system needs to overcome the slowdown from staleness, and carefully trade off between system throughput gains and statistical penalties. Many ML methods indeed demonstrate certain robustness against low staleness, which should offer opportunities for system optimization. Our support the broader observation that existing successful nonsynchronous systems generally keep staleness low and use algorithms efficient under staleness. A.1 PROOF OF THEOREM 1Theorem 2. Let Assumption 1 hold. Suppose the gradient coherence µ k is lower bounded by some µ > 0 for all k and the variance of the stochastic gradients is upper bounded by some σ 2 > 0. DISPLAYFORM0. Then, the iterates generated by the Async-SGD satisfy DISPLAYFORM1 Proof. By the L-Lipschitz property of ∇F, we obtain that for all k DISPLAYFORM2 Taking expectation on both sides of the above inequality and note that the variance of the stochastic gradient is bounded by σ 2, we further obtain that DISPLAYFORM3 Telescoping the above inequality over k from 0 to T yields that DISPLAYFORM4 Rearranging the above inequality and note that DISPLAYFORM5 Note that the choice of stepsize guarantees that η k µ − > 0 for all k. Thus, we conclude that DISPLAYFORM6 where the last inequality uses the fact that DISPLAYFORM7 into the above inequality and simplifying, we finally obtain that DISPLAYFORM8 A.2 HANDLING NEGATIVE GRADIENT COHERENCE IN THEOREM 1Our assumption of positive gradient coherence (GC) is motivated by strong empirical evidence that GC is largely positive FIG3 in the main text). Contrary to conventional wisdom, GC generally improves when approaching convergence for both SGD and Adam. Furthermore, in practice, the effective staleness for any given iteration generally concentrates in low staleness for the non-stragglers.When some µ k are negative at some iterations, in eq. 11 in the Appendix we can move the negative terms in k η k µ k to the right hand side and yield a higher upper bound (i.e., slower convergence). This is also consistent with empirical observations that higher staleness lowers GC and slows convergence. A.3 EXPONENTIAL DELAY DISTRIBUTION.We consider delays drawn from geometric distribution (GD), which is the discrete version of exponential distribution. For each iterate we randomly select a worker to be the straggler with large mean delay (p = 0.1), while all other non-straggler workers have small delays. The non-straggler delay is drawn from GD with p chosen to achieve the same mean delay as in the uniform case (after factoring in straggler) in the main text. The delay is drawn per worker for each iteration, and thus a straggler's outgoing updates to all workers suffer the same delay. 
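A minimal sketch of the geometric (discretized exponential) delay model just described; the non-straggler success parameter below is a placeholder, since in the paper it is chosen so that the overall mean delay matches the uniform-delay experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_delays(num_workers, p_straggler=0.1, p_normal=0.7):
    """Per-iteration delays under the geometric delay model described above.
    One randomly chosen straggler draws its delay from Geometric(p_straggler)
    (large mean delay); every other worker uses Geometric(p_normal), where
    p_normal is a placeholder for the value matching the uniform model's mean.
    A straggler's outgoing updates to all receivers share the same delay."""
    straggler = rng.integers(num_workers)
    delays = rng.geometric(p_normal, size=num_workers) - 1     # support {0, 1, 2, ...}
    delays[straggler] = rng.geometric(p_straggler) - 1
    return straggler, delays

print(sample_delays(8))
```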
FIG3 (c) in the main text shows the convergence speed under the corresponding staleness s with the same mean delay (though s is not a parameter in GD). It exhibits trends analogous to FIG1 in the main text: staleness slows convergence substantially and overall impacts deeper networks more. We present additional results for DNNs. Fig. 6 shows the number of batches, normalized by s = 0, to reach convergence using 1 hidden layer and 1 worker under varying staleness levels and batch sizes. (Figure 6 caption: The number of batches to reach 95% test accuracy using 1 hidden layer and 1 worker, respectively normalized by s = 0.) Overall, the effect of batch size is relatively small except in the high staleness regime (s = 32). Fig. 7 shows the number of batches to reach convergence, normalized by the s = 0 case, for 5 variants of SGD using 1 worker. (Figure 7 caption: The number of batches to reach 92% test accuracy using DNNs with varying numbers of hidden layers under 1 worker. We consider several variants of SGD algorithms (a)-(e). Note that with depth 0 the model reduces to MLR, which is convex. The numbers are averaged over 5 randomized runs. We omit the results whenever convergence is not achieved within the experiment horizon (77824 batches), such as SGD with momentum at depth 6 and s = 32.) The results are in line with the analyses in the main text: staleness generally leads to a larger slowdown for deeper networks than shallower ones. SGD and Adagrad are more robust to staleness than Adam, RMSProp, and SGD with momentum. In particular, RMSProp exhibits high variance in batches to convergence (not shown in the normalized plot) and thus does not exhibit a consistent trend. The results are consistent with the observations and analyses in the main text, namely, that having more workers amplifies the effect of staleness. We can also observe that SGD is more robust to staleness than Adam, and shallower networks are less impacted by staleness. In particular, note that staleness sometimes accelerates convergence, such as in FIG6. This is due to the implicit momentum created by staleness. A.5 LDA AND ADDITIONAL RESULTS FOR LDA. In LDA, each token w_ij (the j-th token in the i-th document) is assigned a latent topic z_ij from a total of K topics. We use Gibbs sampling to infer the topic assignments z_ij. The Gibbs sampling step involves three sets of parameters, known as sufficient statistics: FORMULA7 a vector φ_w ∈ R^K, where φ_wk is the number of topic assignments to topic k = 1, ..., K for word (vocabulary) w across all documents; and φ̄ ∈ R^K, where φ̄_k = Σ_{w=1}^{W} φ_wk is the number of tokens in the corpus assigned to topic k. The corpus (w_ij, z_ij) is partitioned to workers, while φ_w and φ̄ are shared model parameters. We measure the model quality using log likelihood. We present additional results for LDA under different numbers of workers and topics in FIG12 and FIG1. These panels extend Fig. 3(c)(d) in the main text. See the main text for the experimental setup and analyses. We show the convergence curves for MF under different numbers of workers and staleness levels in FIG1. It is evident that higher staleness leads to a higher variance in convergence. Furthermore, the number of workers also affects variance, given the same staleness level. For example, MF with 4 workers incurs very low standard deviation up to staleness 20. In contrast, MF with 8 workers already exhibits a large variance at staleness 15. The amplification of staleness with an increasing number of workers is evident here as well. Recurrent Neural Networks (RNNs) are widely used in recent natural language processing tasks.
We consider long short-term memory (LSTM) BID18 applied to the language modeling task, using a subset of the Penn Treebank dataset (PTB) BID34 containing 5855 words. The dataset is pre-processed by standard de-capitalization and tokenization. We evaluate the impact of staleness for LSTMs with 1 to 4 layers, with 256 neurons in each layer. The maximum length for each sentence is 25. Note that the 4-layer LSTM has about 4x more model parameters than the 1-layer LSTM, which is the same ratio as between ResNet32 and ResNet8. We use batch size 32, similar to the other experiments. We consider staleness s = 0, 4, 8, 16 on 8 workers. The model quality is measured in perplexity. FIG1 shows the number of batches needed to reach the desired model quality for RNNs with varying network depths. We again observe that staleness impacts deeper network variants more than shallower counterparts, which is consistent with our observations for CNNs and DNNs.
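For reference, a minimal PyTorch sketch of an LSTM language model in the configuration described above; the vocabulary size (taken from the word count quoted for the PTB subset) and the embedding dimension are assumptions, and perplexity is computed as the exponential of the average cross-entropy.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Roughly the configuration described above: 1-4 LSTM layers with 256 units.
    The vocabulary size and embedding dimension are assumptions."""
    def __init__(self, vocab_size=5855, num_layers=4, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=num_layers, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len) of word ids
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                      # logits over the vocabulary

model = LSTMLanguageModel()
tokens = torch.randint(0, 5855, (32, 25))       # batch size 32, max sentence length 25
logits = model(tokens[:, :-1])                  # predict the next word at each position
loss = nn.functional.cross_entropy(logits.reshape(-1, 5855), tokens[:, 1:].reshape(-1))
print("perplexity:", float(loss.exp()))         # model quality metric used above
```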
Empirical and theoretical study of the effects of staleness in non-synchronous execution on machine learning algorithms.
1,768
scitldr
System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs. It is a key step for model-based control, estimator design, and output prediction. This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full-state is not directly observable. The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters. We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches. We also use SISL to identify the dynamics of an aerobatic helicopter. By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches. The performance of controllers and state-estimators for non-linear systems depends heavily on the quality of the model of system dynamics . System-identification tackles the problem of learning or calibrating dynamics models from data , which is often a timehistory of observations of the system and control inputs. In this work, we address the problem of learning dynamics models of partially observed, high-dimensional non-linear systems. That is, we consider situations in which the system's state cannot be inferred from a single observation, but instead requires inference over a time-series of observations. The problem of identifying systems from partial observations arises in many robotics domains (; ;). Though we often have direct measurements of a robot's pose and velocity, in many cases we cannot directly observe relevant quantities such as the temperature of actuators or the state the environment around the robot. Consider learning a dynamics model for an aerobatic helicopter. attempted to map only the helicopter's pose and velocity to its acceleration and they found their model to be inaccurate when predicting aggressive maneuvers. They posited that the substantial airflow generated by the helicopter affected the dynamics. Since it is often impossible to directly measure the state of the airflow around a vehicle, identification must be performed in a partially observed setting. System-identification is a mature field with a rich history (; 2010). Various techniques can be classified by whether they apply to linear or non-linear systems, with partially or fully observed states. Additionally, techniques are applied in an online or batch-offline setting. This work presents an approach to offline identification of non-linear and partially observed systems. When a system is fully observed, i.e. its full state is observed but corrupted by noise, a set of techniques called equation-error methods are typically employed (Åström &). In such cases, we can consider observations as independent, and minimize the error between the observed statederivatives and those predicted by the model given the control input and observed states. In partially observed settings, merely knowing the current input is insufficient to accurately predict the observation. Several black-box approaches exist to predict observations from time-series of inputs. Autoregressive approaches directly map a time-history of past inputs to observations . 
Recurrent neural networks and subspace-identification methods can also be used to learn blackbox dynamical systems from this data. However, in many cases prior knowledge can be used to specify structured, parameterized models of the system . Such models can be trained with less data and used with a wider array of control and state-estimation techniques than non-linear black-box models (; b; a). Techniques used to identify partially observed structured models are often based on Expectation-Maximization (EM) (; Schön et al., 2011; ;). An alternating procedure is performed in which a smoothing step uses the current system dynamics estimate to infer the distribution over state-trajectories, and is followed by a learning step that uses this distribution to update the system dynamics estimate. In the non-linear or non-Gaussian case, it is typically not possible to analytically characterize the distribution over trajectories, and thus methods based on Sequential Monte-Carlo such as Particle Smoothing (PS) (Schön et al., 2011;), or Extended Kalman Smoothing (EKS) are employed in the E-step. Though considered state-of-theart for this problem, both methods become intractable in high-dimensional state spaces. PS suffers from the curse of dimensionality, requiring an intractably large number of particles if the state space is high-dimensional , and an M-step that can be quadratic in complexity with respect to the number of particles (Schön et al., 2011). EKS-based methods are fast during the E-step, but the M-step requires approximations to integrate out state uncertainty, such as fitting non-linearities with Radial Basis Function approximators, and scales poorly with the dimension of the state-space . In this work, we present a system-identification algorithm that is suited for high-dimensional, nonlinear, and partially observed systems. By assuming that the systems are close to deterministic, as is often the case in robotics, we approximate the distribution over unobserved states using only their maximum-likelihood (ML) point-estimate. Our algorithm, called SISL (System-identification via Iterative Smoothing and Learning) performs the following two steps until convergence: • In the smoothing or E-step, we use non-linear programming to tractably find the ML pointestimate of the unobserved states. • In the learning or M-step, we use the estimate of unobserved states to improve the estimate of system parameters. The idea to use an ML point-estimate in lieu of the distribution over unobserved states in the EM procedure's E-step is not new, and, in general, does not guarantee monotonic convergence to a local optimum . However, such an approximation is equivalent to regular EM if the ML point-estimate is the only instance of unobserved variables with non-negligible probability . We apply this idea to the problem of system-identification for nearly deterministic systems, in which ML point-estimates can serve as surrogates for the true distribution over unobserved state-trajectories. The primary contribution of this work is an algorithm for identifying non-linear, partially observed systems that is able to scale to high-dimensional problems. In Section 2, we specify the assumptions underpinning our algorithm and discuss the computational methodology for using it. In Section 3, we empirically demonstrate that it is able to identify the parameters of a high-dimensional system of coupled Lorenz attractors, a problem that proves intractable for particle-based methods. 
We also demonstrate our algorithm on the problem of identifying the dynamics of an aerobatic helicopter, and compare against various approaches including the state-of-the-art approach . In this work, we assume that we are given a batch of trajectories containing observations y 1:T ∈ R m×T of a dynamical system as it evolves over a time horizon T, possibly forced by some known input sequence u 1:T −1. We assume that this dynamical system has a state x ∈ R n that evolves and generates observations according to the following equations, where w t is referred to as the process noise and v t as the observation noise. w t and v t are both assumed to be additive for notational simplicity, but this is not a required assumption. Without loss of generality, we can drop the dependence on u t, absorbing it into the dependence on t. We further assume that we are provided a class of parameterized models f θ (x, t) and g θ (x, t) for θ ∈ Θ that approximate the dynamical system's evolution and observation processes. The goal of our algorithm is to find the parameters θ that maximize the likelihood of the observations. That is, we seek to find: Assuming, the system is Markovian in x t, we can factorize the distributions as: In order to tractably solve the maximization problem in Equation, particle-based EM techniques typically approximate the integral as an expectation over a particle set (Schön et al., 2011;). However, since particle-based methods can struggle with high-dimensional spaces , this work seeks point estimates for the likelihoodmaximizing state sequence, which can be found using non-linear programming. Instead of solving the full maximum likelihood problem shown in Equation, we solve a surrogate problem: The maximized surrogate objective is equivalent to the original objective if the distribution over x 1:T given y 1:T and θ is well-approximated by its ML point-estimate, i.e. the distribution is a Dirac delta function. We expect this assumption to hold if the trajectories are sufficiently long, the dynamics are close to deterministic, and all relevant dynamic modes are persistently excited. Making these assumptions, we now present an algorithm for solving Equation. By taking the logarithm of the likelihood-objective in Equation, and using the factorization from Equation, we get the SISL objective J(x 1:T, θ) as follows: Jointly maximizing this objective over x 1:T and θ yields θ SISL. Though this objective can be optimized as is by a non-linear optimizer, it is not necessarily efficient to do so since θ and x 1:T are highly coupled, leading to inefficient and potentially unstable updates. For this reason, we take an iterative approach akin to EM, which performs block-coordinate ascent on the J(x 1:T, θ). At iteration k, we first perform smoothing by holding θ constant, and finding the ML point-estimate for x 1:T as follows: Here ρ x scales a soft trust-region regularizer similar to damping terms found in LevenbergMarquardt methods . The smoothing problem can be solved in O(n 2 T) by taking advantage of sparsity in the smoothing objective's Hessian matrix. In the learning step, we hold x 1:T constant and find: Again, ρ θ scales a soft trust-region regularizer, and specifying log p(θ) allows us to regularize θ toward a prior. The above optimization problem can be solved using any non-linear optimizer, such as a first or second-order gradient descent scheme. The SISL algorithm iterates between the smoothing and learning steps until convergence. 
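A toy instance of the smoothing/learning alternation may clarify the structure of SISL. The sketch below uses scalar linear dynamics x_{t+1} = θ x_t + w_t with observations y_t = x_t + v_t so that both steps stay short; it minimizes the negative of the objective J, uses plain gradient descent in place of the trust-region non-linear least-squares solver used in the paper, and omits the ρ_x, ρ_θ regularizers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: scalar dynamics x_{t+1} = theta*x_t + w_t, observation y_t = x_t + v_t.
T, theta_true, sig_w, sig_v = 200, 0.9, 0.05, 0.3
x = np.zeros(T); x[0] = 1.0
for t in range(T - 1):
    x[t + 1] = theta_true * x[t] + sig_w * rng.normal()
y = x + sig_v * rng.normal(size=T)

def neg_J(xs, theta):
    """Negative of the SISL objective J(x_{1:T}, theta) for this Gaussian toy model."""
    dyn = xs[1:] - theta * xs[:-1]
    return np.sum(dyn ** 2) / (2 * sig_w ** 2) + np.sum((y - xs) ** 2) / (2 * sig_v ** 2)

xs, theta = y.copy(), 0.5               # initialize the hidden states from the observations
for k in range(50):
    # smoothing step: improve the ML point-estimate of x_{1:T} with theta held fixed
    for _ in range(200):
        dyn = xs[1:] - theta * xs[:-1]
        grad = -(y - xs) / sig_v ** 2
        grad[1:] += dyn / sig_w ** 2
        grad[:-1] += -theta * dyn / sig_w ** 2
        xs -= 1e-4 * grad
    # learning step: update theta with x_{1:T} held fixed (closed form for this toy model)
    theta = np.sum(xs[1:] * xs[:-1]) / np.sum(xs[:-1] ** 2)

print("estimated theta:", theta, " negative objective:", neg_J(xs, theta))
```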
The objective of our experiments is to demonstrate that the SISL algorithm is capable of identifying high-dimensional non-linear systems in partially observed settings. We do so in simulation by identifying the parameters of a system of partially observed coupled Lorenz attractors, as well as by identifying the dynamics of a real aerobatic helicopter. In the second experiment, we build on previous analysis of the dataset by attempting to characterize the interaction of the helicopter with the fluid around it, without having any direct observation of the fluid state. In this didactic experiment, we will show that: 1. SISL learns unbiased parameter estimates of systems that are close to deterministic, and, 2. SISL scales to high-dimensional problems in which particle-smoothing is intractable. To justify these claims, we use a system that is sufficiently non-linear and partially observable to make particle-based smoothing methods intractable. We choose a system of coupled Lorenz attractors for this purpose, owing to their ability to exhibit chaotic behavior and their use in non-linear atmospheric and fluid flow models (Bergé et al., 1984). Arbitrary increase in state dimensionality can be achieved by coupling multiple individual attractors. The state of a system with K coupled Lorenz attractors is x ∈ R 3K = {. . ., x 1,k, x 2,k, x 3,k, . . .}. The dynamics of the system are as follows: where H is an R 3K×3K matrix. We nominally set the parameters (σ k, ρ k, β k) to the values (10, 28, 8/3), and randomly sample the entries of H from a normal distribution to generate chaotic and coupled behavior between attractors, while avoiding self-coupling. These parameters are estimated during identification. In order to make the system partially observed, the observation y ∈ R (3K−2) is found from x as follows: where C ∈ R (3K−2)×3K is a known matrix with full row-rank, and v is the observation noise sampled from a Gaussian with diagonal covariance σ 2 v I. The entries of C are also randomly sampled from a standard normal distribution. In the following experiments, we simulate the system for T = 128 timesteps at a sample rate of ∆t = 0.04s, and integrate the system using a 4 th -order Runge-Kutta method. Initial conditions for each trajectory are sampled such that x 1,k ∼ N (−6, 2.5 2), x 2,k ∼ N (−6, 2.5 2), x 3,k ∼ N (24, 2.5 2). To test the conditions under which SISL learns unbiased parameter estimates, we simulate a single Lorenz system with H = 0, and known C ∈ R 2×3. We introduce and vary the process noise w ∼ N (0, σ 2 w I), and vary the observation noise coefficient σ v, and then attempt to estimate the parameters (σ, ρ, β). Using initial guesses within 10% of the system's true parameter values, we run SISL on a single sampled trajectory. For each choice of σ w and σ v, we repeat this process for 10 random seeds. Table 1 shows the mean and standard errors of parameter estimates for various σ w and σ v. We highlight in red the mean estimates that are not within two standard errors of their true value. We see that σ and ρ are estimated without bias for all scenarios. However, the estimate of β appears to become biased as the process noise is increased, but not as the observation noise is increased. This supports the assumption that the objective used in SISL is sound when systems evolve close to deterministically, but can be biased if it is not. 
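A minimal sketch of the data-generating process described above (K coupled Lorenz attractors integrated with RK4 and observed through C with additive noise). The exact way the coupling matrix H enters the dynamics is elided in the text, so the additive term H x below, the coupling scale, and the zeroed diagonal (as a stand-in for the stated absence of self-coupling) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 2                                   # number of coupled attractors -> state dim 3K
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
H = 0.05 * rng.normal(size=(3 * K, 3 * K))   # coupling matrix; scale is illustrative
np.fill_diagonal(H, 0.0)                # crude stand-in for "no self-coupling"
C = rng.normal(size=(3 * K - 2, 3 * K)) # known observation matrix (full row rank a.s.)
sigma_v = 0.01

def lorenz_dot(x):
    dx = np.empty_like(x)
    for k in range(K):
        x1, x2, x3 = x[3 * k], x[3 * k + 1], x[3 * k + 2]
        dx[3 * k] = sigma * (x2 - x1)
        dx[3 * k + 1] = x1 * (rho - x3) - x2
        dx[3 * k + 2] = x1 * x2 - beta * x3
    return dx + H @ x                   # assumed form of the linear coupling through H

def rk4_step(x, dt):
    k1 = lorenz_dot(x)
    k2 = lorenz_dot(x + 0.5 * dt * k1)
    k3 = lorenz_dot(x + 0.5 * dt * k2)
    k4 = lorenz_dot(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

T, dt = 128, 0.04
x = np.concatenate([rng.normal([-6.0, -6.0, 24.0], 2.5) for _ in range(K)])
ys = []
for _ in range(T):
    x = rk4_step(x, dt)
    ys.append(C @ x + sigma_v * rng.normal(size=3 * K - 2))
Y = np.array(ys)                        # partial observations fed to the identifier
print(Y.shape)
```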
State-of-the-art methods for parameter identification of partially observed systems rely on particlesmoothing (PS) to estimate a distribution over states x 1:T given a trajectory of observations y 1:T. We will experimentally demonstrate that the performance of PS does not scale to high-dimensional systems. To do so, we compare the number of particles required for PS to reliably characterize the distribution over x 1:T for a system of two and six coupled Lorenz systems. The systems are simulated with observation noise σ v = 0.01 but without process noise. We implement a PS as specified by Schön et al.. Letx t,n, w t,n be the estimated system-state and particle weight corresponding to the nth particle at the tth timestep. In order to test whether PS can reliably characterize the posterior distribution over x 1:T, we measure the weighted root mean square error (RMSE), ξ as follows: If PS is reliable using N particles, we should see that ξ N is tightly distributed across random seeds. These experimentally demonstrate that, for a sufficiently non-linear and partially observable system, the number of particles required to reliably characterize the posterior distribution over hidden states grows intractably with the dimension of the system. Since particle-based EM methods for system-identification are typically super-linear in complexity with respect to the number of particles (Schön et al., 2011), these methods are ill-suited to high-dimensional problems. To demonstrate that SISL is capable of identifying high-dimensional systems, we show that we can estimate the dynamics of an 18 dimensional system of six coupled Lorenz attractors. Moreover, as the number of trajectories provided to SISL increases, it converges to more accurate estimates. To test this claim, we sample 2, 4, and 8 trajectories from a system with parameters θ true, and σ v = 0.01. We randomly initialize each element of the parameters being optimized (θ = [σ 1:K, ρ 1:K, β 1:K, H]) to within 10% of the their value in θ true. We then run SISL on each batch, tracking the error in the estimated dynamics as training proceeds. We measure this error, which we call (θ), as follows: In the learning step, we do not regularize θ to a prior and set ρ θ = 0. In Figure 2, we see the of this experiment for four random seeds for each batch size. We can see that, as the number of trajectories used in training increases, the error in the estimated dynamics tends toward zero. Furthermore, we see that SISL convergences monotonically to a local optimum in all cases. This experiment supports our claim that SISL is able to identify the parameters of a high-dimensional, non-linear, and partially observable system that is intractable for particle-based methods. The experiments conducted thus far have demonstrated that SISL can learn unbiased parameter estimates of nearly-deterministic systems, and can scale to high-dimensional problems for which particle-based methods are intractable. In the next experiment, we use SISL to characterize the effect of unobserved states on the dynamics of an aerobatic helicopter. Characterizing the dynamics of a helicopter undergoing aggressive aerobatic maneuvers is widely considered to be a challenging system-identification problem . The primary challenge is that the forces on the helicopter depend on the induced state of the fluid around it. The state of the fluid cannot be directly observed and its dynamics model is unknown. 
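Returning to the particle-smoothing comparison above, the weighted RMSE ξ_N can be computed as in the following sketch; the exact formula is elided in the text, so this is one plausible reading of it.

```python
import numpy as np

def weighted_particle_rmse(x_true, x_particles, weights):
    """One plausible reading of the weighted RMSE xi_N described above:
    particle-weighted squared error of the smoothed estimates x_hat_{t,n}
    against the simulated ground-truth states, averaged over time.
    x_true:      (T, d)    ground-truth trajectory
    x_particles: (T, N, d) particle estimates
    weights:     (T, N)    normalized particle weights w_{t,n}"""
    sq_err = np.sum((x_particles - x_true[:, None, :]) ** 2, axis=-1)   # (T, N)
    return float(np.sqrt(np.mean(np.sum(weights * sq_err, axis=1))))
```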
Merely knowing the state of the helicopter and the control commands at a given time does not contain enough information to accurately predict the forces that act on it. In order to address this issue, use an approach based on Takens theorem, which suggests that a system's state can be reconstructed with a finite number of lagged-observations of it . Instead of attempting to estimate the unobserved fluid state, they directly learn a mapping from a 0.5 s long history of observed state measurements and control commands to the forces acting on the helicopter. This approach is sensible, and is equivalent to considering the past 0.5 s of observations as the system's state. However, it can require a very large number of lagged observations to represent complex phenomena. Having such a high dimensional state can make the control design and stateestimation more complicated. To avoid large input dimensions, a trade-off between the duration of the history and sample frequency is necessary. This trade-off will either hurt the resolution of low-frequency content or will alias high-frequencies. We attempt to instead explicitly model the unobserved states affecting the system. The objective of this learning problem is to predict y t, the helicopter's acceleration at time t, from an input vector u t containing the current measured state of the helicopter (its velocity and rotation rates) and the control commands. We use data collected by the Stanford Autonomous Helicopter Project . Trajectories are split into 10 s long chunks and then randomly distributed into train, test, and validation sets according to the protocol established by; and summarized in Appendix A.1. The train, test and validation sets respectively contain 466, 100 and 101 trajectories of 500 time-steps each. A simple success metric on a given trajectory is the root mean squared prediction error, RMSE =, where y (measured) t is the measured force from the dataset, y (pred) t is the force predicted by the model, and T is the number of time-steps in each trajectory. We first consider a naive baseline that does not attempt to account for the time-varying nature of the fluid-state. We train a neural-network to map only the current helicopter state and control commands to the accelerations: y t = NN θn (u t), where NN θn is a neural-network with parameters θ n. We refer to this model as the naive model. We also compare to the work of. They predict y t using a time-history u t−H:t of H lagged observations of the helicopter's measured state and control commands. This input is passed through a ReLU-activated neural network with a single hidden-layer combined with what they call a Quadratic Lag Model. As a baseline, we reproduce their performance with a single deep neural network y t = NN θ h (u t−H:t) with parameters θ h. We call this neural network model the H25 model. Both of these models can be trained via stochastic gradient descent to minimize the Mean-Squared-Error (MSE) of their predictions for y. The optimization methodology for these models is described in Appendix A.2. As a third baseline, we compare with subspace-identification methods . We letỹ t = y t − NN θn (u t) be the prediction errors of the trained naive model. We use the MATLAB command n4sid to fit a linear dynamical system of the following form: Here, x ∈ R d is the unobserved state with arbitrary dimension d. The learned parameters are We use a state dimension of 10 and call this model the SID model. 
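For concreteness, the SID baseline can be rolled out as in the sketch below, assuming the standard discrete-time state-space form fitted by n4sid (the equation itself is elided above): x_{t+1} = A x_t + B u_t and ỹ_t = C x_t + D u_t.

```python
import numpy as np

def sid_rollout(A, B, C, D, u_seq, x0=None):
    """Open-loop rollout of the linear latent-state baseline:
        x_{t+1} = A x_t + B u_t,   y_tilde_t = C x_t + D u_t,
    where y_tilde_t are the residual accelerations left over by the naive model.
    In practice A, B, C, D come from MATLAB's n4sid with a state dimension of 10."""
    x = np.zeros(A.shape[0]) if x0 is None else x0
    preds = []
    for u in u_seq:
        preds.append(C @ x + D @ u)
        x = A @ x + B @ u
    return np.array(preds)
```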
The n4sid algorithm scales super-linearly with the amount of data supplied, and thus we train on 10 randomly sampled subsets of 100 trajectories each, and report the distribution in prediction performance. Particle-based EM methods are not presented as baselines because they are intractable on problems with large state-spaces, as shown in Section 3.1.2. Similar to the parameterization used for subpace-identification, we fit the prediction errors of the naive model using the following dynamical system: where NNθ NL is a neural network, and θ NL = [A NL, B NL, C NL, D NL,θ NL] are the learned parameters. While learning, we assume that both process and observation noise are distributed with diagonal Gaussian covariance matrices σ w I and σ v I respectively. The values of σ w and σ v are treated as hyperparmeters of SISL. Here as well, we use a state dimension of 10 and call this model the NL model. The optimization methodology for this model is described in Appendix A.2. It should be noted that the system we learn need not actually correspond to an interpretable model of the fluid-state, but only of time-varying hidden-states that are useful for predicting the accelerations of the helicopter. Expert knowledge of helicopter aerodynamics could be used to further inform a gray-box model trained with SISL. The test RMSE of the naive and H25 models can be evaluated directly on the test trajectories using next-step prediction. However, the SID and NL models require an estimate of the unobserved state before making a prediction. The natural analog of next-step prediction is extended Kalman filtering (EKF), during which states are recursively predicted and corrected given observations. At a given time-step, a prediction ofỹ t is made using the current estimate of x t, and is used in the computation of RMSE. The state-estimate is then corrected with the measuredỹ t. Figure 3a shows the RMSE of the compared models on trajectories in the test-set. We see that the NL model is able to consistently predict the accelerations on the helicopter with better accuracy than any of the other models. The naive model performs on average 2.9 times worse than the H25 model, and its can be found in Appendix A.3. The SID model notably outperforms the state-of-the-art H25 model, suggesting that a large linear dynamical system can be used to approximate a non-linear and partially observable system (Korda & Mezić, 2018). However, introducing non-linearity as in the NL model noticeably improves performance. Figure 3b depicts the errors in prediction over a sample trajectory in the test-set. Here, we also see that the NL model is able to attenuate the time-varying error present in predictions made by the H25, suggesting that it has accurately characterized the dynamics of unobserved, time-varying states. This experiment validates the effectiveness of SISL to identify a non-linear dynamical model of unobserved states that affect the forces acting an aerobatic helicopter. This paper presented an algorithm for system identification of non-linear systems given partial state observations. The algorithm optimizes system parameters given a time history of observations by iteratively finding the most likely state-history, and then using it to optimize the system parameters. The approach is particularly well suited for high-dimensional and nearly deterministic problems. 
In simulated experiments on a partially observed system of coupled Lorenz attractors, we showed that our algorithm can perform identification on a problem that particle-based EM methods are fundamentally ill-suited for. We also validated that our algorithm is an effective replacement for identification methods based on EM if the system is close to deterministic, but can yield biased parameter estimates if it is not. We then used our algorithm to model the time-varying hiddenstates that affect the dynamics of an aerobatic helicopter. Our approach outperforms state-of-the-art methods because it is able to fit large non-linear models to unobserved states. We aim to apply our algorithm to system identification problems in a number of domains. There has recently been interest in characterizing the dynamics of aircraft with high aspect ratios, for which the difficult-to-observe bending modes substantially impact dynamics. Additionally, the inability to measure friction forces in dynamic interactions involving contact typically stands in the way of system identification, and thus requires algorithms that are capable of identification under partial observation. A APPENDIX In this work we use the dataset gathered by and available at http://heli. stanford.edu/. A gas-powered helicopter was flown by a professional pilot to collect a large dataset of 6290s of flight. There are four controls: the longitudinal and lateral cyclic pitch, the tail rotor pitch and the collective pitch. The state is measured thanks to an accelerometer, a gyroscope, a magnetometer and vision cameras. provide the raw data, as well as states estimates in the Earth reference frame obtained with extended Kalman smoothing.'s protocol, we use the fused sensor data and downsample it from 100Hz to 50Hz. From the Earth frame accelerations provided in the dataset, we compute body frame accelerations (minus gyroscopic terms) which are the prediction targets for our training. Using the notations from , we can write the helicopter dynamics in the following form: where s ∈ R 13 is the helicopter state consisting of its position r, quaternion-attitude q, linear velocity v, angular velocity ω, and δ ∈ R 4 to be the control command. C 12 is the rotation-matrix from the body to Earth reference frame, and f v and f ω are the linear and angular accelerations caused by aerodynamic forces, and are what we aim to predict. This notation connects with the one used in our paper in the following way: • We define u as the concatenation of all inputs to the model, including the relevant state variables v and ω and control commands δ. • We define y as the output predicted, which would correspond to a concatenation of f v and f ω. • We define x as the vector of unobserved flow states to be estimated and is not present in their model. Neural-networks in the naive and H25 models have eight hidden layers of size 32 each, and tanh non-linearities. We optimize these models using an Adam optimizer with a harmonic learning rate decay, and mini-batch size of 512. The neural network in the NL model has two hidden layers of size 32 each, and tanh non-linearity. We train the NL model with SISL, using ρ x = ρ θ = 0.5, σ w = σ v = 1.0, and use an Adam optimizer to optimize Equation in the learning step. The learning rate for dynamics parameters in θ NL is 5.0 × 10 −4 and observation parameters in θ NL is 1.0 × 10 −3. 
For its relative robustness, we optimize Equation using a non-linear least squares optimizer with a Trust-Region Reflective algorithm in the smoothing step. This step can be solved very efficiently by providing the solver with the block-diagonal sparsity pattern of the Jacobian matrix. To evaluate the test metric, running an EKF is required. The output of an EKF depends on several user-provided parameters: x_0, the value of the initial state; Σ_0, the covariance of the error on the initial state; Q, the covariance of the process noise; and R, the covariance of the observation noise. In this work, we assume that Q, R, and Σ_0 are all set to the identity matrix, and x_0 is assumed to be 0 in all dimensions. A well-tuned EKF with an inaccurate initial state value converges to accurate estimates after only a few time steps of transient behavior. Since the H25 model needs 25 past inputs to make its first prediction, we drop the first 25 predictions from the EKF when computing the RMSE, thereby omitting some of the transient regime.
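A minimal sketch of this EKF evaluation loop with the stated defaults (Q = R = Σ_0 = I, x_0 = 0); the dynamics and observation functions and their Jacobians are passed in as handles, and collapsing all output dimensions into a single RMSE is a simplification of the metric used above.

```python
import numpy as np

def ekf_rmse(f, h, F_jac, H_jac, u_seq, y_seq, d):
    """EKF evaluation loop: predict y_t from the current state estimate (this
    prediction enters the RMSE), correct the state with the measured y_t, then
    propagate to the next step.  Q, R, Sigma_0 are identity and x_0 = 0."""
    x, P = np.zeros(d), np.eye(d)
    Q, R = np.eye(d), np.eye(y_seq.shape[1])
    errs = []
    for u, y in zip(u_seq, y_seq):
        y_pred = h(x, u)                        # prediction used for the test metric
        errs.append(np.sum((y - y_pred) ** 2))
        H = H_jac(x, u)                         # measurement update (correction)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (y - y_pred)
        P = (np.eye(d) - K @ H) @ P
        F = F_jac(x, u)                         # time update (predict the next state)
        x = f(x, u)
        P = F @ P @ F.T + Q
    return float(np.sqrt(np.mean(errs)))
```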
This work presents a scalable algorithm for non-linear offline system identification from partial observations.
1,769
scitldr
Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD , have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally. One of the key factors behind the success of modern machine learning models is the availability of large amounts of training data (; ;). However, the state-of-the-art deep learning models deployed in industry typically rely on datasets too large to fit the memory of a single computer, and hence the training data is typically split and stored across a number of compute nodes capable of working in parallel. Training such models then amounts to solving optimization problems of the form where f m: R d → R represents the non-convex loss of a deep learning model parameterized by x ∈ R d associated with data stored on node m. Arguably, stochastic gradient descent (SGD) (; ;) in of its many variants (; ; ; ;) is the most popular algorithm for solving. In its basic implementation, all workers m ∈ {1, 2, . . ., M} in parallel compute a random approximation g m (x k) of ∇f m (x k), known as the stochastic gradient. These approximations are then sent to a master node which performs the aggregation The aggregated vector is subsequently broadcast back to the nodes, each of which performs an update of the form x k+1 = x k − γ kĝ (x k), thus updating their local copies of the parameters of the model. Typically, communication of the local gradient estimatorsĝ m (x k) to the master forms the bottleneck of such a system (; ;). In an attempt to alleviate this communication bottleneck, a number of compression schemes for gradient updates have been proposed and analyzed;; signSGD, Theorem 1 Step size Weak noise assumptions? ρi > ). A compression scheme is a (possibly randomized) mapping Q: R d → R d, applied by the nodes toĝ m (x k) (and possibly also by the master to aggregated update in situations when broadcasting is expensive as well) in order to reduce the number of bits of the communicated message. Sign-based compression. Although most of the existing theory is limited to unbiased compression schemes, i.e., on operators Q satisfying EQ(x) = x, biased schemes such as those based on communicating signs of the update entries only often perform much better (; ; ; ; ; ; ;). The simplest among these sign-based methods is signSGD (see also Algorithm 1; Option 1), whose update direction is assembled from the component-wise signs of the stochastic gradient. Adaptive methods. While ADAM is one of the most popular adaptive optimization methods used in deep learning , there are issues with its convergence and generalization properties. It was noted in that the behaviour of ADAM is similar to a momentum version of signSGD. Connection between sign-based and adaptive methods has long history, originating at least in Rprop and RMSprop . 
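For reference, the basic signSGD iteration communicates only coordinate-wise signs; a minimal sketch follows. The distributed variant shown aggregates worker signs by a majority vote at the master, which is one common formulation and is an assumption here rather than a transcription of Algorithm 1.

```python
import numpy as np

def signsgd_step(x, stoch_grad, lr):
    """One iteration of signSGD: x_{k+1} = x_k - lr * sign(g_hat(x_k)).
    Only the signs of g_hat need to be communicated (1 bit per coordinate)."""
    return x - lr * np.sign(stoch_grad(x))

def distributed_signsgd_step(x, stoch_grads, lr):
    """Assumed distributed variant: each of the M workers sends sign(g_hat_m(x));
    the master aggregates them by majority vote and broadcasts the result,
    so compression is 1 bit per coordinate in both directions."""
    worker_signs = np.stack([np.sign(g(x)) for g in stoch_grads])   # shape (M, d)
    return x - lr * np.sign(worker_signs.sum(axis=0))               # majority vote
```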
Therefore, investigating the behavior of signSGD can improve our understanding on the convergence of adaptive methods such as ADAM. We now summarize the main contributions of this work. Our key are summarized in Table 1. 1 In fact, bounded variance assumption, being weaker than bounded second moment assumption, is stronger (or, to be strict, more curtain) than SPB assumption in the sense of differential entropy, but not in the direct sense. The entropy of probability distribution under the bounded variance assumption is bounded, while under the SPB assumption it could be arbitrarily large. This observation is followed by the fact that for continuous random variables, the Gaussian distribution has the maximum differential entropy for a given variance (see https://en.wikipedia.org/wiki/Differential_entropy). • 2 methods for 1-node setup. In the M = 1 case, we study two general classes of sign based methods for minimizing a smooth non-convex function f. The first method has the standard form while the second has a new form not considered in the literature before: • Key novelty. The key novelty of our methods is in a substantial relaxation of the requirements that need to be imposed on the gradient estimatorĝ(x k) of the true gradient ∇f (x k). In sharp contrast with existing approaches, we allowĝ(x k) to be biased. Remarkably, we only need one additional and rather weak assumption onĝ(x k) for the methods to provably converge: we require the signs of the entries ofĝ(x k) to be equal to the signs of the entries of ∇f (x k) with a probability strictly larger than 1 /2 (see Section 2; Assumption 1). We show through a counterexample (see Section 2.2) that this assumption is necessary. • Geometry. As a byproduct of our analysis, we uncover a mixed l 1 -l 2 geometry of sign descent methods (see Section 3). • Convergence theory. We perform a complexity analysis of methods and (see Section 4.1; Theorem 1). While our complexity bounds have the same O(1 / √ K) dependence on the number of iterations, they have a better dependence on the smoothness parameters associated with f. Theorem 1 is the first on signSGD for non-convex functions which does not rely on mini-batching, and which allows for step sizes independent of the total number of iterations K. Finally, Theorem 1 in can be recovered from our general Theorem 1. Our bounds are cast in terms of a novel norm-like function, which we call the ρ-norm, which is a weighted l 1 norm with positive variable weights. • Distributed setup. We extend our to the distributed setting with arbitrary M (Section 4.2), where we also consider sign-based compression of the aggregated gradients. In this section we describe our key (and weak) assumption on the gradient estimatorĝ(x) of the true gradient ∇f (x), and give an example which shows that without this assumption, method can fail. Assumption 1 (SPB: Success Probability Bounds). For any x ∈ R d, we have access to an independent (and not necessarily unbiased) estimatorĝ(x) of the true gradient g(x):= ∇f (x) that satisfies for all x ∈ R d and all i ∈ {1, 2, . . ., d}. We will refer to the probabilities ρ i as success probabilities. As we will see, they play a central role in the convergence of sign based methods. We stress that Assumption 1 is the only assumption on gradient noise in this paper. Moreover, we argue that it is reasonable to require from the sign of stochastic gradient to show true gradient direction more likely than the opposite one. 
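The success probabilities ρ_i of Assumption 1 can be estimated empirically for any gradient oracle. The sketch below (an illustration we add here, with a toy quadratic objective and additive Gaussian noise as assumptions) draws repeated stochastic gradients at a fixed point and records how often each coordinate's sign agrees with the sign of the true gradient; SPB simply requires these frequencies to stay strictly above 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
x = rng.normal(size=d)
true_grad = x.copy()                       # gradient of f(x) = 0.5 * ||x||^2

def stochastic_grad(x, noise):
    # unbiased estimator with additive Gaussian (unimodal, symmetric) noise
    return x + noise * rng.normal(size=x.shape)

def estimate_success_probabilities(x, trials=20000, noise=1.0):
    agree = np.zeros(d)
    for _ in range(trials):
        agree += (np.sign(stochastic_grad(x, noise)) == np.sign(true_grad))
    return agree / trials

for noise in [0.1, 1.0, 10.0]:
    rho = estimate_success_probabilities(x, noise=noise)
    print(f"noise={noise:5.1f}  rho_i =", np.round(rho, 3))
# Small noise pushes every rho_i toward 1, heavy noise pushes them toward 1/2,
# the two extreme regimes of the assumption.
```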
Extreme cases of this assumption are the absence of gradient noise, in which case ρ i = 1, and an overly noisy stochastic gradient, in which case ρ i ≈ 1 2. Remark 1. Assumption 1 can be relaxed by replacing bounds with However, if Prob(signĝ i (x) = 0) = 0 (e.g. in the case ofĝ i (x) has continuous distributions), then these two bounds are identical. Under review as a conference paper at ICLR 2020 Extension to stochastic sign oracle. Notice that we do not requireĝ to be unbiased. Moreover, we do not assume uniform boundedness of the variance, or of the second moment. This observation allows to extend existing theory to more general sign-based methods with a stochastic sign oracle. By a stochastic sign oracle we mean an oracle that takes x k ∈ R d as an input, and outputs a random vectorŝ k ∈ R d with entries in ±1. However, for the sake of simplicity, in the rest of the paper we will work with the signSGD formulation, i.e., we letŝ k = signĝ(x k). Here we analyze a counterexample to signSGD discussed in. Consider the following least-squares problem with unique minimizer x * =:, where ε ∈ and stochastic gradientĝ(x) = ∇ a i, x 2 = 2 a i, x a i with probabilities 1/2 for i = 1, 2. Let us take any point from the line l = {(z 1, z 2): z 1 + z 2 = 2} as initial point x 0 for the algorithm and notice that signĝ(x) = ±(1, −1) for any x ∈ l. Therefore, signSGD with any step-size sequence remains stuck along the line l, whereas the problem has a unique minimizer at the origin. We now investigate the cause of the divergence. In this counterexample, Assumption 1 is violated. Indeed, note that for i = 1, 2. By S:= {x ∈ R 2 : a 1, x · a 2, x > 0} = ∅ denote the open cone of points having either an acute or an obtuse angle with both a i' s. Then for any x ∈ S, the sign of the stochastic gradient is ±(1, −1) with probabilities 1 /2. Hence for any x ∈ S, we have low success probabilities: So, in this case we have an entire conic region with low success probabilities, which clearly violates. Furthermore, if we take a point from the complement open coneS c, then the sign of stochastic gradient equals to the sign of gradient, which is perpendicular to the axis of S (thus in the next step of the iteration we get closer to S). For example, if a 1, x < 0 and a 2, x > 0, then signĝ(x) = (1, −1) with probability 1, in which case x − γ signĝ(x) gets closer to low success probability region S. In summary, in this counterexample there is a conic region where the sign of the stochastic gradient is useless (or behaves adversarially), and for any point outside that region, moving direction (which is the opposite of the sign of gradient) leads toward that conic region. To justify our SPB assumption, we show that it holds under general assumptions on gradient noise. Lemma 1 (see B.1). Assume that for any point x ∈ R d, we have access to an independent and unbiased estimatorĝ(x) of the true gradient g(x). Assume further that each coordinateĝ i has a unimodal and symmetric distribution with variance σ Next, we remove the distribution condition and add a strong growth condition together with fixed mini-batch size. Lemma 2 (see B.2). Assume that for any point x ∈ R d, we have access to an independent, unbiased estimatorĝ(x) of the true gradient g(x), with coordinate-wise bounded variances σ for some constant c. Then, choosing a mini-batch size τ > 2c, we get Finally, we give an adaptive condition on mini-batch size for the SPB assumption to hold. Lemma 3 (see B.3). 
Assume that for any point x ∈ R d we have access to an independent and unbiased estimatorĝ(x) of the true gradient g(x). Let σ 2 i = σ 2 i (x) be the variance and ν Under review as a conference paper at ICLR 2020 In this section we introduce the concept of a norm-like function, which call ρ-norm, induced from success probabilities. Used to measure gradients in our convergence rates, ρ-norm is a technical tool enabling the analysis. be the collection of probability functions from the SPB assumption. We define the ρ-norm of gradient Note that ρ-norm is not a norm as it may not satisfy the triangle inequality. However, under SPB assumption, ρ-norm is positive definite as it is a weighted l 1 norm with positive (and variable) weights, and g ρ = 0 if and only if g = 0. Under the assumptions of Lemma 2, ρ-norm can be lower bounded by a weighted l 1 norm with positive constant weights 1 − 2c Under the assumptions of Lemma 1, ρ-norm can be lower bounded by a mixture of the l 1 and squared l 2 norms: Note that l 1,2 -norm is again not a norm. However, it is positive definite, continuous and order preserving, i.e., for any g k, g,g ∈ R d we have: i) g l 1,2 ≥ 0 and g l 1,2 = 0 if and only if g = 0;. From these three properties it follows that g k l 1,2 → 0 implies g k → 0. These properties are important as we will measure convergence rate in terms of the l 1,2 norm in the case of unimodal and symmetric noise assumption. To understand the nature of the l 1,2 norm, consider the following two cases when σ i (x) ≤ c|g i (x)| +c for some constants c,c ≥ 0. If the iterations are in ε-neighbourhood of a minimizer x * with respect to the l ∞ norm (i.e., max 1≤i≤d |g i | ≤ ε), then the l 1,2 norm is equivalent to scaled l 2 norm squared: On the other hand, if iterations are away from a minimizer (i.e., min 1≤i≤d |g i | ≥ L), then the l 1,2 -norm is equivalent to scaled l 1 norm: g 1. These equivalences are visible in Figure 1, where we plot the level sets of g → g l 1,2 at various distances from the origin. Similar mixed norm observation was also noted in. Now we turn to our theoretical of sign based methods. First we give our general convergence under the SPB assumption. Afterwards, we present convergence in the distributed setting under the unimodal and symmetric noise assumptions. Throughout the paper we assume that f: and is L-smooth with some non-negative constants That is, we assume that We allow f to be nonconvex. We now state our convergence for Algorithm 1 under the general SPB assumption. Under review as a conference paper at ICLR 2020 Theorem 1 (Non-convex convergence of signSGD, see B.4). Under the SPB assumption, signSGD (Algorithm 1 with Option 1) with step sizes γ k = γ 0 / √ k + 1 converges as follows If γ k ≡ γ > 0, we get 1 /K convergence to a neighbourhood of the solution: We now comment on the above : • Generalization. Theorem 1 is the first general on signSGD for non-convex functions without mini-batching, and with step sizes independent of the total number of iterations K. Known convergence (; on signSGD use mini-batches and/or step sizes dependent on K. Moreover, they also use unbiasedness and unimodal symmetric noise assumptions, which are stronger assumptions than our SPB assumption (see Lemma 1). Finally, Theorem 1 in can be recovered from Theorem 1 (see Section D for the details). • Convergence rate. Rates and can be arbitrarily slow, depending on the probabilities ρ i. This is to be expected. 
At one extreme, if the gradient noise was completely random, i.e., if ρ i ≡ 1/2, then the ρ-norm would become identical zero for any gradient vector and rates would be trivial inequalities, leading to divergence as in the counterexample. At other extreme, if there was no gradient noise, i.e., if ρ i ≡ 1, then the ρ-norm would be just the l 1 norm and from we get the rateÕ(1/ √ K) with respect to the l 1 norm. However, if we know that ρ i > 1/2, then we can ensure that the method will eventually converge. • Geometry. The presence of the ρ-norm in these rates suggests that there is no particular geometry (e.g., l 1 or l 2) associated with signSGD. Instead, the geometry is induced from the success probabilities. For example, in the case of unbiased and unimodal symmetric noise, the geometry is described by the mixture norm l 1,2. • Practicality. The rate (as well as) supports the common learning schedule practice of using a constant step size for a period of time, and then halving the step-size and continuing this process. For a reader interested in comparing Theorem 1 with a standard for SGD, we state the standard in the Section C. We now state a general convergence rate for Algorithm 1 with Option 2. Theorem 2 (see B.5). Under the SPB assumption, Algorithm 1 (Option 2) with step sizes γ k = γ 0 / √ k + 1 converges as follows: In the case of constant step size γ k = γ > 0, the same rate as is achieved. Comparing Theorem 2 with Theorem 1, notice that a small modification in Algorithm 1 can remove the log-dependent factor from; we then bound the average of past gradient norms instead of the minimum. On the other hand, in a big data regime, function evaluations in Algorithm 1 (Option 2, line 4) are infeasible. Clearly, Option 2 is useful only when one can afford function evaluations and has rough estimates about the gradients (i.e., signs of stochastic gradients). This option should be considered within the framework of derivative-free optimization. In this part we present the convergence of distributed signSGD (Algorithm 2) with majority vote introduced in. Majority vote is considered within a parameter server framework, where for each coordinate parameter server receives one sign from each node and sends Under review as a conference paper at ICLR 2020 back the sign sent by the majority of nodes. Known convergence (; use O(K) mini-batch size as well as O(1/K) constant step size. In the sequel we remove this limitations extending Theorem 1 to distributed training. In distributed setting the number of nodes M get involved in geometry introducing new ρ M -norm, which is defined by the regularized incomplete beta function I (see B.6). Now we can state the convergence rate of distributed signSGD with majority vote. Theorem 3 (Non-convex convergence of distributed signSGD, see B.6). Under SPB assumption, distributed signSGD (Algorithm 2) with step sizes γ k = γ 0 / √ k + 1 converges as follows For constant step sizes γ k ≡ γ > 0, we have convergence up to a level proportional to step size γ: Variance Reduction. Using Hoeffding's inequality, we show that 1, where ρ(x) = min 1≤i≤d ρ i (x) > 1 /2. Hence, in some sense, we have exponential variance reduction in terms of number of nodes (see B.7). Number of nodes. Notice that theoretically there is no difference between 2l−1 and 2l nodes, and this in not a limitation of the analysis. 
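Both the majority-vote aggregation rule and the claim about odd versus even numbers of nodes can be checked directly. The snippet below (our own check under a simple model in which, for a fixed coordinate, each node's sign-vote is correct independently with probability ρ) computes the expected aggregated sign at the master exactly for M = 2l − 1 and M = 2l workers.

```python
from math import comb

def expected_aggregated_sign(M, rho):
    """E[sign(sum of M independent +/-1 votes)], each vote correct (+1) with prob. rho."""
    total = 0.0
    for correct in range(M + 1):
        p = comb(M, correct) * rho**correct * (1 - rho)**(M - correct)
        s = correct - (M - correct)                     # vote sum at the master
        total += p * (1 if s > 0 else -1 if s < 0 else 0)
    return total

rho = 0.7
for l in range(1, 5):
    odd = expected_aggregated_sign(2 * l - 1, rho)
    even = expected_aggregated_sign(2 * l, rho)
    print(f"l={l}:  M={2*l-1} -> {odd:.6f}   M={2*l} -> {even:.6f}")
# The expected master sign is identical for 2l-1 and 2l nodes, and it grows toward 1
# as l increases, reflecting the variance reduction in the number of nodes.
```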
Indeed, as it is shown in the proof, expected sign vector at the master with M = 2l − 1 nodes is the same as with M = 2l nodes: E sign(ĝ is the sum of stochastic sign vectors aggregated from nodes. The intuition behind this phenomenon is that majority vote with even number of nodes, e.g. M = 2l, fails to provide any sign Under review as a conference paper at ICLR 2020 with little probability (it is the probability of half nodes voting for +1, and half nodes voting for −1). However, if we remove one node, e.g. M = 2l − 1, then master receives one sign-vote less but gets rid of that little probability of failing the vote (sum of odd number of ±1 cannot vanish). So, somehow this two things cancel each other and we gain no improvement in expectation adding one more node to parameter server framework with odd number of nodes. We verify our theoretical experimentally using the MNIST dataset with feed-forward neural network (FNN) and the well known Rosenbrock (non-convex) function with d = 10 variables: Stochastic formulation of minimization problem for Rosenbrock function is as follows: at any point x ∈ R d we have access to biased stochastic gradientĝ(x) = ∇f i (x) + ξ, where index i is chosen uniformly at random from {1, 2, . . ., d − 1} and ξ ∼ N (0, ν 2 I) with ν > 0. Figure 2 illustrates the effect of multiple nodes in distributed training with majority vote. As we see increasing the number of nodes improves the convergence rate. It also supports the claim that in expectation there is no improvement from 2l − 1 nodes to 2l nodes. Figure 4 shows the robustness of SPB assumption in the convergence rate with constant step size. We exploited four levels of noise in each column to demonstrate the correlation between success probabilities and convergence rate. In the first experiment (first column) SPB assumption is violated strongly and the corresponding rate shows divergence. In the second column, probabilities still violating SPB assumption are close to the threshold and the rate shows oscillations. Next columns show the improvement in rates when success probabilities are pushed to be close to 1. Under review as a conference paper at ICLR 2020 Figure 5: Performance of signSGD with variable step size (γ 0 = 0.25) under four different noise levels (mini-batch size 1, 2, 5, 7) using Rosenbrock function. As in the experiments of Figure 4 with constant step size, these plots show the relationship between success probabilities and the convergence rate. In low success probability regime (first and second columns) we observe oscillations, while in high success probability regime (third and forth columns) oscillations are mitigated substantially. Under review as a conference paper at ICLR 2020 to a neighborhood of the solution. We fixed gradient noise level by setting mini-batch size 2 and altered the constant step size. For the first column we set bigger step size γ = 0.25 to detect the divergence (as we slightly violated SPB assumption). Then for the second and third columns we set γ = 0.1 and γ = 0.05 to expose the convergence to a neighborhood of the minimizer. For the forth column we set even smaller step size γ = 0.01 to observe a slower convergence. Here we state the well-known Gauss's inequality on unimodal distributions 3. Theorem 4 (Gauss's inequality). Let X be a unimodal random variable with mode m, and let σ 2 m be the expected value of (X − m) 2. 
Then for any positive value of r, Applying this inequality on unimodal and symmetric distributions, direct algebraic manipulations give the following bound: where m = µ and σ 2 m = σ 2 are the mean and variance of unimodal, symmetric random variable X, and r ≥ 0. Now, using the assumption that eachĝ i (x) has unimodal and symmetric distribution, we apply this bound for and get a bound for success probabilities Improvment on Lemma 1 and l 1,2 norm: The bound after Gauss inequality can be improved including a second order term Hence, continuing the proof of Lemma 1, we get and we could have defined l 1,2 -norm in a bit more complicated form as Under review as a conference paper at ICLR 2020 B.2 SUFFICIENT CONDITIONS FOR SPB: PROOF OF LEMMA 2 Letĝ (τ) be the gradient estimator with mini-batch size τ. It is known that the variance forĝ (τ) is dropped by at least a factor of τ, i.e. Hence, estimating the failure probabilities of signĝ (τ) when g i = 0, we have which imples We will split the derivation into three lemmas providing some intuition on the way. The first two lemmas establish success probability bounds in terms of mini-batch size. Essentially, we present two methods: one works well in the case of small randomness, while the other one in the case of non-small randomness. In the third lemma, we combine those two bounds to get the condition on mini-batch size ensuring SPB assumption. Lemma 4. Let X 1, X 2,..., X τ be i.i.d. random variables with non-zero mean µ:= EX 1 = 0, finite variance σ 2:= E|X 1 − µ| 2 < ∞. Then for any mini-batch size τ ≥ 1 Proof. Without loss of generality, we assume µ > 0. Then, after some adjustments, the proof follows from the Chebyshev's inequality: where in the last step we used independence of random variables X 1, X 2,..., X τ. Obviously, bound is not optimal for big variance as it becomes a trivial inequality. In the case of non-small randomness a better bound is achievable additionally assuming the finitness of 3th central moment. Lemma 5. Let X 1, X 2,..., X τ be i.i.d. random variables with non-zero mean µ:= EX 1 = 0, positive variance σ 2:= E|X 1 − µ| 2 > 0 and finite 3th central moment ν 3:= E|X 1 − µ| 3 < ∞. Then for any mini-batch size τ ≥ 1 where error function erf is defined as Proof. Again, without loss of generality, we may assume that µ > 0. Informally, the proof goes as follows. As we have an average of i.i.d. random variables, we approximate it (in the sense of distribution) by normal distribution using the Central Limit Theorem (CLT). Then we compute success probabilities for normal distribution with the error function erf. Finally, we take into account the approximation error in CLT, from which the third term with negative sign appears. More formally, we apply Berry-Esseen inequality 4 on the rate of approximation in CLT : where N ∼ N has the standard normal distribution. Setting t = −µ √ τ /σ, we get It remains to compute the second probability using the cumulative distribution function of normal distribuition and express it in terms of the error function: Clearly, bound is better than when randomness is high. On the other hand, bound is not optimal for small randomness (σ ≈ 0). Indeed, one can show that in a small randomness regime, while both variance σ 2 and third moment ν 3 are small, the ration ν/σ might blow up to infinity producing trivial inequality. For instance, taking X i ∼ Bernoulli(p) and letting p → 1 gives ν/σ = O (1 − p) − 1 /6. 
This behaviour stems from the fact that we are using CLT: less randomness implies slower rate of approximation in CLT. As a of these two bounds on success probabilities, we conclude a condition on mini-batch size for the SPB assumption to hold. Under review as a conference paper at ICLR 2020 Lemma 6. Let X 1, X 2,..., X τ be i.i.d. random variables with non-zero mean µ = 0 and finite variance σ 2 < ∞. Then where ν 3 is (possibly infinite) 3th central moment. Proof. First, if σ = 0 then the lemma holds trivially. If ν = ∞, then it follows immediately from Lemma 4. Assume both σ and ν are positive and finite. In case of τ > 2σ 2 /µ 2 we apply Lemma 4 again. Consider the case τ ≤ 2σ 2 /µ 2, which implies we get which together with gives Hence, SPB assumption holds if It remains to show that erf Lemma follows from Lemma applying it to i.i.d. dataĝ First, from L-smoothness assumption we have where g k = g(x k),ĝ k =ĝ(x k),ĝ k,i is the i-th component ofĝ k andL is the average value of L i' s. Taking conditional expectation given current iteration x k gives Under review as a conference paper at ICLR 2020 Using the definition of success probabilities ρ i we get Plugging this into and taking full expectation, we get Therefore Now, in case of decreasing step sizes where we have used the following standard inequalities In the case of constant step size γ k = γ Under review as a conference paper at ICLR 2020 B.5 CONVERGENCE ANALYSIS: PROOF OF THEOREM 2 Clearly, the iterations {x k} k≥0 of Algorithm 1 (Option 2) do not increase the function value in any iteration, i.e. E[f (x k+1)|x k ] ≤ f (x k). Continuing the proof of Theorem 1 from, we get where we have used the following inequality The proof for constant step size is the same as in Theorem 1. The proof of Theorem 3 goes with the same steps as in Theorem 1, except the derivation- is replaced by where we have used the following lemma. Lemma 7. Assume that for some point x ∈ R d and some coordinate i ∈ {1, 2, . . ., d}, master node receives M independent stochastic signs signĝ m i (x), m = 1,..., M of true gradient g i (x) = 0. Let g (M) (x) be the sum of stochastic signs aggregated from nodes: where l = [(M +1) /2] and ρ i > 1 /2 is the success probablity for coordinate i.
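Returning from the proofs to the experimental setup described above, here is a minimal sketch of the Rosenbrock experiment. We assume the standard form f(x) = Σ_{i=1}^{d−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²] and take f_i to be the i-th summand; this is consistent with, though not spelled out verbatim in, the text, and the step-size and noise values below are placeholders rather than the settings used in the reported figures.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10

def grad_f_i(x, i):
    # gradient of the i-th Rosenbrock summand 100*(x[i+1]-x[i]^2)^2 + (1-x[i])^2
    g = np.zeros_like(x)
    g[i] = -400.0 * x[i] * (x[i + 1] - x[i] ** 2) - 2.0 * (1.0 - x[i])
    g[i + 1] = 200.0 * (x[i + 1] - x[i] ** 2)
    return g

def stochastic_grad(x, nu):
    i = rng.integers(0, d - 1)          # index drawn uniformly (0-based here)
    return grad_f_i(x, i) + nu * rng.normal(size=d)   # biased estimator, as in the text

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

x = np.zeros(d)
gamma0, nu = 0.01, 0.1
for k in range(20000):
    x -= gamma0 / np.sqrt(k + 1) * np.sign(stochastic_grad(x, nu))   # signSGD step

print("final f(x) =", rosenbrock(x))    # f = 0 at the minimizer (1, ..., 1)
```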
General analysis of sign-based methods (e.g. signSGD) for non-convex optimization, built on intuitive bounds on success probabilities.
Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenge of off-policy learning is to derive counterfactual estimators that also has low variance and thus low generalization error. In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work. With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and is also consistent with our theoretical . Off-policy learning refers to evaluating and improving a deterministic policy using historic data collected from a stationary policy, which is important because in real-world scenarios on-policy evaluation is oftentimes expensive and has adverse impacts. For instance, evaluating a new treatment option, a clinical policy, by administering it to patients requires rigorous human clinical trials, in which patients are exposed to risks of serious side effects. As another example, an online advertising A/B testing can incur high cost for advertisers and bring them few gains. Therefore, we need to utilize historic data to perform off-policy evaluation and learning that can enable safe exploration of the hypothesis space of policies before deploying them. There has been extensive studies on off-policy learning in the context of reinforcement learning and contextual bandits, including various methods such as Q learning BID33 ), doubly robust estimator BID8 ), self-normalized (Swaminathan & Joachims (2015b) ), etc. A recently emerging direction of off-policy learning involves the use of logged interaction data with bandit feedback. However, in this setting, we can only observe limited feedback, often in the form of a scalar reward or loss, for every action; a larger amount of information about other possibilities is never revealed, such as what reward we could have obtained had we taken another action, the best action we should have take, and the relationship between the change in policy and the change in reward. For example, after an item is suggested to a user by an online recommendation system, although we can observe the user's subsequent interactions with this particular item, we cannot anticipate the user's reaction to other items that could have been the better options. Using historic data to perform off-policy learning in bandit feedback case faces a common challenge in counterfactual inference: How do we handle the distribution mismatch between the logging policy and a new policy and the induced generalization error? To answer this question, BID34 derived the new counterfactual risk minimization framework, that added the sample variance as a regularization term into conventional empirical risk minimization objective. However, the parametrization of policies in their work as linear stochastic models has limited representation power, and the computation of sample variance regularization requires iterating through all training samples. 
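To make the cost of that regularizer concrete, the snippet below (synthetic numbers; the trade-off parameter λ and the data are placeholders rather than values from the cited work) writes out a variance-regularized objective of this kind. Both the mean and the sample variance of the importance-weighted losses u_i = δ_i h(y_i|x_i)/h_0(y_i|x_i) are statistics of the whole logged set, so every objective or gradient evaluation touches all training samples.

```python
import numpy as np

rng = np.random.default_rng(7)
N, lam = 5000, 0.5

# toy logged quantities: propensities p_i = h_0(y_i|x_i), losses delta_i, and the new
# policy's probabilities h(y_i|x_i) for the logged actions (all synthetic here)
p = rng.uniform(0.05, 0.9, size=N)
delta = rng.uniform(0.0, 1.0, size=N)
h_logged = np.clip(p + 0.1 * rng.normal(size=N), 0.01, 1.0)

def variance_regularized_objective(h_vals, p, delta, lam):
    """Empirical risk plus a sample-variance penalty; needs a pass over the full dataset."""
    u = delta * h_vals / p               # importance-weighted losses u_i
    risk = u.mean()
    var = u.var(ddof=1)                  # sample variance of the u_i
    return risk + lam * np.sqrt(var / len(u)), risk, var

obj, risk, var = variance_regularized_objective(h_logged, p, delta, lam)
print(f"risk={risk:.4f}  sample variance={var:.4f}  regularized objective={obj:.4f}")
```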
Although a first-order approximation technique was proposed in the paper, deriving accurate and efficient end-to-end training algorithms under this framework still remains a challenging task. Our contribution in this paper is three-fold:1. By drawing a connection to the generalization error bound of importance sampling BID6 ), we propose a new learning principle for off-policy learning with bandit feedback. We explicitly regularize the generalization error of the new policy by minimizing the distribution divergence between it and the logging policy. The proposed learning objective automatically trade off between emipircal risk and sample variance. 2. To enable end-to-end training, we propose to parametrize the policy as a neural network, and solves the divergence minimization problem using recent work on variational divergence minimization BID26 ) and Gumbel soft-max BID18 ) sampling. 3. Our experiment evaluation on benchmark datasets shows significant improvement in performance over conventional baselines, and case studies also corroborates the soundness of our theoretical proofs. We first review the framework of off-policy learning with logged bandit feedback introduced in BID34. A policy maps an input x ∈ X to a structured (discrete) output y ∈ Y. For example, the input x can be profiles of users, and we recommend movies of relevance to the users as the output y; or in the reinforcement learning setting, the input is the trajectory of the agent, and the output is the action the agent should take in the next time point. We use a family of stochastic policies, where each policy defines a posterior distribution over the output space given the input x, parametrized by some θ, i.e., h θ (Y|x). Note that here a distribution which has all its probability density mass on one action corresponds to a deterministic policy. With the distribution h(Y|x), we take actions by sampling from it, and each action y has a probability of h(y|x) being selected. In the discussion later, we will use h and h(y|x) interchangeably when there will not create any confusion. In online systems, we observe feedbacks δ(x, y; y *) for the action y sampled from h(Y|x) by comparing it to some underlying'best' y * that was not revealed to the system. For example, in recommendation system, we can use a scalar loss function δ(x, y; y DISPLAYFORM0 with smaller values indicating higher satisfaction with recommended items. The expected risk of a policy h(Y|x) is defined as DISPLAYFORM1, and the goal of off-policy learning is to find a policy with minimum expected risk on test data. In the off-line logged learning setting, we only have data collected from a logging policy h 0 (Y|x), and we aim to find an improved policy h(Y|x) that has lower expected risks R(h) < R(h 0). Specifically, the data we will use will be DISPLAYFORM2, where δ i and p i are the observed loss feedback and the logging probability (also called propensity score), and N is the number of training samples. Two main challenges are associated with this task: 1) If the distribution of a logging policy is skewed towards a specific region of the whole space, and doesn't have support everywhere, feedbacks of certain actions cannot be obtained and improvement for these actions is not possible as a . 2) since we cannot compute the expectation exactly, we need to resort to empirical estimation using finite samples, which creates generalization error and needs additional regularization. 
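Before turning to concrete estimators, the following toy sketch (made-up dimensions and a simple linear softmax policy, introduced here only for illustration) shows the objects just defined in code: a parametrized stochastic policy h_θ(y|x), sampling of actions together with their propensities, and the resulting logged dataset D = {(x_i, y_i, δ_i, p_i)}.

```python
import numpy as np

rng = np.random.default_rng(3)
n_features, n_actions, N = 8, 5, 1000

def policy_probs(theta, x):
    """Softmax policy h_theta(y|x) over a small discrete output space."""
    logits = x @ theta                    # theta has shape (n_features, n_actions)
    z = np.exp(logits - logits.max())
    return z / z.sum()

theta_log = 0.5 * rng.normal(size=(n_features, n_actions))   # logging policy h_0
X = rng.normal(size=(N, n_features))
y_star = rng.integers(0, n_actions, size=N)   # hidden "best" actions, never fully revealed

logged = []
for x, ys in zip(X, y_star):
    probs = policy_probs(theta_log, x)
    y = rng.choice(n_actions, p=probs)        # action actually taken by h_0
    delta = float(y != ys)                    # observed scalar loss feedback
    logged.append((x, y, delta, probs[y]))    # record (x_i, y_i, delta_i, p_i)

print("example record: y =", logged[0][1], " delta =", logged[0][2],
      " propensity =", round(logged[0][3], 3))
```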
A vanilla approach to solve the problem is propensity scoring approach using importance sampling BID28 ), by accounting for the distribution mismatch between h and h 0. Specifically, we can rewrite the expected risk w.r.t h as the risk w.r.t h 0 using an importance reweighting: DISPLAYFORM3 With the collected historic dataset D, we can estimate the empirical riskR D (h), short asR(h) DISPLAYFORM4 2.2 COUNTERFACTUAL RISK MINIMIZATION BID34 pointed out several flaws with the vanilla approach, namely, not being invariant to loss scaling, large and potentially unbounded variance. To regularize the variance, the authors proposed a regularization term for sample variance derived from empirical Bernstein bounds. The modified objective function to minimize is now: DISPLAYFORM5, where DISPLAYFORM6 u i is the average of {u i} obtained from training data, and V ar(ū) is the sample variance of {u i}.As the variance term is dependent on the whole dataset, stochastic training is difficult, the authors approximated the regularization term via first-order Taylor expansion and obtained a stochastic optimization algorithm. Despite its simplicity, such first-order approximation neglects the non-linear terms from second-order and above, and introduces approximation errors while trying to reduce the sample variance. Instead of estimating variance empirically from the samples, which prohibits direct stochastic training, the fact that we have a parametrized version of the policy h(Y|x) motivates us to think: can we derive a variance bound directly from the parametrized distribution?We first note that the empirical risk termR(h) is the average loss reweigthed by importance sampling function DISPLAYFORM0 h0(y|x), and a general learning bound exist for importance sampling weights. Let z be a random variable and the importance sampling weight w(z) = p(z) p0(z), where p and p 0 are two probability density functions, the following identity holds Lemma 1. BID6 ) For a random variable z, let p(z) and p 0 (z) be two distribuion density function defined for z, and l(z) be a loss function of z bounded in. Let w = w(z) = p(z)/p 0 (z) be the importance sampling weight, the following identity holds: DISPLAYFORM1, where D 2 is the Rényi divergence D α BID27 ) with α = 2, i.e. squared Chi-2 divergence. Based on this lemma, we can derive an upper bound for the second moment of the weighted loss Theorem 1. Let X be a random variable distributed according to distribution P with density p(x), Y be a random variable, and δ(x, y) be a loss function over DISPLAYFORM2 For two sampling distributions of y, h(y|x) and h 0 (y|x), define their conditional divergence as d 2 (h(y|x)||h 0 (y|x); P(x)), we have DISPLAYFORM3 The bound is similar to Eq. with the difference that we are now working with a joint distribution over x, y. Detailed proofs can be found in Appendix 1.From the above theorem, we are able to derive a generalization bound between the expected risk R(h) and empirical riskR(h) using the distribution divergence function as Theorem 2. Let R h be the expected risk of the new policy on loss function δ, andR h be the emprical risk. We additionally assume the divergence is bounded by DISPLAYFORM4 Then with probability at least 1 − η, DISPLAYFORM5 The proof of this theorem is an application of Bernstein inequality and the second moment bound, and detailed proof is in Appendix 7. 
This highlights the bias-variance trade-offs as seen in empirical risk minimization (ERM) problems, whereR h approximates the emipircal risk/ bias, and the third term characterize the variance of the solution with distribution divergence (Recall V ar(w) = d 2 (h||h 0) − 1). It thus motivates us that in bandit learning setting, instead of directly optimizing the reweighed loss and suffer huge variance in test setting, we can try to minimize the variance regularized objectives as DISPLAYFORM6 λ = 2L 2 log 1/η is a model hyper-parameter controlling the trade-off between empirical risk and model variance, but we are still faced with the challenge of setting λ empirically and the difficuty in optimizing the objective (See Appendix for a comparison). Thus, in light of the recent success of distributionally robust learning, we explore an alternative formulation of the above regularized ERM in the next subsection. Instead of solving a'loss + regularizer' objective function, we here study a closely related constrained optimizationf formulation, whose intuition comes from the method of Langaragian mutliplier for constrained optimization. The new formulation is: DISPLAYFORM0, where ρ is a pre-determined constant as the regularization hyper-parameter. By applying Theorem, for a policy h, we have DISPLAYFORM1 This inequality shows that the robust objectiveR d(h||h0≤ρ) (h) is also a good surrogate of the true risk R(h), with their difference bouned by the regularization hyper-parameter ρ and approaches 0 when N → ∞.At first glance, the new objective function removes the needs to compute the sample variance in existing bounds, but when we have a parametrized distribution of h(y|x), and finite samples DISPLAYFORM2, estimating the divergence function is not an easy task. In the next subsection, we will present how recent f-gan networks for variational divergence minimization BID26 ) and Gumbel soft-max sampling BID18 ) can help solve the task. Discussion: Possibility of Counterfactual Learning: One interesting aspect of our bounds also stresses the need for the stochasticity of the logging policy BID21 ). For a deterministic logging policy, if the corresponding probability distribution can only have some peaked masses, and zeros elsewhere in its domain, our intution suggests that learning will be difficult, as those regions are never explored. Our theory well reflects this intuition in the calculation of the divergence term, the integral of form y h 2 (y|x)/h 0 (y|x)dy. A deterministic policy has a non-zero measure region of h 0 (Y|x) with probability density of h 0 (y|x) = 0, while the corresponding h(y|x) can have finite values in the region. The ing integral is thus unbounded, and in turn induces an unbounded generalization bound, making counterfactual learning in this case not possible. The derived variance regularized objective requires us to minimize the square root of the condi- DISPLAYFORM0 dy. For simplicity, we can examine the term inside the expectation operation first. With simple calculation, we have DISPLAYFORM1, where f (t) = t 2 − 1 is a convex function in the domain {t : t ≥ 0} with f = 0. Combining with the expectation operator gives a minimization objective of D f (h||h 0 ; P(X)) (+1 omitted as constant).The above calculation draws connection between our divergence and the f-divergence measure BID25 ). 
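For a discrete action space and a fixed context x, the divergence in question can be written out explicitly, which makes two of the statements above easy to verify numerically: the identity Var(w) = d_2(h||h_0) − 1, and the blow-up of d_2 as the logging policy approaches a deterministic one. The snippet below is a small self-contained illustration with made-up scores, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def d2(h, h0):
    # d_2(h || h_0) = sum_y h(y)^2 / h_0(y): exponentiated Renyi-2 divergence,
    # equal to the chi-square divergence plus one.
    return np.sum(h ** 2 / h0)

h = np.array([0.25, 0.25, 0.25, 0.25])         # new policy at a fixed x, 4 actions
scores = np.array([2.0, 1.0, 0.0, -1.0])

for temp in [2.0, 1.0, 0.5, 0.1]:
    h0 = np.exp(scores / temp); h0 /= h0.sum()  # logging policy, peakier as temp shrinks
    ys = rng.choice(4, size=500000, p=h0)       # Monte Carlo estimate of Var(w) + 1
    w = h[ys] / h0[ys]
    print(f"temp={temp:4.1f}  exact d_2={d2(h, h0):12.2f}  MC Var(w)+1={w.var() + 1:12.2f}")
# For moderate temperatures the Monte Carlo value matches the exact d_2. As h_0
# concentrates its mass, d_2 explodes while the naive empirical estimate breaks down
# (the rare, huge weights are almost never sampled); in the deterministic limit the
# divergence is unbounded, which is why counterfactual learning then becomes impossible.
```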
Follow the f-GAN for variational divergence minimization method proposed in BID26, we can reach a lower bound of the above objective as DISPLAYFORM2 For the second equality, as f is a convex function and applying Fenchel convex duality (f * = sup u {u v − f (u)}) gives the dual formulation. Because the expectation is taken w.r.t to x while the supreme is taken w.r.t. all functions T, we can safely swap the two operators. We note that the bound is tight when T 0 (x) = f (h/h 0), where f is the first order derivative of f as f (t) = 2t BID25 ).The third inequality follows because we restrict T to a family of functions instead of all functions. Luckily, the universal approximation theorem of neural networks BID15 states that neural networks with arbitrary number of hidden units can approximate continous functions on a compact set with any desired precision. Thus, by choosing the family of T to be the family of neural networks, the equality condition of the second equality can be satisfied theoretically. The final objective is a saddle point of a function T (x, y): X × Y → R that maps input pairs to a scalar value, and the policy we want to learn h(Y|x) acts as a sampling distribution. Although being a lower bound with achievable equality conditions, theoretically, this saddle point trained with mini-batch estimation is a consistent estimator of the true divergence. We use D f = sup T T dhdx − f * (T)dh 0 dx to denote the true divergence, and DISPLAYFORM3 dx the empirical estimator we use, whereĥ and h 0 are the emipircal distribution obtained by sampling from the two distribution respectively. DISPLAYFORM4 Proof. Let's start by decomposing the estimation error. DISPLAYFORM5, where the first term of error comes from restricting the parametric family of T to a family of neural networks, and the second term of error involves the approximation error of an emipirical mean estimation to the true distribution. By the universal approximation theorem, we have e 0 = 0, and that ∃T ∈ T, such that T = T 0.For the second term e 1, we plug in T 0 and have it rewritten as DISPLAYFORM6 For the first term, DISPLAYFORM7 we can see that this is the diffrence between an empirical distribution and the underlying population distribution. We can verify that the strong law of large numbers (SLLN) applies. By optimality condition, T 0 = h(y|x) h0(y|x), where both h and h 0 are probability density functions. By the bounded loss assumption, the ratio is integrable. Similarly, f * (T 0) = 2T 0 − 1 is also integrable. Thus, we can apply SLLN and conclude the term → 0. For the second term, we can apply Theroem 5 from [] and also obtain it → a.s.0.Again, a generative-adversarial approach BID12 ) can be applied. Toward this end, we represent the T function as a discriminator network parametrized as T w (x, y). We then parametrize the distribution of our policy h(y|x) as another generator neural network h θ (y|x) mapping x to the probability of sampling y. For structured output problems with discrete values of y, to allow the gradients of samples obtained from sampling backpropagated to all other parameters, we use the Gumbel soft-max sampling BID18 ) methods for differential sampling from the distribution h(y|x). We list the complete training procedure Alg. 1 for completeness. 
DISPLAYFORM8 sampled from logging policy h 0; a predefined threshold D 0; an initial generator distribution h θ 0 (y|x); an initial discriminator function T w 0 (x, y); max iteration I Result: An optimized generator h θ * (y|x) distribution that has minimum divergence to h 0 initialization; whileD f (h||h 0 ; P(X)) > D 0 or iter < I do Sample a mini-batch'real' samples (x i, y i) from D; Sample a mini-batch x from D, and construct'fake' samples (x i,ŷ i) by samplingŷ from h θ t (y|x) with Gubmel soft-max; Update DISPLAYFORM9 For our purpose of minimizing the variance regularization term, we can similarly derive a training algorithm, as the gradient of t → √ t + 1 can also be backpropagated. With the above two components, we are now ready to present the full treatment of our end-to-end learning for counterfactual risk minimization from logged data. The following algorithm solve the robust regularized formulation and for completeness, training for the original ERM formulation in Sec. 3.1 (referred to co-training version in the later experiment sections) is included in Appendix 7. DISPLAYFORM0 sampled from h 0; regularization hyper-parameter ρ, and maximum iteration of divergence minimization steps I, and max epochs for the whole algorithm M AX Result: An optimized generator h * θ (y|x) that is an approximate minimizer of R(w) initialization; while epoch < M AX do / * Update θ to minimize the reweighted loss * / Sample a mini-batch of m samples from D; Update θ t+1 = θ t − η θ g 1; / * Update discriminator and generator for divergence minimization * / Call Algorithm 1 to minimize the divergence D 2 (h||h 0 ; P(X)) with threshold = ρ, and max iter set to I; end Algorithm 2: Minimizing Variance Regularized Risk -Separate TrainingThe algorithm works in two seperate training steps: 1) update the parameters of the policy h to minimize the reweighed loss 2) update the parameters of the policy/ generator and the discriminator to regularize the variance thus to improve the generalization performance of the new policy. Exploiting historic data is an important problem in multi-armed bandit and its variants such as contextual bandit and has wide applications BID31; BID30; BID4 ). Approaches such as doubly robust estimators BID8 ) have been proposed, and recent theoretical study explored the finite-time minimax risk lower bound of the problem ), and an adaptive learning algorithm using the theoretical analysis. Bandits problems can be interpreted as a single-state reinforcement learning (RL) problems, and techniques including doubly robust estimators BID19;; BID24 ) have also been extended to RL domains. Conventional techniques such as Q function learning, and temporal difference learning BID33 ) are alternatives for off-policy learning in RL by accounting for the Markov property of the decision process. Recent works in deep RL studies have also addressed off-policy updates by methods such as multi-step bootstrapping (Mahmood et al. FORMULA3), off-policy training of Q functions BID14 ).Learning from logs traces backs to BID16 and BID28, where propensity scores are applied to evaluate candidate policies. In statistics, the problem is also described as treatment effect estimation (Imbens FORMULA4), where the focus is to estimate the effect of an intervention from observational studies that are collected by a different intervention. 
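As a concrete, deliberately small rendering of the two-step scheme just outlined, the sketch below trains a linear softmax policy on a synthetic logged dataset: step one takes a gradient step on the importance-reweighted loss, step two runs a few discriminator and generator updates on the variational objective built from the Fenchel conjugate f*(u) = u²/4 + 1 of f(t) = t² − 1, with straight-through Gumbel-softmax samples from the generator. Network sizes, learning rates, batch sizes and the synthetic data are placeholders, not the settings used in the reported experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_feat, n_act, N = 8, 5, 2000

# synthetic logged bandit data from a fixed softmax logging policy h_0
W0 = 0.5 * torch.randn(n_feat, n_act)
X = torch.randn(N, n_feat)
h0 = torch.softmax(X @ W0, dim=1)                    # h_0(y|x) for every x
y = torch.multinomial(h0, 1).squeeze(1)              # logged actions
p = h0[torch.arange(N), y]                           # propensities p_i
y_star = torch.randint(0, n_act, (N,))
delta = (y != y_star).float()                        # observed 0/1 losses

gen = nn.Linear(n_feat, n_act)                       # logits of the policy h_theta(y|x)
disc = nn.Sequential(nn.Linear(n_feat + n_act, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-2)

def f_star(u):                                       # Fenchel conjugate of f(t) = t^2 - 1
    return u ** 2 / 4.0 + 1.0

def variational_divergence(idx):
    """Batch estimate of the lower bound E_h[T] - E_h0[f*(T)] on the divergence."""
    x = X[idx]
    y_fake = F.gumbel_softmax(gen(x), tau=0.5, hard=True)   # pairs sampled from h_theta
    y_real = F.one_hot(y[idx], n_act).float()               # pairs observed under h_0
    return (disc(torch.cat([x, y_fake], 1)).mean()
            - f_star(disc(torch.cat([x, y_real], 1))).mean())

for epoch in range(30):
    for idx in torch.randperm(N).split(100):
        # step 1: importance-reweighted loss of the new policy
        probs = torch.softmax(gen(X[idx]), dim=1)
        w = probs[torch.arange(len(idx)), y[idx]] / p[idx]
        loss = (delta[idx] * w).mean()
        opt_g.zero_grad(); loss.backward(); opt_g.step()

        # step 2: divergence minimization (discriminator ascends, generator descends)
        for _ in range(3):
            d_obj = -variational_divergence(idx)
            opt_d.zero_grad(); d_obj.backward(); opt_d.step()
        g_obj = variational_divergence(idx)
        opt_g.zero_grad(); g_obj.backward(); opt_g.step()

final = (delta * torch.softmax(gen(X), 1)[torch.arange(N), y] / p).mean()
print("final reweighted training loss:", float(final))
```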
BID5 derived unbiased counterfactual estimators to study an example of computational advertising; another set of techniques reduce the bandit learning to a weighted supervised learning problems (Zadrozny et al. FORMULA4), but is shown to have poor generalization performance BID4 ).Although our variance regularization aims at off-policy learning with bandit feedback, part of the proof comes from the study of generalization bounds in importance sampling problems BID6 ), where the original purpose was to account for the distribution mismatch between training data and testing distribution, also called covariate shift, in supervised learning. BID7 also discussed variance regularized empirical risk minimization for supervised learning with a convex objective function, which has connections to distributionally robust optimization problem BID2 ). It will be of further interest to study how our divergence minimization technique can be applied to supervised learning and domain adaptation BID32; BID13 ) problems as an alternative to address the distribution match issue. Regularization for our objective function has close connection to the distributionally robust optimization techniques BID3 ), where instead of minizing the emiprical risk to learn a classifier, we minimize the supreme the emipirical risk over an ellipsoid uncertainty set. Wasserstein distance between emipircal distribution and test distribution is one of the most well studied contraint and is proven to achieve robust generalization performance (Esfahani & Kuhn ( For empirical evaluation of our proposed algorithms, we follow the conversion from supervised learning to bandit feedback method BID0). For a given supervised dataset DISPLAYFORM0, we first construct a logging policy h 0 (Y|x), and then for each sample x i, we sample a prediction y i ∼ h 0 (y|x i), and collect the feedback as δ(y * i, y i). For the purpose of benchmarks, we also use the conditional random field (CRF) policy trained on 5% of D * as the logging policy h 0, and use hamming loss, the number of incorrectly misclassified labels between y i and y * i, as the loss function δ BID34 ). To create bandit feedback datasets D = {x i, y i, δ i, p i}, each of the samples x i were passed four times to the logging policy h 0 and sampled actions y i were recorded along with the loss value δ i and the propensity score p i = h 0 (y i |x i).In evaluation, we use two type of evaluation metrics for the probabilistic policy h(Y|x). The first is the expected loss (referred to as 'EXP' later) R(h) = 1 Ntest i E y∼h(y|xi) δ(y * i, y), a direct measure of the generalization performance of the learned policy. The second is the average hamming loss of maximum a posteriori probability (MAP) prediction y MAP = arg max h(y|x) derived from the learned policy, as MAP is a faster way to generate predictions without the need for sampling in practice. However, since MAP predictions only depend on the regions with highest probability, and doesn't take into account the diverse of predictions, two policies with same MAP performance could have very different generalization performance. Thus, a model with high MAP performance but low EXP performance might be over-fitting, as it may be centering most of its probability masses in the regions where h 0 policy obtained good performance. 
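The conversion and the two evaluation metrics can be written down compactly. Below is a toy version that uses synthetic data and a factorized per-label policy (both assumptions made here for brevity; the paper's logging policy is a CRF over structured outputs): each input is replayed four times through the logging policy, Hamming-loss feedback is recorded with propensities, and a policy is then scored by its expected loss (EXP) and by the loss of its MAP prediction.

```python
import numpy as np

rng = np.random.default_rng(5)
n_feat, n_labels, N = 10, 4, 500

def label_probs(W, x):
    # factorized policy: P(label j = 1 | x) = sigmoid(w_j . x), labels independent
    return 1.0 / (1.0 + np.exp(-(x @ W)))

W_log = 0.5 * rng.normal(size=(n_feat, n_labels))       # logging policy h_0
X = rng.normal(size=(N, n_feat))
Y_star = (rng.random((N, n_labels)) < 0.5).astype(int)  # true labels, hidden at bandit time

# supervised -> bandit conversion with 4 replays per example
bandit = []
for x, ys in zip(X, Y_star):
    q = label_probs(W_log, x)
    for _ in range(4):
        y = (rng.random(n_labels) < q).astype(int)      # sampled prediction
        delta = int(np.sum(y != ys))                    # Hamming-loss feedback
        prop = float(np.prod(np.where(y == 1, q, 1 - q)))  # propensity h_0(y|x)
        bandit.append((x, y, delta, prop))

def exp_loss(W, X, Y_star, samples=50):
    """EXP metric: Monte Carlo estimate of E_{y ~ h}[Hamming(y, y*)], averaged over inputs."""
    total = 0.0
    for x, ys in zip(X, Y_star):
        q = label_probs(W, x)
        draws = (rng.random((samples, n_labels)) < q).astype(int)
        total += np.mean(np.sum(draws != ys, axis=1))
    return total / len(X)

def map_loss(W, X, Y_star):
    """MAP metric: Hamming loss of the most probable prediction."""
    preds = (label_probs(W, X) > 0.5).astype(int)
    return np.mean(np.sum(preds != Y_star, axis=1))

print("logging policy:  EXP =", round(exp_loss(W_log, X, Y_star), 3),
      " MAP =", round(map_loss(W_log, X, Y_star), 3))
```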
Baselines Vanilla importance sampling algorithms using inverse propensity score (IPS), and the counterfactual risk minimization algorithm from Swaminathan & Joachims (2015a) (POEM) are compared, with both L-BFGS optimization and stochastic optimization solvers. The hyperparameters are selected by performance on validation set and more details of their methods can be found in the original paper BID34 ).Neural network policies without divergence regularization (short as "NN-NoReg" in later discussions) is also compared as baselines, to verify the effectiveness of variance regularization. Dataset We use four multi-label classification dataset collected in the UCI machine learning repo BID1 ), and perform the supervised to bandit conversion. We report the statistics in TAB2 in the Appendix. For these datasets, we choose a three-layer feed-forward neural network for our policy distribution, and a two or three layer feed-forward neural network as the discriminator for divergence minimization. Detailed configurations can be found in the Appendix 7.For benchmark comparison, we use the separate training version 2 as it has faster convergence and better performance (See Sec. 6.5 for an empirical comparison). The networks are trained with Adam BID20 ) of learning rate 0.001 and 0.01 respectively for the reweighted loss and the divergence minimization part. We used PyTorch to implement the pipelines and trained networks with Nvidia K80 GPU cards. Codes for reproducing the as well as preprocessed data can be downloaded with the link 1Results by an average of 10 experiment runs are obtained and we report the two evaluation metrics in TAB0. We report the regularized neural network policies with two Gumbel-softmax sampling schemes, soft Gumbel soft-max (NN-Soft), and straight-through Gumbel soft-max (NN-Hard).As we can see from the , by introducing a neural network parametrization of the polices, we are able to improve the test performance by a large margin compared to the baseline CRF policies, as the representation power of networks are often reported to be stronger than other models. The introduction of additional variance regularization term (comparing NN-Hard/Soft to NN-NoReg), we can observe an additional improvement in both testing loss and MAP prediction loss. We observe no significant difference between the two Gumbel soft-max sampling schemes. To study the effectiveness of variance regularization quantitatively, we vary the maximum number of iterations (I in Alg. 2) we take in each divergence minimization sub loop. For example,'NNHard-10' indicates that we use ST Gubmel soft-max and set the maximum number of iterations to 10. Here we set the thresholds for divergence slightly larger so maximum iterations are executed so that are more comparable. We plot the expected loss in test sets against the epochs average over 10 runs with error bars using the dataset yeast. As we can see from the figure, models with no regularization (gray lines in the figure) have higher loss, and slower convergence rate. As the number of maximum iterations for divergence minimization increases, the test loss decreased faster and the final test loss is also lower. This behavior suggests that by adding the regularization term, our learned policies are able to generalize better to test sets, and the stronger the regularization we impose by taking more divergence minimization steps, the better the test performance is. The regularization also helps the training algorithm to converge faster, as shown by the trend. 
Our theoretical bounds implies that the generalization performance of our algorithm improves as the number of training samples increases. We vary the number of passes of training data x was passed to the logging policy to sample an action y, and vary it in the range 2 [1,2,...,8] with log scales. When the number of training samples in the bandit dataset increases, both models with and without regularization have an increasing test performance in the expected loss and reaches a relatively stable level in the end. Moreover, regularized policies have a better generalization performance compared to the model without regularization constantly. This matches our theoretical intuitions that explicitly regularizing the variance can help improve the generalization ability, and that stronger regularization induces better generalization performance. But as indicated by the MAP performance, after the replay of training samples are more than 2 4, MAP prediction performance starts to decrease, which suggests the models may be starting over-fitting already. In this section, we use some experiments to present the difference in two training schemes: cotraining in Alg. 3 and the easier version Alg. 2. For the second algorithm, we also compare the two Gumbel-softmax sampling schemes in addition, denoted as Gumbel-softmax, and Straight-Through (ST) Gumbel-softmax respectively. The figures suggest that blending the weighted loss and distribution divergence performs slightly better than the model without regularization, however, the training is much more difficult compared to the separate training scheme, as it's hard to balance the gradient of the two parts of the objective function. We also observe no significant performance difference between the two sampling schemes of the Gumbel-softmax. In this section, we discuss how the effect of logging policies, in terms of stochasticity and quality, will affect the learning performance and additional visualizations of other metrics can be found in the Appendix 7.As discussed before, the ability of our algorithm to learn an improved policy relies on the stochasticity of the logging policy. To test how this stochasticity affects our learning, we modify the parameter of h 0 by introducing a temperature multiplier α. For CRF logging policies, the prediction is made by normalizing values of w T φ(x, y), where w is the model parameter and can be modified by α with w → αw. As α becomes higher, h 0 will have a more peaked distribution, and ultimately become a deterministic policy with α → ∞.We varied α in the range of 2 [−1,1,...,8], and report the average ratio of expected test loss to the logging policy loss of our algorithms (Y-axis in Fig 4a, where smaller values indicate a larger improvement). We can see that NN polices are performing better than logging policy when the stochasticity of h 0 is sufficient, while after the temperature parameter increases greater than 2 3, it's much harder and even impossible (ratio ¿ 1) to learn improved NN policies. We also note here that the stochasticity doesn't affect the expected loss values themselves, and the drop in the ratios mainly ed from the decreased loss of the logging policy h 0. In addition, comparing within NN policies, policies with stronger regularization have slight better performance against models with weaker ones, which in some extent shows the robustness of our learning principle. 
The decreasing stochasticity of h 0 makes it harder to obtain an improved NN policy, and our regularization can help the model be more robust and achieve better generalization performance. b) As h 0 improves, the models constantly outperform the baselines, however, the difficulty is increasing with the quality of h 0. Note: more visualizations of other metrics can be found in the appendix 7.Finally, we discusses the impact of logging policies to the our learned improved policies. Intuitively, a better policy that has lower hamming loss can produce bandit datasets with more correct predictions, however, it's also possible that the sampling biases introduced by the logging policy is larger, and such that some predictions might not be available for feedbacks. To study the trade-off between better policy accuracy and the sampling biases, we vary the proportion of training data points used to train the logging policy from 0.05 to 1, and compare the performance of our improved policies obtained by in Fig. 4b. We can see that as the logging policy improves gradually, both NN and NN-Reg policies are outperforming the logging policy, indicating that they are able to address the sampling biases. The increasing ratios of test expected loss to h 0 performance, as a proxy for relative policy improvement, also matches our intuition that h 0 with better quality is harder to beat. In this paper, we started from an intuition that explicitly regularizing variance can help improve the generalization performance of off-policy learning for logged bandit datasets, and proposed a new training principle inspired by learning bounds for importance sampling problems. The theoretical discussion guided us to a training objective as the combination of importance reweighted loss and a regularization term of distribution divergence measuring the distribution match between the logging policy and the policy we are learning. By applying variational divergence minimization and Gumbel soft-max sampling techniques, we are able to train neural network policies end-to-end to minimize the variance regularized objective. Evaluations on benchmark datasets proved the effectiveness of our learning principle and training algorithm, and further case studies also verified our theoretical discussion. Limitations of the work mainly lies in the need for the propensity scores (the probability an action is taken by the logging policy), which may not always be available. Learning to estimate propensity scores and plug the estimation into our training framework will increase the applicability of our algorithms. For example, as suggested by BID6, directly learning importance weights (the ratio between new policy probability to the logging policy probability) has comparable theoretical guarantees, which might be a good extension for the proposed algorithm. Although the work focuses on off-policy from logged data, the techniques and theorems may be extended to general supervised learning and reinforcement learning. It will be interesting to study how A. PROOFS DISPLAYFORM0 We apply Lemma 1 to z, importance sampling weight function w(z) = p(z)/p 0 (z) = h(y|x)/h 0 (y|x), and loss l(z)/L, we have DISPLAYFORM1 Thus, we have DISPLAYFORM2 Proof. 
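For softmax-type policies the temperature manipulation w → αw is a one-line change; the snippet below (with illustrative scores only) shows how the entropy of the sampling distribution collapses as α grows, which is the sense in which h_0 becomes more deterministic and feedback for alternative outputs stops being observed.

```python
import numpy as np

scores = np.array([1.5, 0.5, 0.0, -1.0])       # w^T phi(x, y) for four candidate outputs

def softmax(v):
    z = np.exp(v - v.max())
    return z / z.sum()

for alpha in [2.0 ** k for k in [-1, 0, 2, 4, 8]]:
    h0 = softmax(alpha * scores)               # temperature-scaled logging policy
    entropy = -np.sum(h0 * np.log(h0 + 1e-12))
    print(f"alpha={alpha:7.1f}  h_0={np.round(h0, 3)}  entropy={entropy:.3f}")
# Large alpha concentrates h_0 on its argmax, so the other outputs are almost never
# explored and off-policy improvement becomes difficult, as observed in the study above.
```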
For a single hypothesis denoted as δ with values DISPLAYFORM3 By Lemma 1, the variance can be bounded using Reni divergence as DISPLAYFORM4 Applying Bernstein's concentration bounds we have DISPLAYFORM5 σ 2 (Z)+ LM/3 ), we can obtain that with probability at least 1 − η, the following bounds for importance sampling of bandit learning holds DISPLAYFORM6, where the second inequality comes from the fact that DISPLAYFORM7 sampled from logging policy h 0; regularization hyper-parameter λ Result: An optimized generator h * θ (y|x) that is an approximate minimizer of R(w) initialization; while Not Converged do / * Update discriminator * / Sample a mini-batch of'fake' samples (x i,ŷ i) with x i from D andŷ i ∼ h θ t (y|x i); Sample a mini-batch of'real' samples (x i, y i) from D; Update w t+1 = w t + η w ∂F (T w, h θ); / * Update generator * / Sample a mini-batch of m samples from D; Sample a mini-batch of m 1'fake' samples; Estimate the generator gradient as g 2 = F (T w, h θ); Update θ t+1 = θ t − η θ (g 1 + λg 2); end Algorithm 3: Minimizing Variance Regularized Risk -Co-Training Version We report the statistics of the datasets as in the following table. For the latter two datasets TMC, (c) The effect of stochasticity of h0 vs ratio of test loss with MAP Figure 5: As the logging policy becomes more deterministic, NN policies are still able to find improvement over h 0 in a) expected loss and b) loss with MAP predictions. c) We cannot observe a clear trend in terms of the performance of MAP predictions. We hypothesize it from that h 0 policy already has good MAP prediction performance by centering some of the masses. While NN policies can easily pick up the patterns, it will be difficult to beat the baselines. We believe this phenomenon worth further investigation. (c) The quality of h0 vs ratio of expected test loss with MAP Figure 6: a) As the quality of the logging policy increases, NN policies are still able to find improvement over h 0 in expected loss. and b) c) For MAP predictions, however, it will be really difficult for NN policies to beat if the logging policy was already exposed to full training data and trained in a supervised fashion.
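The two sampling schemes compared above, Gumbel-softmax and its Straight-Through variant, supply differentiable samples ŷ ∼ h_θ(y|x) when the policy is trained end-to-end. The sketch below is a generic PyTorch implementation under assumed shapes and an illustrative temperature, not the authors' code.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, straight_through=False):
    """Draw a differentiable (approximately one-hot) sample from a categorical
    distribution parameterized by `logits` (shape: ..., n_classes)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    if straight_through:
        # Forward pass uses the hard one-hot sample, backward pass the soft relaxation.
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
        return y_hard + (y_soft - y_soft.detach())
    return y_soft

logits = torch.randn(2, 5, requires_grad=True)
sample = gumbel_softmax_sample(logits, tau=0.5, straight_through=True)
sample.sum().backward()   # gradients flow through the soft relaxation
```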
For off-policy learning with bandit feedback, we propose a new variance-regularized counterfactual learning algorithm, which has both theoretical foundations and superior empirical performance.
1,771
scitldr
We outline new approaches to incorporate ideas from deep learning into wave-based least-squares imaging. The aim, and main contribution of this work, is the combination of handcrafted constraints with deep convolutional neural networks, as a way to harness their remarkable ease of generating natural images. The mathematical basis underlying our method is the expectation-maximization framework, where data are divided in batches and coupled to additional "latent" unknowns. These unknowns are pairs of elements from the original unknown space (but now coupled to a specific data batch) and network inputs. In this setting, the neural network controls the similarity between these additional parameters, acting as a "center" variable. The ing problem amounts to a maximum-likelihood estimation of the network parameters when the augmented data model is marginalized over the latent variables. In least-squares imaging, we are interested in inverting the following inconsistent ill-conditioned linear inverse problem: In this expression, the unknown vector x represents the image, y i, i = 1,..., N the observed data from N source experiments and A i the discretized linearized forward operator for the ith source experiment. Despite being overdetermined, the above least-squares imaging problem is challenging. The linear systems A i are large, expensive to evaluate, and inconsistent because of noise and/or linearization errors. As in many inverse problems, solutions of problem 1 benefit from adding prior information in the form of penalties or preferentially in the form of constraints, yielding with C representing a single or multiple (convex) constraint set(s). This approach offers the flexibility to include multiple handcrafted constraints. Several key issues remain, namely; (i) we can not afford to work with all N experiments when computing gradients for the above data-misfit objective; (ii) constrained optimization problems converge slowly; (iii) handcrafted priors may not capture complexities of natural images; (iv) it is non-trivial to obtain uncertainty quantification information. To meet the computational challenges that come with solving problem 2 for non-differentiable structure promoting constraints, such as the 1 -norm, we solve problem 2 with Bregman iterations for a batch size of one. The kth iteration reads with A k the adjoint of A k, where, and being the projection onto the (convex) set and 2 the dynamic steplength. Contrary to the Iterative Shrinkage Thresholding Algorithm (ISTA), we iterate on the dual variablex. Moreover, to handle more general situations and to ensure we are for every iteration feasible (= in the constraint set) we replace sparsity-promoting thresholding with projections that ensure that each model iterate remains in the constraint set. As reported in Witte et al., iterations 3 are known to converge fast for pairs {y k, A k} that are randomly drawn, with replacement, from iteration to iteration. As such, Equation 3 can be interpreted as stochastic gradient descent on the dual variable. Handcrafted priors, encoded in the constraint set C, in combination with stochastic optimization, where we randomly draw a different source experiment for each iteration of Equation 3, allow us to create high-fidelity images by only working with random subsets of the data. While encouraging, this approach relies on handcrafted priors encoded in the constraint set C. 
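As an illustration of the stochastic, projected Bregman-type iterations on the dual variable sketched above, the snippet below uses an l1-ball projection as one example of a handcrafted constraint set C. The exact update, step-length rule, and constraint sets of the actual imaging workflow are not reproduced here, so this should be read as a schematic sketch only.

```python
import numpy as np

def project_l1_ball(x, radius):
    """Euclidean projection onto the l1 ball, one example of a handcrafted constraint set C."""
    if np.abs(x).sum() <= radius:
        return x
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(x) + 1) > (css - radius))[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def bregman_imaging(A_list, y_list, n_iter, radius, step=1.0, rng=np.random):
    """Stochastic Bregman-style iterations on the dual variable, drawing one random
    source experiment per iteration (a sketch, not the authors' exact update)."""
    n = A_list[0].shape[1]
    x_dual = np.zeros(n)
    for _ in range(n_iter):
        k = rng.randint(len(A_list))            # random source experiment {y_k, A_k}
        A, y = A_list[k], y_list[k]
        x = project_l1_ball(x_dual, radius)     # primal iterate stays feasible
        x_dual = x_dual - step * A.T @ (A @ x - y)
    return project_l1_ball(x_dual, radius)
```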
Motivated by recent successes in machine learning and deep convolutional networks (CNNs) in particular, we follow Van Veen et al., Dittmer et al. and Wu and McMechan and propose to incorporate CNNs as deep priors on the model. Compared to handcrafted priors, deep priors defined by CNNs are less biased since they only require the model to be in the range of the CNN, which includes natural images and excludes images with unnatural noise. In its most basic form, this involves solving problems of the following type: In this expression, g(z, w) is a deep CNN parameterized by unknown weights w and z ∼ N is a fixed random vector in the latent space. In this formulation, we replaced the unknown model by a neural net. This makes this formulation suitable for situations where we do not have access to data-image training pairs but where we are looking for natural images that are in the range of the CNN. In recent work by Van Veen et al., it is shown that solving problem 5 can lead to good estimates for x via the CNN g(z, w) where w is the minimizer of problem 5 highly suitable for situations where we only have access to data. In this approach, the parameterization of the network by w for a fixed z plays the role of a non-linear redundant transform. While using neural nets as strong constraints may offer certain advantages, there are no guarantees that the model iterates remain physically feasible, which is a prerequisite if we want to solve non-linear imaging problems that include physical parameters. Unless we pre-train the network, early iterations while solving problem 5 will be unfeasible. Moreover, as mentioned by Van Veen et al., from solving inverse problems with deep priors may benefit from additional types of regularization. We accomplish this by combining hard handcrafted constraints with a weak constraint for the deep prior ing in a reformulation of the problem 5 into In this expression, the deep prior appears as a penalty term weighted by the trade-off parameter λ > 0. In this weak formulation, x is a slack variable, which by virtue of the hard constraint will be feasible throughout the iterations. The above formulation offers flexibility to impose constraints on the model that can be relaxed during the iterations as the network is gradually "trained". We can do this by either relaxing the constraint set (eg. by increasing the size of the TV-norm ball) or by increasing the trade-off parameter λ. So far, we used the neural network to regularize inverse problems deterministically by selecting a single latent variable z and optimizing over the network weights initialized by white noise. While this approach may remove bias related to handcrafted priors, it does not fully exploit documented capabilities of generative neural nets, which are capable of generating realizations from a learned distribution. Herein lies both an opportunity and a challenge when inverse problems are concerned where the objects of interest are generally not known a priori. Basically, this leaves us with two options. Either we assume to have access to an oracle, which in reality means that we have a training set of images obtained from some (expensive) often unknown imaging procedure, or we make necessary assumptions on the statistics of real images. In both cases, the learned priors and inferred posteriors will be biased by our (limited) understanding of the inversion process, including its regularization, or by our (limited) understanding of statistical properties of the unknown e.g. geostatistics. 
The latter may lead to perhaps unreasonable simplifications of the geology while the former may suffer from remnant imprint of the nullspace of the forward operator and/or poor choices for the handcrafted and deep priors. Contrary to approaches that have appeared in the literature, where the authors assume to have access to a geological oracle to train a GAN as a prior, we opt to learn the posterior through inversion deriving from the above combination of hard handcrafted constraints and weak deep priors with the purpose to train a network to generate realizations from the posterior. Our approach is motivated by Han et al. who use the Expectation Maximization (EM) technique to train a generative model on sample images. We propose to do the same but now for seismic data collected from one and the same Earth model. To arrive at this formulation, we consider each of the N source experiments with data y k as separate datasets from which images x k can in principle be inverted. In other words, contrary to problem 1, we make no assumptions that the y k come from one and the same x but rather we consider n N different batches each with their own x k. Using the these y k, we solve an unsupervised training problem during which • n minibatches of observed data, latent, and slack variables are paired into tuples with the latent variables z i's initialized as zero-centered white Gaussian noise, z i ∼ N (0, I). The slack variables x i's are computed by the numerically expensive Bregman iterations, which during each iteration work on the randomized source experiment of each minibatch. • latent variables z i's are sampled from p(z i |x i, w) by running l iterations of Stochastic Gradient Langevin Dynamics (SGLD, Welling and Teh ) (Equation 7), where w is the current estimate of network weights, and x i's are computed with Bregman iterations (Equation 8). These iterations for the latent variables are warm-started while keeping the network weights w fixed. This corresponds to an unsupervised inference step where training are created. Uncertainly in the z i's is accounted for by SGLD iterations. • the network weights are updated using {x i, z i} n i=1 with a supervised learning procedure. During this learning step, the network weights are updated by sample averaging the gradients w.r.t. w for all z i' s. As stated by Han et al., we actually compute a Monte Carlo average from these samples. By following this semi-supervised learning procedure, we expose the generative model to uncertainties in the latent variables by drawing samples from the posterior via Langevin dynamics that involve the following iterations for the pairs with ε the steplength. Compared to ordinary gradient descent, 7 contains an additional noise term that under certain conditions allows us to sample from the posterior distribution, p(z i |x i, w). The training samples x i came from the following Bregman iterations in the outer loop After sampling the latent variables, we update the network weights via for the z i's fixed with η steplength for network weights. Conceptually, the above training procedure corresponds to carrying out n different inversions for each data set y i separately. We train the weights of the network as we converge to the different solutions of the Bregman iterations for each dataset. As during Elastic-Averaging Stochastic Gradient Descent [10, Chaudhari et al. ], x i's have room to deviate from each other when λ is not too large. 
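A compact sketch of the alternating scheme described above is given below: latent codes are refreshed with a few SGLD steps against the current generator, and the generator weights are then updated on the resulting (x_i, z_i) pairs. The quadratic misfit plus standard-normal prior used as the SGLD potential, and the PyTorch-style generator interface g(z) with the weights held internally, are simplifying assumptions rather than the exact objective of the paper.

```python
import torch

def sgld_refresh(z, x, g, lam, eps, n_steps=5):
    """A few Stochastic Gradient Langevin Dynamics steps on a latent code z with the
    generator weights fixed (assumed potential: lam/2 * ||x - g(z)||^2 + ||z||^2 / 2)."""
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        energy = 0.5 * lam * ((x - g(z)) ** 2).sum() + 0.5 * (z ** 2).sum()
        grad, = torch.autograd.grad(energy, z)
        with torch.no_grad():
            z = z - 0.5 * eps * grad + eps ** 0.5 * torch.randn_like(z)
    return z.detach()

def update_generator(pairs, g, optimizer, lam):
    """Supervised update of the generator weights: a Monte Carlo average of the
    gradients of the lam-weighted misfit over the current (x_i, z_i) pairs."""
    optimizer.zero_grad()
    loss = sum(0.5 * lam * ((x - g(z)) ** 2).sum() for x, z in pairs) / len(pairs)
    loss.backward()
    optimizer.step()
```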
Our approach differs from elastic-averaging SGD in that the center variable is replaced by a generative network. We numerically conduct a survey where the source experiments contain severe incoherent noise and coherent linearization errors; here A_k = ∇F_k is the Jacobian and F_k(m) is the nonlinear forward operator with m the known smooth model and δm the unknown perturbation (image). The signal-to-noise ratio of the observed data is −11.37 dB. The results of this experiment are included in Figure 1, from which we make the following observations. First, as expected, the models generated from g(z, w) are smoother than the primal Bregman variable. Second, there are clear variations amongst the different g(z, w)'s, and these variations average out in the mean, which has fewer imaging artifacts. Because we were able to train g(z, w) as a "byproduct" of the inversion, we can compute statistical information from the trained generative model that may give us information about the "uncertainty". In Figure 2, we include a plot of the pointwise standard deviation, computed with 3200 random realizations of g(z, w), z ∼ p_z(z), and two examples of sample "prior" (before training) and "posterior" distributions. As expected, the pointwise standard deviation shows a reasonable sharpening of the probabilities before and after training through inversion. We also argue that the areas of high pointwise standard deviation coincide with regions that are difficult to image because of the linearization error and noise. In this work, we tested an inverse-problem framework which includes hard constraints and deep priors. Hard constraints are necessary in many problems, such as seismic imaging, where the unknowns must belong to a feasible set in order to ensure the numerical stability of the forward problem. Deep priors, enforced through adherence to the range of a neural network, provide an additional, implicit type of regularization, as demonstrated by recent work [2, Dittmer et al.], and corroborated by our numerical results. The resulting algorithm can be mathematically interpreted in light of expectation-maximization methods. Furthermore, connections to elastic averaging SGD highlight potential computational benefits of a parallel (synchronous or asynchronous) implementation. On a speculative note, we argue that the presented method, which combines stochastic optimization on the dual variable with on-the-fly estimation of the generative model's weights using Langevin dynamics, reaps information on the "posterior" distribution by leveraging multiplicity in the data and the fact that the data are acquired over one and the same Earth model. Our preliminary results seem consistent with the behavior to be expected from a "posterior" distribution. Figure 2 (b, c) shows sample "prior" (before training) and "posterior" distribution functions for two points in the model.
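The uncertainty visualization described above reduces to sampling the trained generator and computing pixel-wise statistics. The sketch below assumes a callable generator that maps a latent vector to a model-sized array, and uses the 3,200 samples mentioned in the text.

```python
import numpy as np

def pointwise_std(generator, latent_dim, n_samples=3200, seed=0):
    """Draw realizations x = g(z, w*), z ~ N(0, I), from the trained generator and
    return the pointwise standard deviation over the model grid."""
    rng = np.random.default_rng(seed)
    samples = np.stack([generator(rng.standard_normal(latent_dim))
                        for _ in range(n_samples)])
    return samples.std(axis=0)
```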
We combine hard handcrafted constraints with a deep prior weak constraint to perform seismic imaging and reap information on the "posterior" distribution leveraging multiplicity in the data.
1,772
scitldr
When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism,to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 53.7%, compared to 47.4% for the previous state-of-the-art model unaugmented with BERT embeddings. In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment. The ability to effectively query databases with natural language has the potential to unlock the power of large datasets to the vast majority of users who are not proficient in query languages. As such, a large body of research has focused on the task of translating natural language questions into queries that existing database software can execute. The release of large annotated datasets containing questions and the corresponding database SQL queries has catalyzed progress in the field, by enabling the training of supervised learning models for the task. In contrast to prior semantic parsing datasets , new tasks such as WikiSQL and Spider (b) pose the real-life challenge of generalization to unseen database schemas. Every query is conditioned on a multi-table database schema, and the databases do not overlap between the train and test sets. Schema generalization is challenging for three interconnected reasons. First, any text-to-SQL semantic parsing model must encode a given schema into column and table representations suitable for decoding a SQL query that might involve any of the given columns or tables. Second, these representations should encode all the information about the schema, including its column types, foreign key relations, and primary keys used for database joins. Finally, the model must recognize natural language used to refer to database columns and tables, which might differ from the referential language seen in training. The latter challenge is known as schema linking -aligning column/table references in the question to the corresponding schema columns/tables. While the question of schema encoding has been studied in recent literature (b), schema linking has been relatively less explored. Consider the example in Figure 1. It illustrates the challenge of ambiguity in linking: while "model" in the question refers to car_names.model rather than model_list.model, "cars" actually refers to both cars_data and car_names (but not car_makers) for the purpose of table joining. To resolve the column/table references properly, the semantic parser must take into account both the known schema relations (e.g. foreign keys) and the question context. Prior work (b) addressed the schema representation problem by encoding the directed graph of foreign key relations among the columns with a graph neural network. While effective, this approach has two important shortcomings. First, it does not contextualize schema encoding with the question, thus making it difficult for the model to reason about schema linking after both the column representations and question word representations have been built. 
Second, it limits information propagation during schema encoding to predefined relations in the schema such as foreign keys. The advent of self-attentional mechanisms in natural language processing shows that global reasoning is crucial to building effective representations of relational structures. However, we would like any global reasoning to also take into account the aforementioned predefined schema relations. In this work, we present a unified framework, called RAT-SQL, 1 for encoding relational structure in the database schema and a given question. It uses relation-aware self-attention to combine global reasoning over the schema entities and question words with structured reasoning over predefined schema relations. We then apply RAT-SQL to the problems of schema encoding and schema linking. As a , we obtain 53.7% exact match accuracy on the Spider test set. At the time of writing, this is the state of the art among models unaugmented with pretrained BERT embeddings. In addition, we experimentally demonstrate that RAT-SQL enables the model to build more accurate internal representations of the question's true alignment with schema columns and tables. Semantic parsing of natural language to SQL queries recently surged in popularity thanks to the creation of two new multi-table datasets with the challenge of schema generalization -WikiSQL and Spider (b). Schema encoding is not as challenging in WikiSQL as in Spider thanks to the lack of multi-table relations. Schema linking is relevant for both tasks but also more challenging in Spider due to the richer natural language expressiveness and less restricted SQL grammar observed in it. Indeed, the state of the art semantic parser on WikiSQL achieves a test set accuracy of 91.8%, significantly higher than the state of the art on Spider. The recent state-of-the-art models evaluated on Spider use various attentional architectures for question/schema encoding and AST-based structural architectures for query decoding. IRNet encodes the question and schema separately with LSTM and self-attention respectively, augmenting them with custom type vectors for schema linking. They further use the AST-based decoder of to decode a query in an intermediate representation (IR) that exhibits higher-level abstraction structure than SQL. Bogin et al. (2019b) encode the schema with a graph neural network and a similar grammar-based decoder. Both approaches highlight the importance of schema encoding and schema linking, but design separate feature engineering techniques to augment word vectors (as opposed to relations between words and columns) to resolve it. In contrast, the relational framework of RAT-SQL provides a unified way to encode arbitrary relational information among the inputs. Concurrently with this work, Bogin et al. (2019a) published Global-GNN, a different approach to schema linking for Spider which applies global reasoning between question words and schema columns/tables. Global reasoning is implemented by gating the graph neural network that computes the representation of schema elements using question token representations. This conceptually differs from RAT-SQL in two important ways: (a) question word representations influence the schema representations but not vice versa, and (b) like in other GNN-based encoding approaches, message propagation is limited to the schema-induced edges such as foreign key relations. In contrast, our Table 1 to reduce clutter. 
relation-aware transformer mechanism allows encoding arbitrary relations between question words and schema elements explicitly, and these representations are computed jointly using self-attention. We use the same formulation of relation-aware self-attention as. However, that work only applied it to sequences of words in the context of machine translation, and as such, their set of relation types only encoded the relative distance between two words. We extend their work and show that relation-aware self-attention can effectively encode more complex relationships that exist within an unordered sets of elements (in this case, columns and tables within a database schema as well as relations between the schema and the question). To the best of our knowledge, this is the first application of relation-aware self-attention to joint representation learning with both predefined and softly induced relations in the input structure. We now describe the RAT-SQL framework and its application to the problems of schema encoding and linking. First, we formally define the text-to-SQL semantic parsing problem and its components. Then, we introduce the relation-aware self-attention mechanism, our framework for jointly encoding relational structure between the question and the schema. Finally, we present our implementation of schema linking in the RAT-SQL framework. Given a natural language question Q and a schema S = C, T for a relational database, our goal is to generate the corresponding SQL P. Here the question Q = q 1... q |Q| is a sequence of words, and the schema consists of columns C = {c 1, . . ., c |C|} and tables T = t 1,..., t |T |. Each column name c i contains words c i,1,..., c i,|ci| and each table name t i contains words t i,1,..., t i,|ti|. The desired program P is represented as an abstract syntax tree T in the context-free grammar of SQL. Some columns in the schema are primary keys, used for uniquely indexing the corresponding table, and some are foreign keys, used to reference a primary key column in a different table. As described in Section 1, we would like to softly bias our schema encoding mechanism toward these predefined relations. In addition, each column has a type τ such as number or text. Schema linking aims at finding the alignment between question words and mentioned columns or tables. It's a crucial step for a parser to generate the right columns and tables in SQL. We model the latent alignment explicitly using an alignment matrix (Section 3.6), which is softly biased towards some string-match based relations, as inspired by. To support reasoning about relationships between schema elements in the encoder, we begin by representing the database schema using a directed graph G, where each node and edge has a label. We represent each table and column in the schema as a node in this graph, labeled with the words in the name; for columns, we prepend the type of the column to the label. For each pair of nodes x and y in the graph, Table 1 describes when there exists an edge from x to y and the label it should have. Figure 2 illustrates an example graph (although not all edges and labels are shown). Tree-structured decoder Self-attention layers (c) The decoder, choosing a column (Section 3.7) Figure 3: Overview of the stages of our approach. We now obtain an initial representation for each of the nodes in the graph, as well as for the words in the input question. For the graph nodes, we use a bidirectional LSTM (BiLSTM) over the words contained in the label. 
We concatenate the output of the initial and final time steps of this LSTM to form the embedding for the node. For the question, we also use a bidirectional LSTM over the words: where each of the BiLSTM functions first lookup word embeddings for each of the input tokens. The LSTMs do not share any parameters. At this point, we have representations c init i, t init i, and q init i. Similar to encoders used in some previous papers, these initial representations are independent of each other (uninfluenced by which other columns or tables are present). Now, we would like to imbue these representations with the information in the schema graph. We use a form of self-attention that is relation-aware to achieve this goal. In one step of relation-aware self-attention, we begin with an input x of n elements (where x i ∈ R dx) and transform each x i into y i ∈ R dx. We follow the formulation described in: where FC is a fully-connected layer, 1 ≤ h ≤ H, and W The r ij terms encode the relationship between the two elements x i and x j in the input. We explain how we obtain r ij in the next part. Application Within Our Encoder At the start, we construct the input x of |c| + |t| + |q| elements using c init i, t init i, and q init i: We then apply a stack of N relation-aware self-attention layers, where N is a hyperparameter. The weights of the encoder layers are not tied; each layer has its own set of weights. After processing through the stack of N encoder layers, we obtain Description of edge types present in the directed graph created to represent the schema. An edge exists from source node x ∈ S to target node y ∈ S if the pair fulfills one of the descriptions listed in the table, with the corresponding label. Otherwise, no edge exists from x to y. Column Column SAME-TABLE x and y belong to the same table. FOREIGN-KEY-COL-F x is a foreign key for y. FOREIGN-KEY-COL-R y is a foreign key for x. Table PRIMARY-KEY-F x is the primary key of y. x is a column of y (but not the primary key). Table Column PRIMARY-KEY-R y is the primary key of x. BELONGS-TO-R y is a column of x (but not the primary key). Table Table FOREIGN -KEY-TAB-F Table x has a foreign key column in y. FOREIGN-KEY-TAB-R Same as above, but x and y are reversed. FOREIGN-KEY-TAB-B x and y have foreign keys in both directions. We use c We define a discrete set of possible relation types, and map each type to an embedding to obtain r V ij and r K ij. We need a value of r ij for every pair of elements in x. In the subsequent sections, we describe the set of relation types we used. If x i and x j both correspond to nodes in G (i.e. each is either a column or table) with an edge from x i to x j, then we use the label on that edge (possibilities listed in Table 1) for r ij. However, this is not sufficient to obtain r ij for every pair of i and j. The graph G has no nodes corresponding to the question words, not every pair of schema nodes has an edge between them, and there is no self-edges (for when i = j). As such, we add more types beyond what is defined in Table 1: • If i = j, then COLUMN-IDENTITY or TABLE-IDENTITY. • • x i ∈ question, x j ∈ column ∪ table; or x i ∈ column ∪ table, x j ∈ question: see Section 3.6. • Otherwise, one of COLUMN-COLUMN, COLUMN -TABLE, TABLE-COLUMN, or TABLE-TABLE. 
3.6 SCHEMA LINKING To aid the model with aligning column/table references in the question to the corresponding schema columns/tables, we furthermore define relation types which indicate when parts of the question textually match the names of the columns and tables. Specifically, for all n-grams of length 1 to 5 in the question, we determine whether it exactly matches the name of a column/table (exact match); or whether the n-gram is a subsequence of the name of a column/table (partial match). Therefore, for the case where x i ∈ question, x j ∈ column ∪ table; or x i ∈ column ∪ table, x j ∈ question, we set r ij to QUESTION-COLUMN-M, QUESTION-TABLE-M, COLUMN-QUESTION-M or TABLE-QUESTION-M depending on the type of x i and x j. M is one of EXACTMATCH, PARTIALMATCH, or NOMATCH. In the end, we add 2 + 5 + (4 × 3) + 4 types (one term per bullet in Section 3.5) beyond the 10 in Table 1, for a total of 33 types. Memory-Schema Alignment Matrix Our intuition suggests that the columns and tables which occur in the SQL P will generally have a corresponding reference in the natural language question (for example, "cars" and "cylinders" in Figure 1). To capture this intuition in the model, we apply relation-aware attention as a pointer mechanism between every memory element in y and all the columns/tables to compute explicit alignment matrices L col ∈ R |y|×|C| and L tab ∈ R |y|×|T |: The memory-schema alignment matrix is expected to resemble the real discrete alignments, therefore should respect certain constraints like sparsity. For example, the question word "model" in Figure 1 should be aligned with car_names.model rather than model_list.model or model_-list.model_id. To further bias the soft alignment towards the real discrete structures, we add an auxiliary loss to encourage sparsity of the alignment matrix. Specifically, for a column/table that is mentioned in the SQL query, we treat the model's current belief of the best alignment as the ground truth. Then we use a cross-entropy loss, referred as alignment loss, to strengthen the model's belief: where Rel(C) and Rel(T) denote the set of relevant columns and tables that appear in the SQL P. Once we have obtained an encoding of the input, we used the decoder from to generate the SQL P. The decoder generates P as an abstract syntax tree in depth-first traversal order, by using an LSTM to output a sequence of decoder actions that (i) expand the last generated node in the tree according to the grammar, called APPLYRULE; or when necessary to complete the last node, (ii) chooses a column or table from the schema, called SELECTCOLUMN and SELECTTABLE. Formally, we have the following: where y is the final encoding of the question and schema from the previous section, and a <t are all previous actions. We update the LSTM's state in the following way: m t, h t = f LSTM ([a t−1 z t h pt a pt n ft], m t−1, h t−1 ) where m t is the LSTM cell state, h t is the LSTM output at step t, a t−1 is the embedding of the previous action, p t is the step corresponding to expanding the parent AST node of the current node, and n ft is the embedding of the current node type. We obtain z t using multi-head attention (with 8 heads) on h t−1 over y. is a 2-layer MLP with a tanh non-linearity. For SELECTCOLUMN, we computẽ and similarly for SELECTTABLE. We implemented our model using PyTorch . During preprocessing, the input of questions, column names and table names are tokenized and lemmatized with the StandfordNLP toolkit. 
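For concreteness, the relation-aware self-attention step at the heart of the encoder (Section 3.4) can be sketched as a single attention head in which learned relation embeddings modify both keys and values. The full model additionally uses multiple heads, an output projection, layer normalization, and a position-wise feed-forward sublayer, and the dimensions below are illustrative rather than the reported hyperparameters.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareAttentionHead(nn.Module):
    """Single-head relation-aware self-attention (after Shaw et al., 2018):
    relation embeddings r_K, r_V for each (i, j) pair modify keys and values."""
    def __init__(self, d_x, d_z, n_relations):
        super().__init__()
        self.q = nn.Linear(d_x, d_z, bias=False)
        self.k = nn.Linear(d_x, d_z, bias=False)
        self.v = nn.Linear(d_x, d_z, bias=False)
        self.rel_k = nn.Embedding(n_relations, d_z)
        self.rel_v = nn.Embedding(n_relations, d_z)

    def forward(self, x, rel):            # x: (n, d_x), rel: (n, n) relation-type ids
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(rel), self.rel_v(rel)                # (n, n, d_z)
        scores = (q.unsqueeze(1) * (k.unsqueeze(0) + rk)).sum(-1) / math.sqrt(q.size(-1))
        alpha = F.softmax(scores, dim=-1)                        # (n, n)
        return alpha.unsqueeze(-1).mul(v.unsqueeze(0) + rv).sum(dim=1)

x = torch.randn(7, 32)                    # question words + columns + tables
rel = torch.randint(0, 33, (7, 7))        # one of the 33 relation types per pair
out = RelationAwareAttentionHead(32, 64, 33)(x, rel)
```

The concrete dimensions and regularization used in the full encoder are given below.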
Within the encoder, we use GloVe word embeddings, held fixed in training except for the 50 most common words in the training set. All word embeddings have dimension 300. The bidirectional LSTMs have hidden size 128 per direction, and use the recurrent dropout method of with rate 0.2. We stack 8 relation-aware self-attention layers on top of the bidirectional LSTMs. Within the relation-aware self-attention layers, we set d x = d z = 256, H = 8, and use dropout with rate 0.1. The position-wise feed-forward network has inner layer dimension 1024. Inside the decoder, we use rule embeddings of size 128, node type embeddings of size 64, and a hidden size of 512 inside the LSTM with dropout rate 0.21. We used the Adam optimizer with β 1 = 0.9, β 2 = 0.999, and = 10 −9, which are defaults in PyTorch. During the first warmup_steps = max_steps/20 steps of training, we linearly increase the learning rate from 0 to 7.4 × 10 −4. Afterwards, the learning rate is annealed to 0, with formula 10 −3 (1 − step−warmup_steps max_steps−warmup_steps) −0.5. For all parameters, we used the default initialization method in PyTorch. We use a batch size of 20 and train for up to 40,000 steps. We use the Spider dataset (b) for all our experiments. As described by Yu et al. (2018b), the training data contains 8,659 examples, including 1,659 examples (questions and queries, with the accompanying schemas) from the Restaurants , GeoQuery , Scholar , Academic , Yelp and IMDB datasets. As Yu et al. (2018b) make the test set accessible only through an evaluation server, we perform most evaluations (other than the final accuracy measurement) using the development set. It contains 1,034 examples, with databases and schemas distinct from those in the training set. We report using the same metrics as Yu et al. (2018a): exact match accuracy on all examples, as well as divided by difficulty levels specified in the dataset. As in previous work, these metrics do not measure the model's performance on generating values within the queries. In Table 2a we show accuracy on the (hidden) test set for RAT-SQL and compare to all other approaches that are at or near state-of-the-art (according to the official dataset leaderboard). RAT-SQL outperforms all other methods that, like RAT-SQL, are not augmented with BERT embeddings. It even comes within 1.3% of beating the best BERT-augmented model. Since the typical improvement achieved by BERT augmentation is about 7% for all models, we are hopeful that adding such augmentation to RAT-SQL will also lead to state-of-the-art performance among BERT models. We also provide a breakdown of the accuracy by difficulty in Table 2b. As expected, performance drops with increasing difficulty. The overall generalization gap between development and test was strongly affected by the significant drop in accuracy (15%) on the extra hard questions. Table 2c shows an ablation study without RAT-based schema linking relations. Schema linking makes a statistically significant improvement to accuracy (p<0.001). The full Figure 4: Alignment between the question "For the cars with 4 cylinders, which model has the largest horsepower" and the database car_1 schema (columns and tables). model accuracy here differs from Table 2a because the latter shows the best single model from a hyper-parameter sweep (submitted for test evaluation) and the former gives the mean over ten runs. 
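The learning-rate schedule described above can be sketched as follows. The decay exponent is taken as +0.5 here so that the rate indeed anneals to zero as stated, and the peak value is reused for the decay phase for continuity; the constants should therefore be treated as illustrative rather than as the exact schedule.

```python
def learning_rate(step, max_steps, peak_lr=7.4e-4):
    """Warmup-then-decay schedule sketched from the description above: linear warmup
    over the first max_steps / 20 steps, then a polynomial decay to zero."""
    warmup_steps = max_steps / 20
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return peak_lr * (1.0 - progress) ** 0.5
```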
Alignment Recall from Section 3 that we explicitly represent the alignment between question words and table columns which is used during decoding for column selection. The existence of the alignment matrix provides a mechanism for the model to align words to columns, but the additional terms in the loss encourage it to actually act like an alignment. In our final model, the alignment loss terms do not make a difference in overall accuracy. This is surprising to us because in earlier development, the alignment loss did improve the model (statistically significantly, from 53.0% to 55.4%). We hypothesize that hyper-parameter tuning that caused us to increase encoding depth also eliminated the need for explicit supervision of alignment. An accurate alignment representation has other benefits as well, such as identifying question words to copy when a constant is needed (not part of the Spider dataset evaluation). In Figure 4 we show the alignment generated by our model on an example from the development set. 3 For the three key words that reference columns ("cylinders", "model", "horsepower"), the alignment matrix correctly identifies their corresponding column (cylinders, model, horsepower) and the table (cars_data) except it mistakenly aligns "model" to cars_data also instead of to car_names. The word "cars" aligns to the primary key of the cars_data table. Despite the abundance of research in semantic parsing of text to SQL, many contemporary models struggle to learn good representations for a given database schema as well as to properly link column/table references in the question. These problems are related: to encode & use columns/tables from the schema, the model must reason about their role in the context of a given question. In this work, we present a unified framework for addressing the schema encoding and linking challenges. Thanks to relation-aware self-attention, it jointly learns schema and question word representations based on their alignment with each other and predefined schema relations. Empirically, the RAT framework allows us to gain significant state of the art improvement on textto-SQL parsing. Qualitatively, it provides a way to combine predefined hard schema relations and inferred soft self-attended relations in the same encoder architecture. We foresee this joint representation learning being beneficial in many learning tasks beyond text-to-SQL, as long as the input has predefined structure. A THE NEED FOR SCHEMA LINKING One natural question is how often does the decoder fail to select the correct column, even with the schema encoding and linking improvements we have made. To answer this, we conducted an oracle experiment (see Table 3). For "oracle sketch", at every grammar nonterminal the decoder is forced to make the correct choice so the final SQL sketch exactly matches that of the correct answer. The rest of the decoding proceeds as if the decoder had made the choice on its own. Similarly, "oracle cols" forces the decoder to output the correct column or table at terminal productions. With both oracles, we see an accuracy of 99.4% which just verifies that our grammar is sufficient to answer nearly every question in the data set. With just "oracle sketch", the accuracy is only 70.9%, which means 73.5% of the questions that RAT-SQL gets wrong and could get right have incorrect column or table selection. Similarly, with just "oracle cols", the accuracy is 67.6%, which means that 82.0% of the questions that RAT-SQL gets wrong have incorrect structure. 
In other words, most of the questions the model gets wrong involve errors in both column selection and query structure, so both problems will remain important to address in future work.
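Since the oracle analysis above shows that column selection is a major source of errors, the string-match schema-linking relations of Section 3.6 are worth spelling out. The sketch below returns one of the three match types for a question/name pair, with lower-casing and a substring test for partial matches as simplifying assumptions.

```python
def name_match_relation(question_tokens, name_tokens, max_n=5):
    """Return EXACTMATCH / PARTIALMATCH / NOMATCH between a question and a single
    column/table name, following the n-gram rule described in Section 3.6."""
    name = " ".join(t.lower() for t in name_tokens)
    best = "NOMATCH"
    for n in range(1, max_n + 1):
        for i in range(len(question_tokens) - n + 1):
            gram = " ".join(t.lower() for t in question_tokens[i:i + n])
            if gram == name:
                return "EXACTMATCH"
            if gram in name:
                best = "PARTIALMATCH"
    return best

print(name_match_relation("which model has the largest horsepower".split(),
                          ["model"]))      # EXACTMATCH
```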
State of the art in complex text-to-SQL parsing by combining hard and soft relational reasoning in schema/question encoding.
1,773
scitldr
As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily. Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them. In this work, we make a step to bridge neural networks with human-like learning capabilities. For this, we propose a model with a growing and open-bounded memory capacity that can be accessed based on the model’s current demands. To test this system, we introduce a continual learning task based on language modelling where the model is exposed to multiple languages and domains in sequence, without providing any explicit signal on the type of input it is currently dealing with. The proposed system exhibits improved adaptation skills in that it can recover faster than comparable baselines after a switch in the input language or domain. In a classic cartoon by Gary Larson, a student raises his hand to ask the teacher: "Mr. Osborne, may I be excused? My brain is full." . We laugh at this situation because we know it is absurd. Human brains don't just get full. Instead, they seem to be able to keep in their long-term memory massive amounts of information encoding well-acquired knowledge and skills. Furthermore, the information stored in memory is not necessarily relevant at all times. For instance, a person may have a phone call in French in the morning, then go about her daily errands in German, and later write an email in English. Different linguistic knowledge will be required for each of these situations, and context alone, rather than some explicit signal, will dictate what is needed at each given moment. Vanilla neural network models have been successfully deployed in various applications in the past. However, they rely on fixed sized memories and suffer from the problem known as "catastrophic forgetting" , which refers to the fact that previously acquired information is quickly forgotten as novel skills need to be mastered. Earlier work attempted to correct this problem by looking for available capacity on a fixed-sized network that would allow encoding a new solution without affecting previously learned tasks (; ; Serrà et al., 2018; ; ;). The problem with this approach is that eventually, the system will run out of available capacity. Instead, here we argue for developing models that can grow their internal capacity. While some work has also relied on growing the model to face catastrophic forgetting (; ;), they all rely, to the best of our knowledge, on an explicit signal identifying the task that the system is currently solving. Indeed, most work dealing with catastrophic forgetting has evaluated the models on settings often making unrealistic assumptions. Not only they typically provided the model with an explicit identifier for the task at hand, but also tasks featured unnatural properties, such as scrambled pixels, or categories that were incrementally added, but presented sequentially on blocks once and for all, and never encountered again during training. Only recently, some work has started tackling continual learning in a more realistic task-agnostic way . Yet, there are no standard publicly available datasets that can help the evaluation of continual learning systems on more natural settings. In this paper, we make a two-fold contribution towards task agnostic continual learning. First, we introduce a recurrent neural network that can grow its memory by creating new modules as training progresses. 
Rather than using all modules simultaneously, or indexing them based on a task identification signal, our model learns to weight their contributions to adapt to the current context. Second, we introduce to the community a multilingual/multidomain language modelling task with switching domains that we hope can fit this bill. We propose two variants of it. The first is a character-based language modelling benchmark with text written in 5 different languages that randomly switch between one another. The second one is a word-based language modelling task, where the text oscillates between 4 different domains. No segmentation signal is given when there is a switch, making the models having to discover it autonomously while they are evaluated for their adaptation skills. Our experimental show that our system can switch between different domains faster than comparable neural networks. Furthermore, our model is very general because it does not make any assumption about the type of underlying neural network architecture and thus, it can easily be adopted for tackling other tasks in conjunction with any other neural network system. Growth in neural networks has been explored with different perspectives. Here, we present a discussion of the possible avenues for developing neural networks with unbounded memory. 1. Growth of layers: Early work on Neural Networks used this method to reduce the amount of computational complexity or for escaping local minima . The goal, back then, was using the smallest possible number of hidden units. Here, instead, we are interested in allowing neural networks to grow for endowing them with larger memory capacity. In this sense, this strategy seems limited because all units remain fully connected at all time, forcing the network to access all memories simultaneously. 2. Growth of architecture: A different type of growth could be the one dictated by a different model that decides the characteristics of the learning system, including how many units to put in it. Neural architecture search and, particularly, neuro-evolution provide good examples of this. Note, however, that this type of growth is different from the main problem that we are dealing with here, in which a model needs to be able to extend itself. 3. Learned, structured growth: Models, like the Stack-RNNs permit the model to create new units, which are placed on a stack data structure, allowing it thus to have a flexible memory to process different problem instances of varying sizes. The model itself learns how many computational resources to use, but so far this has been demonstrated only on toy problems like sequence memorization. Moreover, Stack-RNNs are also unable to quickly recover "memories" from distant past because it would imply cycling through the whole stack. 4. Sparse growth: This is the strategy that we focus on in this paper. The network is grown by blocks or modules. One potential drawback with this strategy is the linear increase of time complexity as the network grows. To prevent this, here we simply limit the maximum number of modules that are kept alive at any given time. Other, more sophisticated, options could employ a Hierarchical Softmax operation over the modules or Winner-Takes-All types of rules , essentially searching for just the right memories to answer the current situation. The proposed Growing Long-Term Memory Network (GLTMN) is composed of modules operating in concert to compute the network's output by means of a weighted combination of their predictions. 
As such, it belongs to a family of architectures that, depending on whether the predictions are additively or multiplicatively combined, are referred as Mixture-of-Experts or Product-of-Experts . Before combining them, all the module's predictions are weighted by a vector of coefficients that can be produced by another module that is jointly trained. Or system differs in the following ways. First, the modules in our system are subdivided into two main blocks: the short-term memory (STM) and long-term memory (LTM). These two components differ in the following ways. First, while the STM has a fixed number of modules, the LTM grows incrementally, only being limited in size by the hosting computer memory capacity. Second, while predictions are computed as a standard MoE architecture, using both LTM and STM modules, only the latter ones gets trained on incoming experience. Mixture weights that encompass both LTM and STEM are kept as a separate parameter of the system that is continually trained based on recent experience. Modules in the STM are consolidated into LTM whenever a trigger point is reached (such as, after a given number of examples have been processed) choosing the module that has been contributing the most to the output according to its corresponding mixture weight recent history. At this point, the module is removed from STM and frozen into LTM. Similarly, LTM modules can be reinstated back into STM by picking the module with the highest contribution weight and copying back into STM. When a maximum size is reached, the LTM module that was reinstated into STM is removed, thus keeping the overall size of the model constant (see Figure 1 for a general sketch). Learning More formally, our model is composed of a set of modules M = {M 1, . . ., M n}, where M 1,..., M l are the modules in the LTM, and {M l+1, . . ., M n} are the ones in STM. At the beginning of training all modules belong to STM, thus l = 0. The system computes its predictions as follows. When an input x (with target y) is observed, it is fed to all modules M 1...n, obtaining log-linear output vectorsŷ. An additional vector of mixture weights w ∈ R n is used to linearly combine them. The output of the full model y is computed as a linear combination of the individual modules outputs weighted by the parameters w i: Note that since we are combining the model unnormalized predictions before the application of the softmax, we are effectively computing a geometric combination of each individual module's unnormalized probabilities:. Thus, this model can be seen as a Product of Experts. Compared to a Mixture of Experts, this approach does not require to normalize the output of each individual model, thus being much more efficient to compute. The predictions are then used to compute the cross-entropy loss L(ŷ, y), which is then used to backpropagate both into the STM modules and the mixture weights, and then optimized independently through gradient descent. In particular, to swiftly adapt the weights, we repeat the training update for w multiple (k = 100) times, whereas we only do it once for the modules. Note that in order to compute the gradients of the loss with respect to the weights after each update there is no need to recompute each module's output and thus, each step is not expensive to compute. Memory management Every T processed examples, the system consolidates a STM module into LTM. 
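The log-linear combination and the repeated refitting of the mixture weights described above can be sketched as follows; a single next-token target is used for brevity, and the optimizer and learning rate are illustrative choices rather than the reported setup.

```python
import torch
import torch.nn.functional as F

def combined_prediction(module_logits, weights):
    """Weighted sum of each module's unnormalized predictions before the softmax,
    i.e. a geometric (product-of-experts) combination of the module distributions."""
    # module_logits: (n_modules, vocab_size), weights: (n_modules,)
    return torch.einsum("m,mv->v", weights, module_logits)

def adapt_weights(module_logits, target, weights, lr=0.1, k=100):
    """Refit only the mixture weights on the current example, repeating the update
    k times; the cached module outputs are reused, so each step stays cheap."""
    module_logits = module_logits.detach()          # treat module outputs as constants
    weights = weights.clone().requires_grad_(True)
    opt = torch.optim.SGD([weights], lr=lr)
    for _ in range(k):
        logits = combined_prediction(module_logits, weights).unsqueeze(0)  # (1, vocab)
        loss = F.cross_entropy(logits, target)      # target: tensor([token_id])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return weights.detach()
```

The consolidation step mentioned above, which moves one STM module into LTM every T processed examples, is described next.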
For this, it picks the module that has been most active in recent experience, as measured by the absolute mean weight value over the past examples (we use the last 20 batches). To limit the amount of computational power needed, we restricted the maximum total number of modules to n = 30. When this limit is reached, another module is removed from LTM and reinstated back into STM for further training. We pick the module with the highest mixture weight in absolute value. That is, while STM modules are selected for consolidation based on their past importance so they can be preserved for future use, LTM modules are reinstated based on their present relevance, so they can be further adapted. Note that, despite that, in practice the model has a memory limit, its potential to allocate new memory is unbounded, not unlike modern computer systems where physical memory limits do not affect the way programs are written. In the future, the memory efficiency could be improved by incorporating mechanisms such as distillation to compress the information stored across different modules into a single one. Last, but not least, note that the above-described model does not make any assumption about the type of architecture used. In our following experiments, we evaluate the model using an LSTM architecture, but there is nothing preventing it to be applied to feed-forward, convolutional or other types of networks; or even a mixture thereof. In this work we instantiate the GLTMN for an online language modelling task. For this we adopt double-layered LSTM networks as modules. Each of these modules observe text as small sequences of tokens, as is standard when applying the backpropagation through time algorithm, and has to predict each of the upcoming tokens by acting in concert with all the other modules 1. When the system is not allowed to use the LTM, our model reduces to a Products of Experts (PoE) where the mixture coefficients are given by an independent weight vector parameter. To make this architecture comparable to previously proposed neural mixture models (e.g.), we consider as a baseline a PoE model where the weights are computed by another network. In particular, we used another LSTM network that looks at the current batch and produces the weights. Our experimental setup aims to establish whether our model is able to adapt to a continuous stream of circumstances in which it needs to develop, improve and use a wide range of skills. While most previous work interested in continual learning considered sequences of tasks that were unambiguously identified by a marker given as an extra input to the model, here we are interested in a more realistic setup where only context can dictate which skills are required at a given time. For this, we introduce two lifelong language modelling tasks, where the model is continually exposed to a novel linguistic stream that switches between different languages or domains without any explicit marker signalling the change. More specifically, we propose two language modelling benchmarks: One is wordlevel and multi-domain whereas the other is character-level and multilingual. Both benchmarks feature conflicting learning signals when moving between domains or languages, making the network susceptible to catastrophic forgetting. A good model should be very good at transitioning between languages or domains, while still maintaining good overall performance for (online) language modelling. We are interested in modelling the continual adaptation to incoming non-i.i.d. 
data, a situation that is closer to the learning experience of any human being. Therefore, the traditional train-test split approach is not adequate here. Instead, we adopt an online learning paradigm. This means that at each time step the model receives an instance x t and makes a predictionŷ t. Then, the true target y t will be observed, with the model incurring in a loss L(ŷ t, y t). After reporting this loss, the model is trained, possibly for more than a single iteration, on the just observed example. The goal is minimizing the cumulative loss English, French, Spanish, German and Czech because they all have similar character sets, while also showing interesting linguistic variability thanks to belonging to three different Indo-European branches: Romance (French and Spanish), Germanic (English and German) and Slavic (Czech). Compared to earlier multilingual corpora , our dataset was carefully constructed to include only linguistically valid character sets, in order to prevent non-linguistic noise from interfering with our experiments. For this, we removed all lines from the input that containing characters appearing less than 100 times on the full orpus. The ing character vocabulary is no bigger than 215 characters. The second dataset is an English multi-domain dataset. For this, we used four different source corpora: news (same as above), europarl , the Toronto Book Corpus and Wikipedia . We kept in the vocabulary the top 25K words for each corpus, which after merging yielded a vocabulary size of 58K words. We then split the corpus in fragments coming from different languages or domains with lengths randomly sampled from a (truncated) exponential distribution. Thanks to the memorylessness property of this distribution, it is virtually impossible to estimate when the next switch is going to happen. For the multilingual dataset, we extracted 1M and 10M-characters-long randomly alternating combinations of 100 sequences, 10 for each language, with lengths sampled from a (truncated) exponential distribution with means λ = 10k and λ = 100k characters, respectively, and a 10M-characters-long one with 1000 sequences with mean length of 10k characters. For the multi-domain dataset we followed the same procedure, extracting 100 alternating sequences with mean lengths of λ = 10k and λ = 20k, for a total of 1M and 2M words. We used a smaller corpus in this last case because to allow for faster experimentation as the models have now to predict over a larger vocabulary, and thus they require more training time. We set the maximum size of the GLTMN to 30 LSTM modules having two layers and 200 hidden units each. We first compared it to the performance of a single equally-sized LSTM network, allowing us to measure the advantage of any mixture model with respect to any given single module. Second, we included as a further baseline another double-layered LSTM with 1300 units which has the same amount of parameters as our fully grown model on the multilingual task. As reference points, we also trained independent LSTMs, one for each domain or language (thus, using for diagnostic purposes a "forbidden" domain supervision signal), enabling us to compare the performance of our model to a situation where there is no forgetting from conflicting learning signals, but also where there is no possibility of transferring learned representations across possibly related domains. Finally, we evaluated a PoE, as described in Section 3.1. 
This PoE baseline is a model whose mixture coefficients are produced by a simple LSTM module with 10 hidden units. We also experimented with passing the weights through a softmax layer, thus enforcing a convex rather than a linear combination of the modules, but this did not prove useful for either the GLTMN or the PoE. We tuned the hyperparameters of all the models on a development set for each corpus. Among the hyperparameters that we considered for the GLTMN was the size of the STM, choosing among 10, 15, 20 or all 30 modules. In the latter case, the model reduces to a PoE with weights parametrized by a trainable vector of coefficients. Using 20 modules for the STM proved to be the best-performing strategy. Details of the hyperparameter search for the models are included in Appendix A.1. We are interested in measuring whether the growing model brings any advantage at recovering information that the network had learned before, while remaining competitive in terms of overall performance. To measure these aspects, we propose the following metrics: Online perplexity: This is the general perplexity over the data measured during model training. Note that since the task is framed as an online learning one, the training loss serves as a test measure because no example is seen twice. Post-switch confusion: When a distribution shift is experienced, a neural network that suffers from forgetting typically produces a spike in its loss. With this measure we aim at capturing how large this spike was. Let L_avg be the average cross-entropy loss of the model between a switch occurring at t = 0 and the following one, and let t_avg be the time step at which the model touches this level for the first time. Then, we define confusion as the number of time steps, weighted by the relative perplexity increase, during which the model remains above the average loss for that sequence. We also complement this measure with plots that illustrate this process. In order to observe the asymptotic behaviour of the models, we restrict our analysis by reporting measures pertaining only to the second half of the data. We report our experimental results for both the multilingual task and for the multi-domain data in Table 1. Results disaggregated by domain or language are reported in Appendix A.2. There are several aspects that are worth pointing out in these results. First, we can see that the results in overall online perplexity are mixed. The PoE with LSTM weights scores the biggest number of wins (two), followed by ours and a plain LSTM, winning one each. In general, we can conclude that there is a slight advantage for the PoE model, which may be due to its higher learning capacity during stable non-switching periods, but the GLTMN performs almost on par with it. Moreover, when looking at the surprisal measures, we can see that the GLTMN excels at recovering after changes in the distribution. This observation can be graphically confirmed by looking at Figure 2. As can be seen, the GLTMN recovers much faster than the stand-alone LSTM and the PoE models. It does spike, however, on the first batch, although this is completely expected. Recall that the weights of the model are optimized according to each new batch. Thus, before the first batch of the new data was observed, it was not possible for it to adapt. We also note that while on the multilingual data an LSTM trained independently on each different language (first row in the table) exhibits the lowest perplexity, this does not hold in the multi-domain corpus.
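For reference, the post-switch confusion measure reported above can be computed along the following lines (a sketch; the precise weighting is an assumption based on the verbal definition given earlier, not the exact formula):

```python
import math

def post_switch_confusion(losses):
    """One plausible implementation of post-switch confusion for one segment.

    losses: per-batch cross-entropy losses from the switch (t = 0) up to,
            but not including, the next switch.
    """
    l_avg = sum(losses) / len(losses)
    confusion = 0.0
    for loss in losses:
        if loss <= l_avg:                           # t_avg: first return to L_avg
            break
        confusion += math.exp(loss - l_avg) - 1.0   # relative perplexity increase
    return confusion
```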
These results suggest that there is limited room for transferring knowledge in the multilingual case, in consonance with previous findings. In contrast, the multi-domain setting provides plenty of opportunities for transferring knowledge across domains, and thus task-agnostic systems can benefit from them. The main takeaways are that enhancing the neural network with a modular structure (e.g. PoE, GLTMN) brings an improvement to the general performance, while the LTM component helps the network to recover faster after a task switch. To get some further insights into the workings of the model, we analyzed its weight vectors in the multilingual and the multi-domain tasks (λ = 10k) for the last seven exposures to each language or domain. We then averaged all the weight vectors on every batch for each of these seven linguistic sequences. In Figure 3 we show all these vectors sorted by the corresponding type of sequence, while weight indices were sorted by means of a clustering scheme to highlight modules that are working in tandem. Note that any of these modules could be in LTM or STM at any given time. In the multilingual case, we can see that there are a large number of modules allocated both to Czech and German. This may be explained by the fact that these are the two hardest domains as measured by the disaggregated perplexities (see Appendix for details). While there is some transfer of modules between languages, this remains quite limited, consistently with our observation above that models trained independently on each linguistic stream reach the lowest online perplexity. In contrast, for the multi-domain corpus, the clustering is much less clear. While the weights for the book corpus and the europarl domains seem to be mostly anti-correlated and quite idiosyncratic, the modules acting on the news and wiki domains seem to be more distributed. This may also explain our finding that knowledge transfer helps in this task. Our work is related to the efforts aimed at solving the problem of catastrophic forgetting in neural networks, which have received considerable attention in the machine learning community. These approaches can be categorized into mainly two different branches: those that keep the neural network size fixed and attempt to correct the learning procedure to avoid forgetting (e.g. Serrà et al., 2018), and those that allow the system to grow new modules to account for novel tasks. Our work is closer to the second branch. Also close to our approach are mixture-of-experts systems, in particular the product-of-experts approach. Other models with unbounded memory were proposed in the past, although not all of them were studied in the context of continual learning, as we are doing here, and those that were studied in that setting assumed the training tasks to be properly identified, as previously noted. Similar to our work are models enhanced with a memory component, such as memory networks, stack RNNs and neural Turing machines, which show that having a structured memory helps with learning longer dependencies and remembering. While our approach has some similarities, the proposed model saves fully connected modules, which can store in memory not only data but also the algorithms learned by the modules. The interaction between recent and remote memory has been extensively studied in the neuroscientific literature.
We do not claim any direct connection between our model and how the human brain works, but we borrow the terms associated with consolidation and reinstatement of memories, as they fit quite neatly into our context. Finally, our problem formulation is an instance of neural-network-assisted language modelling and character-level language modelling. Some models conceived for language modelling can extend their memories to support fast-changing statistics from the recent past, as in cache models. Also, some other work has extended these models towards the multilingual setting (Östling &). Here, we adapt these problems to a life-long learning setup where different languages can be conceived as different tasks. Differently from cache models, context switching implies retrieving a vast set of skills from a relatively distant past. We believe that developing more flexible forms of artificial intelligence will probably require flexible memory capabilities that can only be delivered by models capable of growth. Here we have proposed a method based on growing full-fledged modules over time. We explored a particular instantiation of this architecture in which modules are grown at a constant rate and consolidated into a long-term memory (LTM). Once the model has reached a maximum size, memories can still be consolidated into LTM by reinstating LTM modules back into STM (see Figure 1). Furthermore, we introduced to the community two lifelong language modelling tasks: one character-based and multilingual, the other word-based over multiple domains. Our experiments confirm the efficacy of our Growing LTM model, showing that it can learn to adapt much faster than comparable baselines without suffering in terms of its overall performance. The proposed system is very flexible, allowing it to be used with any neural network architecture. While here we have studied it in the lifelong language modelling setting, we believe that the system will also show promising results in other domains with similar requirements, such as robotics, where the model can learn to deal with different kinds of terrains, or image recognition, where it can learn different kinds of visual information depending on the contextual requirements. In the future, mechanisms that exploit the structure of the input data for associating it with the relevant sets of models can be explored. Furthermore, we plan to study mechanisms that would allow the model to decide when to grow, rather than keeping a constant schedule. In the long term, the model should be capable of deciding how to structure its long-term memory and whether or not to grow it, as Stack-RNNs do to grow the working memory. Moreover, we are interested in exploring how communication between memories can be enabled through a central routing mechanism, in a similar fashion to a previously proposed model. To conclude, in this work we have taken a step, and we hope that more will follow, in providing neural networks with flexible memory structures. We expect that further pursuing this goal will pave the way towards developing more general learning systems and, fundamentally, that in the future neural networks will no longer need to be excused from class just because their weights are full.
We introduce a continual learning setup based on language modelling where no explicit task segmentation signal is given and propose a neural network model with growing long term memory to tackle it.
1,774
scitldr
We propose to tackle a time series regression problem by computing temporal evolution of a probability density function to provide a probabilistic forecast. A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for temporal evolution of a probability density function. We use a softmax layer for a numerical discretization of a smooth probability density functions, which transforms a function approximation problem to a classification task. Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution. A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented. The evaluation of the proposed algorithm on three synthetic and two real data sets shows advantage over the compared baselines. Application of the deep learning for manufacturing processes has attracted a great attention as one of the core technologies in Industry 4.0 BID15. In many manufacturing processes, e.g. blast furnace, smelter, and milling, the complexity of the overall system makes it almost impossible or impractical to develop a simulation model from the first principles. Hence, system identification from sensor observations has been a long-standing research topic BID24. Still, when the observation is noisy and there is no prior knowledge on the underlying dynamics, there is only a very limited number of methods for the reconstruction of nonlinear dynamics. In this work, we consider the following class of problems, where the system is driven by a complex underlying dynamical system, e.g., ∂y ∂t = F(y(t), y(t − τ), u(t)).Here, y(t) is a continuous process, F is a nonlinear operator, τ is a delay-time parameter, and u(t) is an exogenous forcing, such as control parameters. At time step t, we then observe a noisy measurement of y(t) which can be defined by the following noise model DISPLAYFORM0 where ν t is a multiplicative and t is an additive noise process. In FORMULA0 and FORMULA1, we place no assumption on function F, do not assume any distributional properties of noises ν t and t, but assume the knowledge of the control parameters u(t).Since the noise components, ν t and t, are stochastic processes, the observationŷ t is a random variable. In this work, we are interested in computing temporal evolution of the probability density function (PDF) ofŷ, given the observations up to time step t, i.e., p(ŷ t+n | Y 0:t, U 0:t+n−1) for n ≥ 1, where Y 0:t = (ŷ 0, · · ·,ŷ t) is a trajectory of the past observations and U 0:t+n−1 = (u 0, · · ·, u t+n−1) consists of the history of the known control actions, U 0:t−1, and a future control scenario, U t:t+n−1. We show, in Section 3, a class of problems, where simple regression problem of forecasting the value ofŷ t+n is not sufficient or not possible, e.g., chaotic systems. Note that the computation of time evolution of a PDF has been a long-standing topic in statistical physics. For a simple Markov process, there are well-established theories based on the Fokker-Planck equation. However, it is very difficult to extend those theories to a more general problem, such as delay-time dynamical systems, or apply it to complex nonlinear systems. Modeling of the system has been extensively studied in the past, in particular, under the linearity assumptions on F and certain noise models, e.g., Gaussian t and ν t = 1 in. The approaches based on auto-regressive processes BID18 and Kalman filter BID9 are good examples. 
Although these methods do estimate the predictive probability distribution and enable the computation of the forecast uncertainty, the assumptions on the noise and linearity in many cases make it challenging to model real nonlinear dynamical systems. Recently, a nonlinear state-space model based on the Gaussian process, called the Gaussian Process State Space Model (GPSSM), has been extended for the identification of nonlinear system BID5 BID4. GPSSM is capable of representing a nonlinear system and is particularly advantageous when the size of the data set is relatively small that it is difficult to train a deep learning model. However, the joint Gaussian assumption of GPSSM may restrict the representation capability for a complex non-Gaussian noise. A recent success of deep learning created a flurry of new approaches for time series modeling and prediction. The ability of deep neural networks, such as RNN, to learn complex nonlinear spatiotemporal relationships in the data enabled these methods to outperform the classical time series approaches. For example, in the recent works of BID20 BID11; BID3, the authors proposed different variants of the RNN-based algorithms to perform time series predictions and showed their advantage over the traditional methods. Although encouraging, these approaches lack the ability to estimate the probability distribution of the predictions since RNN is a deterministic model and unable to fully capture the stochastic nature of the data. To enable RNN to model the stochastic properties of the data, BID2 augmented RNN with a latent random variable included in the hidden state and proposed to estimate the ing model using variational inference. In a similar vein, the works of BID0; BID14 extend the traditional Kalman filter to handle nonlinear dynamics when the inference becomes intractable. Their approach is based on formulating the variational lower bound and optimizing it under the assumption of Gaussian posterior. Another recent line of works enabled stochasticity in the RNN-based models by drawing a connection between Bayesian variation inference and a dropout technique. In particular, BID6 showed that the model parameter uncertainty (which then leads to uncertainty in model predictions), that traditionally was estimated using variational inference, can be approximated using a dropout method (a random removal of some connections in the network structure). The prediction uncertainty is then estimated by evaluating the model outputs at different realizations of the dropout weights. Following the ideas of BID6, BID27 proposed additional ways (besides modeling the parameter uncertainty) to quantify the forecast uncertainty in RNN, which included the model mis-specification error and the inherent noise of the data. We propose an RNN-model to compute the temporal evolution of a PDF, p(ŷ t+n | Y 0:t, U 0:t+n−1).To avoid the difficulties in directly estimating the continuous function, we use a numerical discretization technique, which converts the function approximation problem to a classification task (see Section 2.2). We note that the use of the traditional cross-entropy (CE) loss in our formulated classification problem can be problematic since it is oblivious to the class ordering. To address this, we additionally propose two regularizations for CE to account for a geometric proximity between the classes (see Sections 2.2.1 and 2.2.2). 
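To make the discretization concrete, the following minimal sketch (an illustration, not the authors' code; the uniform bin grid is an assumption matching the experiments, and reading the moments off the bin centres mirrors the expressions in Section 2.4) maps a continuous target to K class labels and recovers the predictive mean and standard deviation from a softmax output:

```python
import numpy as np

def make_bins(y, K):
    """Uniform bin edges covering the observed range of the target."""
    edges = np.linspace(y.min(), y.max(), K + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return edges, centers

def to_class(y, edges):
    """Map continuous targets to bin indices 0..K-1 (classification labels)."""
    return np.clip(np.digitize(y, edges) - 1, 0, len(edges) - 2)

def moments_from_softmax(p, centers):
    """Predictive mean and standard deviation from a softmax output p over K bins."""
    mean = np.sum(p * centers)
    var = np.sum(p * (centers - mean) ** 2)
    return mean, np.sqrt(var)
```

With the targets binned this way, the regression problem becomes a K-way classification problem whose softmax output serves as a discrete approximation of the predictive density.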
The probability distribution of one-step-ahead prediction, p(ŷ t+1 | Y 0:t, U 0:t) can now be simply estimated from the output softmax layer of RNN (see Section 2.2), while to propagate the probability distribution further in time, for a multiple-step forecast, we propose a sequential Monte Carlo (SMC) method (see Section 2.4). For clarity, we present most derivations for univariate time series but also show the extension to multivariate data in Section 2.3. We empirically show that the proposed modeling approach enables us to represent a continuous PDF of any arbitrary shape, including the ability to handle the multiplicative data noises in. Since the probability distribution is computed, the RNN-model can also be used for a regression task by computing the expectation (see Section 2.4). Hereafter, we use DE-RNN for the proposed RNN model, considering the similarity with the density-estimation task. In summary, the contributions of this work are as follows: (i) formulate the classical regression problem for time series prediction as a predictive density-estimation problem, which can be solved by a classification task (ii) propose an approach to compute the time evolution of probability distribution using SMC on the distributions from DE-RNN (iii) proposed two regularizations for CE loss to capture the ordering of the classes in the discretized PDF. We evaluated the proposed algorithm on three synthetic and two real datasets, showing its advantage over the baselines. Note that DE-RNN has a direct relevance to a wide range of problems in physics and engineering, in particular, for uncertainty quantification and propagation BID26. In this Section we present the details of the proposed algorithm using a specific form of RNN, called Long Short-Term Memory (LSTM) network. Although in the following presentation and experiments we used LSTM, other networks, e.g., GRU BID1, can be used instead. The Long Short-Term Memory network (LSTM) BID10 BID8 consists of a set of nonlinear transformations of input variables z t ∈ R m;Gating functions: DISPLAYFORM0 Internal state: DISPLAYFORM1 DISPLAYFORM2 Here, ϕ S and ϕ T, respectively, denote the sigmoid and hyperbolic tangent functions, L is a linear layer, which includes a bias, s t ∈ R Nc is the internal state, h t ∈ R Nc is the output of the LSTM network, N c is the number of the LSTM units, and a b denote a component-wise multiplication. Interesting observation can be made about equation. We can re-write equation FORMULA3 as DISPLAYFORM3 for some functions f and g. With a simple re-scaling, this equation can be interpreted as a first-order Euler scheme for a linear dynamical system, DISPLAYFORM4 Thus, LSTM can be understood as a series expansion, where a complex nonlinear dynamical system is approximated by a combination of many simpler dynamical systems. Usually, LSTM network is supplemented by feed-forward neural networks, e.g., DISPLAYFORM5 in which x t is the input feature. Using, we can denote by Ψ e and Ψ d a collection of the operators from input to internal state (encoder) and from internal state to the output P (decoder): DISPLAYFORM6 In this Section we first consider the problem of modeling the conditional PDF, p(ŷ t+1 | Y 0:t, U 0:t). 
Althoughŷ t+1 has a dependence on the past trajectories of bothŷ and u, using the "state space" LSTM model argument in Section 2.1, the conditional PDF can be modeled as a Markov process DISPLAYFORM0 Hence, to simplify the problem, we consider a task of estimating the PDF of a random variablê y, given an input x, i.e., p(ŷ|x). The obtained can then be directly applied to the original problem of estimating p(ŷ t+1 |s t). DISPLAYFORM1 Then, a discrete probability distribution can be defined DISPLAYFORM2 where it is clear that p(k|x) is a numerical discretization of the continuous PDF, p(ŷ|x). Using the LSTM from Section 2.1, the discrete probability p(k|x) can be modeled by the softmax layer (P) as an output of Ψ d in such that DISPLAYFORM3 Thus, the original problem of estimating a smooth function, p(ŷ|x), is transformed into a classification problem of estimating p(k|x) in a discrete space. Obviously, the size of the bin, |I j |, affects the fidelity of the approximation. The effects of the bin size are presented in Section 3.1. There is a similarity between the discretization and the idea of BID16. However, it should be noted that the same discretization technique, often called "finite volume method", has been widely used in the numerical simulations of partial differential equations for a long time. The discretization naturally leads to the conventional cross-entropy (CE) minimization. Suppose we have a data set, D R = {(ŷ i, x i);ŷ i ∈ R, x i ∈ R, and i = 1,..., N }. We can define a mapping DISPLAYFORM4 D C provides a training data set for the following CE minimization problem, DISPLAYFORM5 Note, however, that the CE minimization does not explicitly guarantee the smoothness of the estimated distribution. Since CE loss function depends only on P i of a correct label, δ cnk, as a , in the optimization problem every element P i, except for the one corresponding to the correct label, P cn, is penalized in the same way, which is natural in the conventional classification tasks where a geometric proximity between the classes is not relevant. In the present study, however, the softmax layer, or class probability, is used as a discrete approximation to a smooth function. Hence, it is expected that P cn and P cn±1 (i.e., the nearby classes) should be close to each other. To address this issue, in the following Sections 2.2.1 and 2.2.2, we propose two types of regularization to impose the class proximity structure in the CE loss. To explicitly impose the smoothness between the classes, we propose to use a regularized crossentropy (RCE) minimization, defined by the following loss function DISPLAYFORM0 where λ is a penalty parameter and the Laplacian matrix DISPLAYFORM1 RCE is analogous to the penalized maximum likelihood solution for density estimation BID23. Assuming a uniform bin size, |I 0 | = · · · = |I K | = δy, the Laplacian of a distribution can be approximated by a Taylor expansion p (ŷ|x) DISPLAYFORM2 In other words, RCE aims to smooth out the distribution by penalizing local minima or maxima. Alternative to adding an explicit regularization to CE, the smoothness can be achieved by enforcing a spatial correlation in the network output. Here, we use an one-dimensional convolution layer to enforce smoothness. Let o ∈ R K denote the last layer of DE-RNN, which was the input to the softmax layer. We can add a convolution layer, o ∈ R K, on top of o, such that DISPLAYFORM0 where the parameter h determines the smoothness of the estimated distribution. Then, o is supplied to the softmax layer. 
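A minimal sketch of this implicit smoothing is given below; the exact kernel of the convolution layer is not reproduced here, so the Gaussian shape with bandwidth h is an assumption motivated by the kernel-density-estimation analogy noted below:

```python
import numpy as np

def smooth_logits(o, h):
    """Convolve the pre-softmax vector o (length K) with a normalized Gaussian
    kernel of bandwidth h, then apply the softmax, so that neighbouring bins
    share probability mass."""
    K = len(o)
    idx = np.arange(-(K // 2), K - K // 2)
    kernel = np.exp(-0.5 * (idx / h) ** 2)
    kernel /= kernel.sum()
    o_smooth = np.convolve(o, kernel, mode="same")
    e = np.exp(o_smooth - o_smooth.max())
    return e / e.sum()          # smoothed discrete distribution p(k|x)
```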
Using, DE-RNN can now be trained by the standard CE. The implicit regularization, here we call convolution CE (CCE), is analogous to a kernel density estimation. In the modeling of multivariate time series, it is usually assumed that the noise is independent, i.e., the covariance of the noise is a diagonal matrix. In this case, it is straightforward to extend DE-RNN, by using multiple softmax layers as the output of DE-RNN. However, such an independent noise assumption significantly limits the representative capability of an RNN. Here, we propose to use a set of independently trained DE-RNNs to compute the joint PDF of a multivariate time series. Letŷ t be a l-dimensional multivariate time series;ŷ t = (ŷ DISPLAYFORM0 The joint PDF can be represented by a product rule, DISPLAYFORM1 where the dependency on the past trajectory ( Y 0:t, U 0:t) is omitted in the notation for simplicity. Directly learning the joint PDF, p(ŷ t+1 | Y 0:t, U 0:t), in a tensor product space is not scalable. Instead, a set of DE-RNN is trained to represent the conditional PDFs shown on the right hand side of the above expression. Then, the joint PDF can be computed by a product of the Softmax outputs of the DE-RNNs. Note that, although it requires training l DE-RNNs to compute the full joint PDF, there is no dependency between the DE-RNNs in the training phase. So, the set of DE-RNNs can be trained in parallel, which can greatly reduce the training time. The details of the multivariate DE-RNN are explained in Appendix A. The inputs to a DE-RNN are (ŷ t, u t), and the output is the probability distribution, DISPLAYFORM0 Note that D C is used only in the training stage. Then, the moments of the predictive distribution can be easily evaluated, e.g., DISPLAYFORM1 DISPLAYFORM2, and α i−1/2 = 0.5(α i−1 + α i). Next, we consider a multiple-step forecast, which corresponds to computing a temporal evolution of the probability distribution, i.e., p(ŷ t+n | Y 0:t, U 0:t+n−1) for n > 1. For simplicity, the multiple-step forecast is shown only for a univariate time series. An extension to a multivariate time series is straightforward (Appendix A).Applying the of Section 2.2, once the distribution ofŷ t+1 in is computed, the distribution ofŷ t+2 can be similarly obtained as p(ŷ t+2 |s t+1). Observe that s t+1 is computed from a deterministic function of s t, u t+1, andŷ t+1, i.e., DISPLAYFORM3 Here, u t+1 and s t are already known, whileŷ t+1 is a random variable, whose distribution p(ŷ t+1 |s t) is computed from the deterministic function Ψ d (s t). Then, s t+1 is also a random variable. The distribution, p(s t+1 |s t, u t+1), can be obtained by applying a change of variables on p(ŷ t+1 |s t) with a nonlinear mapping Ψ e. Repeating this process, the multiple-step-ahead predictive distribution can therefore be computed as DISPLAYFORM4 Since the high dimensional integration in is intractable, we propose to approximate it by a sequential Monte Carlo method. The Monte Carlo procedure is outlined in Algorithm 1. Input: Y 0:t, U 0:t, number of Monte Carlo samples, N s, and forecast horizon n Output: p(ŷ t+n | Y 0:t, U 0:t+n−1) (density estimation fromŷ t+n) Initialization: Set LSTM states to s 0 = h 0 = 0 Perform a sequential update of LSTM up to time t from the noisy observations (Y 0:t). DISPLAYFORM0 Make N s replicas of the internal state, s DISPLAYFORM1 Compute the predictive distribution ofŷ i t+1 for each sample DISPLAYFORM2 Sample the target variable at t + 1,ŷ i t+1, from the distribution 1. 
Sample the class label from the discrete distribution: DISPLAYFORM3 In this section, DE-RNN is tested against three synthetic and two real data sets. The LSTM architecture used in all of the numerical experiments is identical. Two feed-forward networks are used before and after the LSTM; DISPLAYFORM0 in which ϕ SP and ϕ SM denote the softplus and softmax functions, respectively. The size of the bins is kept uniform, i.e., |I 1 | = · · · = |I K | = δy. The LSTM is trained by using ADAM BID13 with a minibath size of 20 and a learning rate of η = 10 −3. First, we consider a modified Cox-Ingersoll-Ross (CIR) process, which is represented by the following stochastic differential equation, DISPLAYFORM0 in which W is the Weiner process. The original CIR process is used to model the valuation of interest rate derivatives in finance. Equation FORMULA0 is solved by the forward Euler method with the time step size δt = 0.1. The simulation is performed for T =δt to generate the training data and T =δt is used for the testing. Note that the noise component of CIR is multiplicative, which depends on y(t). The experiments are performed for two different bin sizes, dy = 0.08 and 0.04. The DE-RNN has 64 LSTM cells. FIG0 shows the errors in the expectation and the standard deviation with respect to the analytical solution; DISPLAYFORM1 Here, p T denotes the true distribution of the CIR process. The normalized root mean-square errors (NRMSE) are defined as DISPLAYFORM2 DISPLAYFORM3 in which · denotes an average over the testing data, p L is the distribution from the LSTM, and sd[y] denotes the standard deviation of the data. The error in the expectation is normalized against a zeroth-order prediction, which assumes y t+1 = y t.In FIG0, it is clearly shown that the prediction are improved when a regularization is used to impose a smoothness condition. FIG0 and (b), for RCE, e µ and e σ become smaller when a smaller δy is used. As expected, e σ increases when λ is large. But, for the smaller bin size, δy = 0.04, both e µ and e σ are not so sensitive to λ. Similar to RCE, e µ and e σ for CCE decrease at first as the penalty parameter h increases. However, in general, RCE provides a better prediction compared to CCE.NRMEs are listed in TAB0. For a comparison, the predictions by AR and KF are shown. The CIR process is essentially a first-order autoregressive process. So, it is not surprising to see that AR and KF, which are designed for the first-order AR process, outperforms DE-RNN for the prediction of the expectation. However, e σ of AR and KF are much larger than that of DE-RNN, because those models assume an additive noise. The Gaussian process (GP) model has a relatively large e µ. But, GP outperforms AR and KF in the prediction of the noise (e σ). Still, e σ of RCE and CCE are less than 4%, while that of GP is about 10%, indicating that DE-RNN can model the complex noise process much better. In FIG1, a 200-step forecast by DE-RNN is compared with a Monte-Carlo solution of equation FORMULA0. DE-RNN is trained with δy = 0.04 and λ = 200. For the DE-RNN forecast, the testing data is supplied to DE-RNN for the first 100 time steps, i.e., for t = −10 to t = 0, and the SMC multiple-step forecast is performed for the next 200 time steps with 20,000 samples. It is shown that the multiple-step forecast by DE-RNN agrees very well with the MC solution of the CIR process. 
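For concreteness, the sequential Monte Carlo forecast of Algorithm 1 used to produce such multiple-step predictions can be sketched as follows; the lstm_step and softmax_out callables are hypothetical stand-ins for the trained encoder Ψ_e and decoder Ψ_d, and mapping each sampled class to its bin centre is an assumption rather than the exact sampling rule:

```python
import numpy as np

def smc_forecast(lstm_step, softmax_out, state, u_future, centers, n_samples, rng):
    """Propagate the predictive distribution forward with N_s particles.

    lstm_step(state, y, u) -> new internal state     (encoder, Psi_e)
    softmax_out(state)     -> probabilities over the K bins (decoder, Psi_d)
    state    : internal state after feeding the observed history Y_0:t
    u_future : future control inputs, one per forecast step
    centers  : bin centres used to turn sampled classes back into values
    """
    states = [state] * n_samples                    # replicate the internal state
    samples = np.zeros((len(u_future), n_samples))
    for n, u in enumerate(u_future):
        for i in range(n_samples):
            p = softmax_out(states[i])              # discrete p(y | s_i)
            k = rng.choice(len(centers), p=p)       # sample a class label
            y = centers[k]                          # representative value
            states[i] = lstm_step(states[i], y, u)  # feed the sample back
            samples[n, i] = y
    return samples
```

Each row of samples then provides an empirical estimate of p(ŷ_{t+n} | Y_0:t, U_0:t+n-1), from which the expectations and confidence intervals reported in the figures can be computed.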
Note that, in FIG1 (b), the noise process, as reflected in sd[y t], is a function of y t, and hence the multi-step forecast of the noise increases rapidly first and then decreases before reaching a plateau. The SMC forecast can accurately capture the behavior. Such kind of behavior can not be represented if a simple additive noise is assumed. For the next test, we applied DE-RNN for a time series generated from the Mackey-Galss equation BID19; DISPLAYFORM0 We use the parameters adopted from , α = 0.2, β = 10, γ = 0.1, and τ = 17.The Mackey-Glass equation is solved by using a third-order Adams-Bashforth method with a time step size of 0.02. The time series is generated by down-sampling, such that the time interval between consecutive data is δt = 1. A noisy observation is made by adding a white noise; DISPLAYFORM1 t is a zero-mean Gaussian random variable with the noise level sd[t] = 0.3sd [y]. A time series of the length 1.6 × 10 5 δt is generated for the model trainig and another time series of length 2 × 10 3 δt is made for the validation. DE-RNN is trained for δy = 0.04sd[y] and consists of 128 LSTM cells. FIG2 shows the noisy observation and the expectation of the next-step prediction, E[ŷ t+1 |ŷ t], in a phase space. It is shown that DE-RNN can filter out the noise and reconstruct the original dynamics accurately. Even though the noisy data are used as an input, E[ŷ t+1 |ŷ t] accurately represents the original attractor of the chaotic system, indicating a strong de-noising capability of DE-RNN.The estimated probability distribution is shown in FIG2. Without a regularization, the standard CE in a noisy distribution, while the distribution from CCE shows a smooth Gaussian shape. FIG7: (a) 500-step forecast by a regression LSTM (•) and the ground truth . (b) The color contours denote a 500-step forecast of the probability distribution, p(ŷ n+s |ŷ s), and the dashed lines are 95%-CI. The ground truth is shown as the solid line .The prediction errors are shown in table 2. NRMSEs are defined as, DISPLAYFORM2 NRMSEs are computed with respect to the ground truth. Again, e µ compares the prediction error to the zeroth-order prediction. In this example, the errors are not so sensitive to the regularization parameters. The best is achieved by CCE. DE-RNN can make a very good estimation of the noise. The error in the noise component, e σ, is only 2% ∼ 5%. Unlike the CIR process, NRMSEs from KF and ARIMA are much larger than those of DE-RNN. Because the underlying process is a delay-time nonlinear dynamical system, those linear models can not accurately approximate the complex dynamics. Since GP is capable of representing a nonlinear behavior of data, GP outperforms KF and ARIMA both in e µ and e σ. In particular, e σ of GP is similar to that of DE-RNN.However, e µ of GP is about 1.5 times larger than DE-RNN.A multiple-step forecast of the Mackey-Glass time series is shown in FIG7. In the validation time series, the observations in t ∈δt are supplied to the DE-RNN to develop the internal state, and a 500-step forecast is made for t ∈δt. In FIG7 (a), it is shown that a multiple-step forecast by a standard regression LSTM approximates y(t) very well initially, e.g, for t < 80δt, but eventually diverges for larger t. Because of the Mackey-Glass time series is chaotic, theoretically it is impossible to make a long time forecast. But, in the DE-RNN forecast, y(t) is bounded by the 95%-confidence interval (CI) even for the 500-step forecast. 
Note that the uncertainty, denoted by 95%-CI grows only at a very mild rate in time. In fact, it is observed that CI is not a monotonic function of time. In DE-RNN, the 95%-CI may grow or decrease following the dynamics of the system, while for the conventional time series models, such as ARIMA and KF, the uncertainty is a non-decreasing function of time.3.3 MAUNA LOA CO 2 OBSERVATIONIn this experiments, DE-RNN is tested against the atmospheric CO 2 observation at Mauna Loa Observatory, Hawaii BID12 ). The CO 2 data set consists of weekly-average atmospheric CO 2 concentration from Mar-29-1958 to Sep-23-2017 FIG3. The data from Mar-29-1958 to Apr-01-2000 is used to train DE-RNN and a 17-year forecast is made from Apr-01-2000 to Sep-23-2017. This CO 2 data has been used in previous studies BID6 BID22. In DE-RNN, 64 LSTM cells and δy = 0.1sd[dy t], in which dy t = y t+1 − y t, are used. The 17-year DE-RNN forecast, with 1,000 MC samples, is shown in FIG3 (b). DE-RNN well represents the growing trend and the oscillatory patten of the CO 2 data. The CO 2 data is nonstationary, where the rate of increase of CO 2 is an increasing function of time. Since DE-RNN is trained against the history data, where the rate of CO 2 increase is smaller than the current, it is expected that the forecast will underestimate the future CO 2. E[ŷ n+s |ŷ s] agrees very well with the observation for the first 200 weeks, but eventually underestimates CO 2 concentration. It is interesting to observe that the upper bound of the 95%-CI grows more rapidly than the expectation and provides a good approximation of the observation. For a comparison, the forecast by a regression LSTM is also shown. Similar to the chaotic Mackey-Glass time series, the regression LSTM makes a good prediction for a short time, e.g., t < 100 weeks, but eventually diverges from the observation. Note that the lower bound of 95%-CI encompasses the regression LSTM. FIG3 (c) shows a forecast made by GP, following setup suggested by BID21. For a shortterm forecast (< 50 weeks), GP provides a sharper estimate of the uncertainty, i.e., a smaller 95%-CI interval. However, even for a mid-term forecast, 100 ∼ 600 weeks, the ground truth is close or slightly above the upper bound of 95%-CI. Because of the different behaviors, it is difficult to conclude which method, DE-RNN or GP, provides a better forecast. But, it should be noted that the GP is based on a combination of handcrafted kernels specifically designed for this particular problem, while such careful tuning is not required for DE-RNN. In the last experiment, IBM Power System S822LC and NAS Parallel Benchmark (NPB) are used to generate the temperature trace. FIG4 shows the temperature of a CPU. The temperature sensor generates a discrete data, which has a resolution of 1 • C. The CPU temperature is controlled by three major parameters; CPU frequency, CPU utilization, and cooling fan speed. In this experiment, we have randomized the CPU frequencies and job arrival time to mimic the real workload behavior, while the fan speed is fixed to 3300RPM. The time step size is δt = 2 seconds. Accurate forecast of CPU temperature for a future workload scenario is essential in developing an energy-efficient control strategy for the thermal management of a cloud system. FIG4 (c) and (d) show multiple-step forecasts of the CPU temperature by RCE and a regression LSTM, respectively. The bin size is δy = 0.18• C, which is smaller than the sensor resolution. 
In the forecast, the future control parameters are given to DE-RNN. In other words, DE-RNN predicts the probability distribution of future temperature with respect to a control scenario, i.e., p(ŷ t+n | Y 0:t, U 0:t+n−1). The forecast is made by using 5,000 Monte Carlo samples. Here, 1800-step forecast is made, t = 0 ∼ 3, 600 sec. and only the in t ∈ sec. is shown. While the regression LSTM makes a very noisy prediction near the local peak temperature at t 500, RCE provides a much more stable forecast. TAB2 shows the l ∞ -errors, i.e., maximum absolute difference. The maximum error is observed near the peak temperature at t 500. ARIMA, KF, and GP are also tested for the multiple-step forecast, but the are not shown because their performance is much worse than the LSTMs. The changes in the temperature are associated with step changes of some control parameters. Such abrupt transitions seem to cause the large oscillation in the regression LSTM prediction. But, for RCE or CCE, the prediction is made from an ensemble of Monte Carlo samples, which makes it more robust to such abrupt changes. Also, note that for t < 200 sec., RCE prediction (53.4• C) is in between the two discrete integer levels, 53• C and 54• C, which correctly reflects the uncertainty in the measurement, while the regression LSTM (53.1• C) more closely follows one of the two levels. Finally, in this experiment we evaluate the performance of the multivariate DE-RNN, for which we used a noisy multivariate time series generated by the Lorenz equations BID17 dy DISPLAYFORM0 DISPLAYFORM1 We used the coefficients from BID17, which are α 1 = 10, α 2 = 8/3, and α 3 = 28. The system of equations FORMULA1 is solved by using a third-order Adams-Bashforth method with a time step size of 0.001 and a time series data set is generated by downsampling, such that δt = 0.02. A multivariate joint normal distribution is added to the ground truth to generate a noisy time series, i.e.,   ŷ DISPLAYFORM2 Here, the noise level is set to σ 1 = 0.2sd[y DISPLAYFORM3 t+1, Y 0:t), and p(ŷ DISPLAYFORM4 t+1, Y 0:t), re- DISPLAYFORM5 The moments of the joint PDF are computed by the Monte Carlo method with a sample size of 5 × 10 3. It is shown that DE-RNN makes a very good prediction of both expectations and covariances. The error in the covariance is less than 4% except for Σ 12. It is shown that DE-RNN is able to make correct predictions of not only the magnitude, but also the signs of the covariance. A vector autoregressive model (VAR) also predicts the signs of the covariance correctly. But, the errors in the expectation and covariances are much larger than DE-RNN. The GP used in the experiment assumes an independent noise, i.e. Σ = ρ 2 I. Hence, e Σij is not evaluated for GP. Similar to the MackeyGlass time series, GP outperforms VAR, but the errors are larger than DE-RNN. The error in e σ is about 10 times larger than DE-RNN, while e µ of GP is about 3 times larger than DE-RNN. FIG6 shows 300-step forecasts of the Lorenz time series. The expectations from DE-RNNs make a good prediction of the ground truth up to t − s < 1.5. Then, the expectations start to diverge from the ground truth, which is accompanied by a sudden increase in the 95%-CI. For a longer-time forecast, e.g., t − s > 4, the 95%-CI exhibits an oscillatory patten depending on the dynamics of the Lorenz system and well captures the oscillations in the ground truth. We present DE-RNN to compute the time evolution of a probability distribution for complex time series data. 
DE-RNN employs LSTM to learn multiscale, nonlinear dynamics from the noisy observations, which is supplemented by a softmax layer to approximate a probability density function. To assign probability to the softmax output, we use a mapping from R to N +, which leads to a cross-entropy minimization problem. To impose a geometric structure in the distribution, two regularization strategies are proposed. The regularized cross-entropy method is analogous to the penalized maximum likelihood estimate for the density estimation, while the convolution cross-entropy method is motivated by the kernel density estimation. The proposed algorithm is validated against three synthetic data set, for which we can compare with the analytical solutions, and two real data sets. Recall the product rule from Section 2.3 DISPLAYFORM0 The extension of DE-RNN algorithm for univariate data to an l-dimensional multivariate time series is straightforward. First, define the discretization grid points, DISPLAYFORM1 Ki ), for every variable, i.e., i = 1, · · ·, l. Here, K i is the number of the discretization intervals for the i-th variable. Then, we can define the discretization intervals, I DISPLAYFORM2, and the mapping functions, DISPLAYFORM3 The first component of the product rule is the marginal PDF, p(ŷt+1). Hereafter, the obvious dependency on Y 0:t, U 0:t in the notation is omitted for simplicity. The marginal PDF can be computed by the same method as for the univariate time series. To train DE-RNNs for the conditional PDFs for the i-th variable, p(ŷ DISPLAYFORM4 t+1), the original time series data of length N, D R = {ŷ t ;ŷ t ∈ R l, and t = 1, . . ., N}, is discretized by using C (i), which gives us DISPLAYFORM5 ∈ N +,ŷ t ∈ R l, and t = 1,..., N }, where c DISPLAYFORM6, is computed by an LSTM as DISPLAYFORM7 in which s DISPLAYFORM8 t is the internal state of the i-th variable. In other words, in the training of the DE-RNN for p(ŷ DISPLAYFORM9 t+1), the input vector is the variables at the current time step,ŷ t, combined with the conditioning variables in the next step, (ŷ DISPLAYFORM10 t+1), and the target is the class label, c DISPLAYFORM11 t+1, in the next time step. The DE-RNN can be trained by minimizing RCE or CCE as described in Section 2.2. Observe that during the training phase, each DE-RNN is independent from each other. Therefore, all the DE-RNNs can be trained in parallel, significantly improving the computational efficiency, enabling the method to scale only linearly with the number of dimensions l. Once all the conditional PDFs are obtained, the joint PDF can be computed by a product of the DE-RNN outputs. For a demonstration, the covariance of a bivariate time series can be computed as DISPLAYFORM12 where the time index, (t+1), is omitted in the notation of the softmax output, Pj, and the subscript j denotes the j-th element of the softmax layer. Note that, although there is no dependency between DE-RNNs during training, in the prediction phase of computing the joint PDF, there is a hierarchical dependency between all the DE-RNNs. This kind of direct numerical integration does not scale well for number of dimensions l 1. For a high dimensional system, a sparse grid BID25 or a Monte Carlo method can be used to speed up the numerical integration. We outline a Monte Carlo procedure in Algorithm 2. Comparing with Algorithm 1, an extension of Algorithm 2 for a multiple-step forecast of a multivariate time series is straightforward. 
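A compact sketch of such a Monte Carlo procedure for the multivariate joint PDF is shown below; the conditional_softmax callable is a hypothetical placeholder for the j-th DE-RNN's softmax output given the already-sampled components, not the authors' interface:

```python
import numpy as np

def sample_joint(conditional_softmax, centers, n_samples, rng):
    """Draw joint samples (y^(1), ..., y^(l)) by chaining the conditional DE-RNNs.

    conditional_softmax(j, prefix) -> probabilities over the K_j bins of y^(j)
        given the already-sampled components prefix = [y^(1), ..., y^(j-1)]
        (and, implicitly, the shared history held inside each DE-RNN).
    centers[j] : bin centres for variable j.
    """
    l = len(centers)
    samples = np.zeros((n_samples, l))
    for n in range(n_samples):
        prefix = []
        for j in range(l):
            p = conditional_softmax(j, prefix)
            k = rng.choice(len(centers[j]), p=p)
            prefix.append(centers[j][k])
        samples[n] = prefix
    return samples
```

An empirical estimate of the covariance then follows directly from np.cov applied to the returned samples.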
Algorithm 2 (as outlined above): for n = 1, ..., N_s do: draw a sample ŷ^(1),n_{t+1} from P^(1)_{t+1}; for j = 2, ..., l do: compute the conditional distribution for ŷ^(j),n_{t+1}; end for; end for.

In this Section we present a few extra details of the proposed DE-RNN algorithm. FIG10 shows the architecture of DE-RNN as was used during the experiments in Section 3. In Figure 9 we show the process of computing one-step-ahead predictions of univariate time series as was presented in Section 2.2. Note that since DE-RNN estimates an approximation of the predictive probability distribution, the predicted value, e.g., for time step t+1, is the discrete approximation of E[ŷ_{t+1} | Y_0:t, U_0:t], i.e., the expectation of ŷ_{t+1} given all the observations and control inputs up to time t. Finally, in FIG0 we show the details of the multi-step forecast for univariate time series as was presented in Algorithm 1, in Section 2.4. Using the sequential Monte Carlo method, the discrete approximation of the predictive distribution p(ŷ_{t+n} | Y_0:t, U_0:t+n-1) is estimated using N_s samples.

Figure 9: Details of the computation for the one-step-ahead predictions. At a given step the model computes a discrete approximation of the expectation for the next-step observation.
Proposed RNN-based algorithm to estimate predictive distribution in one- and multi-step forecasts in time series prediction problems
1,775
scitldr
In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making. Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. In the case of humans, the visual working memory (VWM) task is a standard one in which the subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. Our work is a study of multiple ways to learn a working memory model using recurrent neural networks that learn to remember input images across timesteps. We train these neural networks to solve the working memory task by training them with a sequence of images in supervised and reinforcement learning settings. The supervised setting uses image sequences with their corresponding labels. The reinforcement learning setting is inspired by the popular view in neuroscience that the working memory in the prefrontal cortex is modulated by a dopaminergic mechanism. We consider the VWM task as an environment that rewards the agent when it remembers past information and penalizes it for forgetting. We quantitatively estimate the performance of these models on sequences of images from a standard image dataset (CIFAR-100). Further, we evaluate their ability to remember and recall as they are increasingly trained over episodes. Based on our analysis, we establish that a gated recurrent neural network model with long short-term memory units trained using reinforcement learning is powerful and more efficient in temporally consolidating the input spatial information. This work is an initial analysis as a part of our ultimate goal to use artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the VWM cognitive task to understand various memory mechanisms of the brain. Memory is an essential component for solving many tasks intelligently. Most sequential tasks involve the need for a mechanism to maintain a context. In the brain, working memory serves as a work space to encode and maintain the most relevant information over a short period of time, in order to use it to guide behavior for cognitive tasks. Several cognitive tasks have been proposed in the Neuropsychology literature to study and understand the working memory in animals. The Visual Working Memory Task (VWM task), or the classic N-back task, is one of the simplest. On the other hand, with artificial intelligence systems, there has been very good progress in models that learn from sequences of inputs using artificial neural networks as memory for all types of learning (supervised, unsupervised and reinforcement). We intend to use these developments as an ideal opportunity for synergy to computationally model the working memory system of the brain. As memory is an important aspect of both artificial intelligence and neuroscience, there are some good studies that helped us choose our models, as discussed in Section 2. For all experiments in both supervised and reinforcement learning settings, 100 images were drawn from the CIFAR-100 dataset. The problem solved by all the models is a binary classification problem, predicting unseen/seen. The performance of all the models in the experiments was measured using the accuracy metric, calculated based on the number of correct predictions for the 100 images in a sequence (as a %).
This evaluation was repeated for 10 independent trials as a part of ablation studies.
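A minimal sketch of the seen/unseen evaluation described above is given below; the episode construction (repeat probability, image identifiers) is an assumption for illustration, since only the sequence length and the accuracy metric are specified here:

```python
import numpy as np

def vwm_sequence(n_images, pool_size, repeat_prob, rng):
    """Build one VWM episode: a sequence of image ids with seen/unseen labels."""
    seq, labels, seen = [], [], set()
    for _ in range(n_images):
        if seen and rng.random() < repeat_prob:
            img = int(rng.choice(sorted(seen)))    # show an already-seen image
        else:
            img = int(rng.integers(pool_size))     # draw a (possibly new) image
        labels.append(int(img in seen))            # 1 = seen before, 0 = unseen
        seen.add(img)
        seq.append(img)
    return seq, labels

def episode_accuracy(predict, seq, labels):
    """Accuracy (%) of a model's seen/unseen predictions over one episode."""
    correct = sum(int(predict(x) == y) for x, y in zip(seq, labels))
    return 100.0 * correct / len(seq)
```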
LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex
1,776
scitldr
Nonlinearity is crucial to the performance of a deep (neural) network (DN). To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the r\^{o}le played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling. In particular, DN layers constructed from these operations can be interpreted as {\em max-affine spline operators} (MASOs) that have an elegant link to vector quantization (VQ) and $K$-means. While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax. {\em This paper extends the MASO framework to these and an infinitely large class of new nonlinearities by linking deterministic MASOs with probabilistic Gaussian Mixture Models (GMMs).} We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural ``hard'' VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding ``soft'' VQ inference problems. We further extend the framework by hybridizing the hard and soft VQ optimizations to create a $\beta$-VQ inference that interpolates between hard, soft, and linear VQ inference. A prime example of a $\beta$-VQ DN nonlinearity is the {\em swish} nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation. Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing orthogonality in its linear filters. Deep (neural) networks (DNs) have recently come to the fore in a wide range of machine learning tasks, from regression to classification and beyond. A DN is typically constructed by composing a large number of linear/affine transformations interspersed with up/down-sampling operations and simple scalar nonlinearities such as the ReLU, absolute value, sigmoid, hyperbolic tangent, etc. BID13. Scalar nonlinearities are crucial to a DN's performance. Indeed, without nonlinearity, the entire network would collapse to a simple affine transformation. But to date there has been little progress understanding and unifying the menagerie of nonlinearities, with few reasons to choose one over another other than intuition or experimentation. Recently, progress has been made on understanding the rôle played by piecewise affine and convex nonlinearities like the ReLU, leaky ReLU, and absolute value activations and downsampling operations like max-, average-, and channel-pooling BID1. In particular, these operations can be interpreted as max-affine spline operators (MASOs) BID16; BID14 that enable a DN to find a locally optimized piecewise affine approximation to the prediction operator given training data. A spline-based prediction is made in two steps. First, given an input signal x, we determine which region of the spline's partition of the domain (the input signal space) it falls into. 
Second, we apply to x the fixed (in this case affine) function that is assigned to that partition region to obtain the prediction y = f (x).The key of BID1 is any DN layer constructed from a combination of linear and piecewise affine and convex is a MASO, and hence the entire DN is merely a composition of MASOs. MASOs have the attractive property that their partition of the signal space (the collection of multidimensional "knots") is completely determined by their affine parameters (slopes and offsets). This provides an elegant link to vector quantization (VQ) and K-means clustering. That is, during learning, a DN implicitly constructs a hierarchical VQ of the training data that is then used for splinebased prediction. This is good progress for DNs based on ReLU, absolute value, and max-pooling, but what about DNs based on classical, high-performing nonlinearities that are neither piecewise affine nor convex like the sigmoid, hyperbolic tangent, and softmax or fresh nonlinearities like the swish BID20 that has been shown to outperform others on a range of tasks?Contributions. In this paper, we address this gap in the DN theory by developing a new framework that unifies a wide range of DN nonlinearities and inspires and supports the development of new ones. The key idea is to leverage the yinyang relationship between deterministic VQ/K-means and probabilistic Gaussian Mixture Models (GMMs) BID3. Under a GMM, piecewise affine, convex nonlinearities like ReLU and absolute value can be interpreted as solutions to certain natural hard inference problems, while sigmoid and hyperbolic tangent can be interpreted as solutions to corresponding soft inference problems. We summarize our primary contributions as follows:Contribution 1: We leverage the well-understood relationship between VQ, K-means, and GMMs to propose the Soft MASO (SMASO) model, a probabilistic GMM that extends the concept of a deterministic MASO DN layer. Under the SMASO model, hard maximum a posteriori (MAP) inference of the VQ parameters corresponds to conventional deterministic MASO DN operations that involve piecewise affine and convex functions, such as fully connected and convolution matrix multiplication; ReLU, leaky-ReLU, and absolute value activation; and max-, average-, and channelpooling. These operations assign the layer's input signal (feature map) to the VQ partition region corresponding to the closest centroid in terms of the Euclidean distance, Contribution 2: A hard VQ inference contains no information regarding the confidence of the VQ region selection, which is related to the distance from the input signal to the region boundary. In response, we develop a method for soft MAP inference of the VQ parameters based on the probability that the layer input belongs to a given VQ region. Switching from hard to soft VQ inference recovers several classical and powerful nonlinearities and provides an avenue to derive completely new ones. We illustrate by showing that the soft versions of ReLU and max-pooling are the sigmoid gated linear unit and softmax pooling, respectively. We also find a home for the sigmoid, hyperbolic tangent, and softmax in the framework as a new kind of DN layer where the MASO output is the VQ probability. Contribution 3: We generalize hard and soft VQ to what we call β-VQ inference, where β ∈ is a free and learnable parameter. This parameter interpolates the VQ from linear (β → 0), to probabilistic SMASO (β = 0.5), to deterministic MASO (β → 1). 
We show that the β-VQ version of the hard ReLU activation is the swish nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc through experimentation BID20.Contribution 4: Seen through the MASO lens, current DNs solve a simplistic per-unit (per-neuron), independent VQ optimization problem at each layer. In response, we extend the SMASO GMM to a factorial GMM that that supports jointly optimal VQ across all units in a layer. Since the factorial aspect of the new model would make naïve VQ inference exponentially computationally complex, we develop a simple sufficient condition under which a we can achieve efficient, tractable, jointly optimal VQ inference. The condition is that the linear "filters" feeding into any nonlinearity should be orthogonal. We propose two simple strategies to learn approximately and truly orthogonal weights and show on three different datasets that both offer significant improvements in classification per-formance. Since orthogonalization can be applied to an arbitrary DN, this and our theoretical understanding are of independent interest. This paper is organized as follows. After reviewing the theory of MASOs and VQ for DNs in Section 2, we formulate the GMM-based extension to SMASOs in Section 3. Section 4 develops the hybrid β-VQ inference with a special case study on the swish nonlinearity. Section 5 extends the SMASO to a factorial GMM and shows the power of DN orthogonalization. We wrap up in Section 6 with directions for future research. Proofs of the various appear in several appendices in the Supplementary Material. We first briefly review max-affine spline operators (MASOs) in the context of understanding the inner workings of DNs BID1 BID14, with each spline formed from R piecewise affine and convex mappings. The MASO parameters consist of the "slopes" A ∈ R K×R×D and the "offsets/biases" B ∈ R K×R. See Appendix A for the precise definition. Given the input x ∈ R D and parameters A, B, a MASO produces the output z ∈ R K via DISPLAYFORM0 where [z] k denotes the k th dimension of z. The three subscripts of the slopes tensor [A] k,r,d correspond to output k, partition region r, and input signal index d. The two subscripts of the offsets/biases tensor [B] k,r correspond to output k and partition region r. An important consequence of FORMULA0 is that a MASO is completely determined by its slope and offset parameters without needing to specify the partition of the input space (the "knots" when D = 1). Indeed, solving automatically computes an optimized partition of the input space R D that is equivalent to a vector quantization (VQ) BID19; BID9. We can make the VQ aspect explicit by rewriting in terms of the Hard-VQ (HVQ) matrix T H ∈ R K×R. that contains K stacked one-hot row vectors, each with the one-hot position at index [t] k ∈ {1, . . ., R} corresponding to the arg max over r = 1,..., R of. Given the HVQ matrix, (or equivalently, a region of the input space), the input-output mapping is affine and fully determined by DISPLAYFORM1 We retrieve from by noting that DISPLAYFORM2 The key for this paper is that the layers of a very large class of DN are MASOs. Hence, such a DN is a composition of MASOs, where each layer MASO has as input the feature map DISPLAYFORM3 and produces DISPLAYFORM4, with corresponding to the layer. Each MASO has thus specific parameters A , B .Theorem 1. 
Any DN layer comprising a linear operator (e.g., fully connected or convolution) composed with a convex and piecewise affine operator (such as a ReLU, leaky-ReLU, or absolute value activation; max/average/channel-pooling; maxout; all with or without skip connections) is a MASO Balestriero & Baraniuk (2018a; .Appendix A provides the parameters A , B for the MASO corresponding to the th layer of any DN constructed from linear plus piecewise affine and convex components. Given this connection, we will identify z (−1) above as the input (feature map) to the MASO DN layer and z as the output (feature map). We also identify [z ] k in FORMULA0 and FORMULA1 as the output of the k th unit (aka neuron) of the th layer. MASOs for higher-dimensional tensor inputs/outputs are easily developed by flattening. The MASO/HVQ connection provides deep insights into how a DN clusters and organizes signals layer by layer in a hierarchical fashion BID1. However, the entire ap-proach requires that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax. The goal of this paper is to extend the MASO analysis framework of Section 2 to these and an infinitely large class of other nonlinearities by linking deterministic MASOs with probabilistic Gaussian Mixture Models (GMMs). For now, we focus on a single unit k from layer of a MASO DN, which contains both linear and nonlinear operators; we generalize below in Section 5. The key to the MASO mechanism lies in the VQ variables [t ] k ∀k, since they fully determine the output via. For a special choice of bias, the VQ variable computation is equivalent to the K-means algorithm BID1. DISPLAYFORM0, the MASO VQ partition corresponds to a K- DISPLAYFORM1 For example, consider a layer using a ReLU activation function. Unit k of that layer partitions its input space using a K-means model with R = 2 centroids: the origin of the input space and the unit layer parameter [A ] k,1,·. The input is mapped to the partition region corresponding to the closest centroid in terms of the Euclidean distance, and the corresponding affine mapping for that region is used to project the input and produce the layer output as in FORMULA1.We now leverage the well-known relationship between K-means and Gaussian Mixture Models (GMMs) BID4 to GMM-ize the deterministic VQ process of max-affine splines. As we will see, the constraint on the value of B k,rin Proposition 1 will be relaxed thanks to the GMM's ability to work with a nonuniform prior over the regions (in contrast to K-means).To move from a deterministic MASO model to a probabilistic GMM, we reformulate the HVQ selection variable [t DISPLAYFORM2 Armed with this, we define the following generative model for the layer input z ( −1) as a mixture of R Gaussians with mean DISPLAYFORM3 and identical isotropic covariance with parameter σ DISPLAYFORM4 with ∼ N(0, Iσ 2). Note that this GMM generates an independent vector input z (−1) for every unit k = 1,..., D in layer. For reasons that will become clear below in Section 3.3, we will refer to the GMM model as the Soft MASO (SMASO) model. We develop a joint, factorial model for the entire MASO layer (and not just one unit) in Section 5. Given the GMM and an input z (−1), we can compute a hard inference of the optimal VQ selection variable [t ] k via the maximum a posteriori (MAP) principle DISPLAYFORM0 The following is proved in Appendix E.1.Theorem 2. 
Given a GMM with parameters σ 2 = 1 and DISPLAYFORM1, t = 1,..., R , the MAP inference of the latent selection variable [t ] k given in can be computed via the MASO HVQ DISPLAYFORM2 Note in Theorem 2 that the bias constraint of Proposition 1 (which can be interpreted as imposing a uniform prior [π ] k,· ) is completely relaxed. HVQ inference of the selection matrix sheds light on some of the drawbacks that affect any DN employing piecewise affine, convex activation functions. First, during gradient-based learning, the gradient will propagate back only through the activated VQ regions that correspond to the few 1-hot entries in T H. The parameters of other regions will not be updated; this is known as the "dying neurons phenomenon" BID22; BID0. Second, the overall MASO mapping is continuous but not differentiable, which leads to unexpected gradient jumps during learning. Third, the HVQ inference contains no information regarding the confidence of the VQ region selection, which is related to the distance of the query point to the region boundary. As we will now see, this extra information can be very useful and gives rise to a range of classical and new activation functions. We can overcome many of the limitations of HVQ inference in DNs by replacing the 1-hot entries of the HVQ selection matrix with the probability that the layer input belongs to a given VQ region DISPLAYFORM0 which follows from the simple structure of the GMM. This corresponds to a soft inference of the categorical variable DISPLAYFORM1 H as the noise variance in → 0. Given the SVQ selection matrix, the MASO output is still computed via. The SVQ matrix can be computed indirectly from an entropy-penalized MASO optimization; the following is reproved in Appendix E.2 for completeness. DISPLAYFORM2 Proposition 2, which was first established in BID17; BID18, unifies HVQ and SVQ in a single optimization problem. The transition from HVQ to SVQ FORMULA15 is obtained simply by adding the entropy regularization H(t). Notice that removing the Entropy regularization from leads to the same VQ as. We summarize this finding in Table. 1. Remarkably, switching from HVQ to SVQ MASO inference recovers several classical and powerful nonlinearities and provides an avenue to derive completely new ones. Given a set of MASO parameters A , B for calculating the layer-output of a DN via, we can derive two distinctly different DNs: one based on the HVQ inference of and one based on the SVQ inference of. The following are proved in Appendix E.5. Appendix C discusses how the GMM and SVQ formulations shed new light on the impact of parameter initialization in DC learning plus how these formulations can be extended further. Value for DISPLAYFORM0 DISPLAYFORM1 Changing viewpoint slightly, we can also derive classical nonlinearities like the sigmoid, tanh, and softmax BID13 from the soft inference perspective. Consider a new soft DN layer whose unit output [z ] k is not the piecewise affine spline of but rather the probability [z DISPLAYFORM0) that the input z falls into each VQ region. The following propositions are proved in Appendix E.6. Combining and yields a hybrid optimization for a new β-VQ that recovers hard, soft, and linear VQ inference as special cases DISPLAYFORM0 with the new hyper-parameter [β ] k ∈. The β-VQ obtained from the above optimization problem utilizes [β ] k to balance the impact of the regularization term (introduced in the SVQ derivation), allowing to recover and interpolate the VQ between linear, soft and hard (see Table. 1). 
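Before stating the closed-form solution, it is worth seeing numerically how the three inference modes act on a scalar pre-activation. The sketch below contrasts hard VQ (which yields the ReLU), soft VQ (which yields the sigmoid-gated linear unit), and an interpolating nonlinearity written in the common swish form x·σ(βx) of BID20; this parameterization is an illustrative assumption and is not the exact mapping from [β]_k derived in the theory.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def hard_vq_relu(x):
    # Hard VQ: commit to the closer of the two regions (zero branch vs. identity branch).
    return np.maximum(x, 0.0)

def soft_vq_relu(x):
    # Soft VQ: weight the identity branch by the probability of its region
    # (the sigmoid-gated linear unit).
    return x * sigmoid(x)

def swish(x, beta):
    # Common swish form x * sigmoid(beta * x); large beta approaches the hard ReLU,
    # small beta approaches the linear map 0.5 * x.  (Assumed parameterization.)
    return x * sigmoid(beta * x)

x = np.linspace(-4.0, 4.0, 9)
print(hard_vq_relu(x))
print(soft_vq_relu(x))
print(swish(x, beta=10.0))   # nearly the ReLU
print(swish(x, beta=0.1))    # nearly linear
```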
The following is proved in Appendix E.3. Theorem 3. The unique global optimum of FORMULA19 is given by DISPLAYFORM1 The β-VQ covers all of the theory developed above as special cases: β = 1 yields HVQ, β = TAB1 summarizes some of the many nonlinearities that are within reach of the β-VQ. DISPLAYFORM2 The GMM models the impact of only a single layer unit on the layer-input z (−1). We can easily extend this model to a factorial model for z (−1) that enables all D units at layer to combine their syntheses: DISPLAYFORM0 with ∼ N(0, Iσ 2). This new model is a mixture of R Gaussians with means DISPLAYFORM1 and identical isotropic covariances with variance σ 2. The factorial aspect of the model means that the number of possible combinations of the t values grow exponentially with the number of units. Hence, inferring the latent variables t quickly becomes intractable. However, we can break this combinatorial barrier and achieve efficient, tractable VQ inference by constraining the MASO slope parameters A to be orthogonal DISPLAYFORM2 Orthogonality is achieved in a fully connected layer (multiplication by the dense matrix W composed with activation or pooling) when the rows of W are orthogonal. Orthogonality is achieved in a convolution layer (multiplication by the convolution matrix C composed with activation or pooling) when the rows of C are either non-overlapping or properly apodized; see Appendix E.4 for the details plus the proof of the following . Theorem 4. If the slope parameters A of a MASO are orthogonal in the sense of FORMULA0 DISPLAYFORM3 In an orthogonal, factorial MASO, optimal inference can be performed independently per factor, as opposed to jointly over all of the factors. Orthogonality renders the joint MAP inference of the factorial model's VQs tractable. The following is proved in Appendix E.4.Practically, this not only lowers the computational complexity tremendously but also imparts the benefit of "uncorrelated unit firing," which has been shown to be advantageous in DNs BID21. Beyond the scope of this paper, such an orthogonalization strategy can also be applied to more general factorial models such as factorial; BID10 and factorial HMMs BID11. Table 2: Classification experiment to demonstrate the utility of orthogonal DN layers. For three datasets and the same largeCNN architecture (detailed in Appendix D), we tabulate the classification accuracy (larger is better) and its standard deviation averaged over 5 runs with different Adam learning rates. In each case, orthogonal fully-connected and convolution matrices improve the classification accuracy over the baseline. Corollary 1. When the conditions of Theorem 4 are fulfilled, the joint MAP estimate for the VQs of the factorial model DISPLAYFORM4 and thus can be computed with linear complexity in the number of units. The advantages of orthogonal or near-orthogonal filters have been explored empirically in various settings, from to , typically demonstrating improved performance. Table 2 tabulates the of a simple confirmation experiment with the largeCNN architecture described in Appendix D. We added to the standard cross-entropy loss a term DISPLAYFORM5 2 that penalizes non-orthogonality (recall). We did not cross-validate the penalty coefficient λ but instead set it equal to 1. The tabulated show clearly that favoring orthogonal filters improves accuracy across both different datasets and different learning settings. 
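The penalty used in this experiment can be written compactly. The sketch below, a minimal NumPy version, measures how far the rows of a fully-connected weight matrix are from orthonormality; we assume the penalty takes the standard form ||W Wᵀ − I||²_F (added to the cross-entropy loss with λ = 1, as above), and we omit the apodization needed for convolution matrices.

```python
import numpy as np

def row_orthogonality_penalty(W):
    # ||W W^T - I||_F^2 over the rows of W (assumed form of the non-orthogonality penalty).
    gram = W @ W.T
    return float(np.sum((gram - np.eye(W.shape[0])) ** 2))

rng = np.random.default_rng(0)
W_random = rng.normal(size=(16, 64)) / np.sqrt(64)

# A truly row-orthogonal matrix, for comparison, obtained via QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(64, 16)))
W_ortho = Q.T

print(row_orthogonality_penalty(W_random))           # strictly positive
print(round(row_orthogonality_penalty(W_ortho), 8))  # ~ 0
```

In a training loop this scalar would simply be scaled by λ and added to the task loss for every fully-connected and convolution layer.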
Since the orthogonality penalty does not guarantee true orthogonality but simply favors it, we performed one additional experiment where we reparametrized the fully-connected and convolution matrices using the Gram-Schmidt (GS) process BID7 so that they were truly orthogonal. Thanks to the differentiability of all of the operations involved in the GS process, we can backpropagate the loss to the orthogonalized filters in order to update them in learning. We also used the swish activation, which we showed to be a β-VQ nonlinearity in Section 4. Since the GS process adds significant computational overhead to the learning algorithm, we conducted only one experiment on the largest dataset (CIFAR100). The exactly orthogonalized largeCNN achieved a classification accuracy of 61.2%, which is a major improvement over all of the in the bottom (CIFAR100) cell of Table 2. This indicates that there are good reasons to try to improve on the simple orthogonality-penalty-based approach. Our development of the SMASO model opens the door to several new research questions. First, we have merely scratched the surface in the exploration of new nonlinear activation functions and pooling operators based on the SVQ and β-VQ. For example, the soft-or β-VQ versions of leakyReLU, absolute value, and other piecewise affine and convex nonlinearities could outperform the new swish nonlinearity. Second, replacing the entropy penalty in the and with a different penalty will create entirely new classes of nonlinearities that inherit the rich analytical properties of MASO DNs. Third, orthogonal DN filters will enable new analysis techniques and DN probing methods, since from a signal processing point of view problems such as denoising, reconstruction, compression have been extensively studied in terms of orthogonal filters. This work was partially supported by NSF grants IIS-17-30574 and IIS-18-38177, AFOSR grant FA9550-18-1-0478, ARO grant W911NF-15-1-0316, ONR grants N00014-17-1-2551 and N00014-18-12571, DARPA grant G001534-7500, and a DOD Vannevar Bush Faculty Fellowship (NSSEFF) grant N00014-18-1-2047. propose a brief approximation . Due to the specificity of the convolution operator we are able to provide a tractable inference coupled with an apodization scheme. To demonstrate this, we first highlight that any input can be represented as a direct sum of its apodized patches. Then, we see that filtering apodized patches with a filter is equivalent to convolving the input with apodized filters. We first need to introduce the patch notation. We define a patch P[z ( −1) ](pi, pj) ∈ {1, . . ., I } × {1, . . ., J } as the slice of the input with indices c = 1,..., K , i = (all channels) and (i, j) ∈ {pi,..., pi + I The above highlights the ability to treat an input via its collection of patches with the condition to apply the defined apodization function. With the above, we can demonstrate how minimizing the per patch reconstruction loss leads to minimizing the overall input modeling DISPLAYFORM0 which represents the internal modeling of the factorial model applied across filters and patches. As a , when performing the per position minimization one minimizes an upper bound which ultimately reaches the global minimum as We first present the topologies used in the experiments except for the notation ResNetD-W which is the standard wide ResNet based topology with depth D and width W. We thus have the following network architectures for smallCNN and largeCNN: largeCNN
Reformulate deep network nonlinearities from a vector quantization perspective and bridge most known nonlinearities together.
1,777
scitldr
Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice. A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is often referred to as the inverse protein folding problem. We develop generative models for protein sequences conditioned on a graph-structured specification of the design target. Our approach efficiently captures the complex dependencies in proteins by focusing on those that are long-range in sequence but local in 3D space. Our framework significantly improves upon prior parametric models of protein sequences given structure, and takes a step toward rapid and targeted biomolecular design with the aid of deep generative models. A central goal for computational protein design is to automate the invention of protein molecules with defined structural and functional properties. This field has seen tremendous progess in the past two decades BID14, including the design of novel 3D folds BID20, enzymes BID30, and complexes BID4. However, the current practice often requires multiple rounds of trial-and-error, with first designs frequently failing BID19 BID28. Several of the challenges stem from the bottom-up nature of contemporary approaches that rely on both the accuracy of energy functions to describe protein physics as well as on the efficiency of sampling algorithms to explore the protein sequence and structure space. Here, we explore an alternative, top-down framework for protein design that directly learns a conditional generative model for protein sequences given a specification of the target structure, which is represented as a graph over the sequence elements. Specifically, we augment the autoregressive self-attention of recent sequence models BID34 with graph-based descriptions of the 3D structure. By composing multiple layers of structured self-attention, our model can effectively capture higher-order, interaction-based dependencies between sequence and structure, in contrast to previous parameteric approaches BID24 BID36 that are limited to only the first-order effects. The graph-structured conditioning of a sequence model affords several benefits, including favorable computational efficiency, inductive bias, and representational flexibility. We accomplish the first two by leveraging a well-evidenced finding in protein science, namely that long-range dependencies in sequence are generally short-range in 3D space BID23 BID3. By making the graph and self-attention similarly sparse and localized in 3D space, we achieve computational scaling that is linear in sequence length. Additionally, graph structured inputs offer representational flexibility, as they accomodate both coarse,'flexible backbone' (connectivity and topology) as well as fine-grained (precise atom locations) descriptions of structure. We demonstrate the merits of our approach via a detailed empirical study. Specifically, we evaluate our model at structural generalization to sequences of protein folds that were outside of the training set. Our model achieves considerably improved generalization performance over the recent deep models of protein sequence given structure as well as structure-naïve language models. Generative models for proteins A number of works have explored the use of generative models for protein engineering and design. 
and BID36 proposed neural models for sequences given 3D structure, where the amino acids at different positions in the sequence are predicted independently of one another. BID11 introduced a generative model for protein sequences conditioned on a 1D, context-free grammar based specification of the fold topology. BID8 and BID37 used deep neural networks to model the conditional distribution of letters in a specific position given the structure and sequence of all surrounding residues. In contrast to these works, our model captures the joint distribution of the full protein sequence while grounding these dependencies in terms of long-range interactions arising from the structure. In parallel to the development of structure-based models, there has been considerable work on deep generative models for protein sequences in individual protein families with directed BID26 BID31 and undirected BID33 ) latent variable models. These methods have proven useful for protein engineering, but presume the availability of a large number of sequences from a particular family. More recently, several groups have obtained promising using unconditional protein language models BID6 BID0 BID12 BID27 to learn protein sequence representations that can transfer well to supervised tasks. While serving different purposes, we emphasize that one advantage of conditional generative modeling is to facilitate adaptation to specific (and potentially novel) parts of structure space. Language models trained on hundreds of millions of evolutionary sequences are unfortunately still'semantically' bottlenecked by the much smaller number of evolutionary 3D folds (perhaps thousands) that the sequences design. We propose evaluating protein language models with structure-based splitting of sequence data (Section 3, albeit on much smaller sequence data), and begin to see how unconditional language models may struggle to assign high likelihoods to sequences from out-of-training folds. In a complementary line of research, deep models of protein structure BID2 BID16 BID1 have been proposed recently that could be used to craft 3D structures for input to sequence design. Protein design For classical approaches to computational protein design, which are based on joint modeling of structure and sequence, we refer the reader to a review of both methods and accomplishments in BID14. More recently, proposed a non-parametric approach to protein design in which a target design is decomposed into substructural motifs that are then queried against a protein database. In this work we will focus on comparisons with direct parametric models of the sequence-structure relationship. Self-Attention Our model extends the Transformer BID7 to additionally capture sparse, pairwise relational information between sequence elements. The dense variation of this problem was explored in BID29 and. As noted in those works, incorporating general pairwise information incurs O(N 2) memory (and computational) cost for sequences of length N, which can be highly limiting for training on GPUs. We circumvent this cost by instead restricting the self-attention to the sparsity of the input graph. Given this graph-structured self-attention, our model may also be reasonably cast in the framework of message-passing or graph neural networks BID10 BID5. Our approach is similar to Graph Attention Networks BID35, but augmented with edge features and an autoregressive decoder. We represent protein structure in terms of an attributed graph G = (V, E) with node features V = {v 1, . . 
., v N} and edge features E = {e ij} i =j over the sequence residues (amino acids). This formulation can accommodate different variations on the macromolecular design problem, including 3D considerations For a rigid-body design problem, the structure for conditioning is a fixed set of backbone coordinates X = {x i ∈ R 3 : 1 ≤ i ≤ N}, where N is the number of positions 1. We desire a graph representation of the coordinates G(X) that has two properties:• Invariance. The features are invariant to rotations and translations.• Locally informative. The edge features incident to v i due to its neighbors N(i),i.e. {e ij} j∈N(i), contain sufficient information to reconstruct all adjacent coordinates {x j} j∈N(i) up to rigid-body motion. While invariance is motivated by standard symmetry considerations, the second property is motivated by limitations of current graph neural networks BID10. In these networks, updates to node features v i depend only on the edge and node features adjacent to v i. However, typically, these features are insufficient to reconstruct the relative neighborhood positions {x j} j∈N(i), so individual updates cannot fully depend on the'local environment'. For example, pairwise distances D ij and D il are insufficient to determine if x j and x l are on the same or opposite sides of x i. We develop invariant and locally informative features by first augmenting the points x i with'orientations' O i that define a local coordinate system at each point. We define these in terms of the backbone geometry as DISPLAYFORM0 where b i is the negative bisector of angle between the rays (x i−1 − x i) and (x i+1 − x i), and n i is a unit vector normal to that plane. Formally, we have DISPLAYFORM1 Finally, we derive the spatial edge features e (s) ij from the rigid body transformation that relates reference frame (x i, O i) to reference frame (x j, O j). While this transformation has 6 degrees of freedom, we decompose it into features for distance, direction, and orientation as DISPLAYFORM2 Here r(·) is a function that lifts the distances into a radial basis 2, the term in the middle corresponds to the relative direction of x j in the reference frame of (x i, O i), and q(·) converts the 3 × 3 relative rotation matrix to a quaternion representation. Quaternions represent rotations as four-element vectors that can be efficiently and reasonably compared by inner products. Positional encodings Taking a cue from the original Transformer model, we obtain positional embeddings e (p) ij that encode the role of local structure around node i. Specifically, we need to model the positioning of each neighbor j relative to the node under consideration i. Therefore, we obtain the position embedding as a sinusoidal function of the gap i − j. Note that this is in contrast to the absolute positional encodings of the original Transformer, and instead matches the relative encodings in BID29.Node and edge features Finally, we obtain an aggregate edge encoding vector e ij by concatenating the structural encodings e (s) ij and the positional encodings e (p) ij and then linearly transforming them to have the same dimension as the model. We only include edges in the k-nearest neighbors graph of X, with k = 30 for all experiments. For node features, we compute the three dihedral angles of the protein backbone (φ i, ψ i, ω i) and embed these on the 3-torus as {sin, cos} × (φ i, ψ i, ω i). We also consider'flexible backbone' descriptions of 3D structure based solely on topological binary edge features. 
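As a rough illustration of the rigid-body features above, the sketch below builds the local frame (x_i, O_i) from three consecutive backbone coordinates and assembles a distance-plus-direction edge feature for one neighbor. The number and width of the radial basis functions are placeholders (the text does not fix r(·)), the sign convention of the bisector is assumed, and the quaternion term q(O_iᵀ O_j) is omitted for brevity.

```python
import numpy as np

def local_frames(X, eps=1e-8):
    # X: (N, 3) backbone coordinates; returns frames for the interior residues 1..N-2.
    u_prev = X[:-2] - X[1:-1]                        # x_{i-1} - x_i
    u_next = X[2:] - X[1:-1]                         # x_{i+1} - x_i
    u_prev /= np.linalg.norm(u_prev, axis=-1, keepdims=True) + eps
    u_next /= np.linalg.norm(u_next, axis=-1, keepdims=True) + eps
    b = -(u_prev + u_next)                           # negative bisector (sign convention assumed)
    b /= np.linalg.norm(b, axis=-1, keepdims=True) + eps
    n = np.cross(u_prev, u_next)                     # normal to the bisecting plane
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + eps
    O = np.stack([b, n, np.cross(b, n)], axis=-1)    # (N-2, 3, 3) local rotations
    return X[1:-1], O

def edge_feature(xi, Oi, xj, num_rbf=16, d_max=20.0):
    # Distance lifted onto a radial basis, plus the direction of x_j in frame (x_i, O_i).
    d = np.linalg.norm(xj - xi)
    centers = np.linspace(0.0, d_max, num_rbf)       # RBF count and width are placeholders
    rbf = np.exp(-((d - centers) / (d_max / num_rbf)) ** 2)
    direction = Oi.T @ ((xj - xi) / (d + 1e-8))
    return np.concatenate([rbf, direction])

rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(10, 3)), axis=0)      # toy "backbone"
centers, frames = local_frames(X)
print(edge_feature(centers[0], frames[0], centers[3]).shape)  # (19,)
```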
We combine the relative positional encodings with two binary edge features: contacts that indicate when the distance between C α residues at i and j are less than 8 Angstroms and hydrogen bonds which are directed and defined by the electrostatic model of DSSP BID17. These features implicitly integrate over different 3D backbone configurations that are compatible with the specified topology. In this work, we introduce a Structured Transformer model that draws inspiration from the selfattention based Transformer model BID34 and is augmented for scalable incorporation of relational information. While general relational attention incurs quadratic memory and computation costs, we avert these by restricting the attention for each node i to the set N(i, k) of its k-nearest neighbors in 3D space. Since our architecture is multilayered, iterated local attention can derive progressively more global estimates of context for each node i. Second, unlike the standard Transformer, we also include edge features to embed the spatial and positional dependencies in deriving the attention. Thus, our model generalizes Transformer to spatially structured settings. Autoregressive decomposition We decompose the joint distribution of the sequence given structure p(s|x) autoregressively as DISPLAYFORM0 where the conditional probability p(s i |x, s <i) of amino acid s i at position i is conditioned on both the input structure x and the preceding amino acids s <i = {s 1, . . . s i−1} 4. These conditionals are parameterized in terms of two sub-networks: an encoder that computes refined node embeddings from structure-based node features V(x) and edge features E(x) and a decoder that autoregressively predicts letter s i given the preceding sequence and structural embeddings from the encoder. Encoder Our encoder module is designed as follows. A transformation W h: DISPLAYFORM1 Each layer of the encoder implements a multi-head self-attention component, where head ∈ [L] can attend to a separate subspace of the embeddings via learned query, key and value transformations BID34. The queries are derived from the current embedding at node i while the keys and values from the relational information r ij = (h j, e ij) at adjacent nodes j ∈ N (i, k). Specifically, W DISPLAYFORM2 where m DISPLAYFORM3 The of each attention head l are collected as the weighted sum h DISPLAYFORM4 ij and then concatenated and transformed to give the update DISPLAYFORM5 We update the embeddings with this residual and alternate between these self-attention layers and position-wise feedforward layers as in the original Transformer BID34. We stack multiple layers atop each other, and thereby obtain continually refined embeddings as we traverse the layers bottom up. The encoder yields the embeddings produced by the topmost layer as its output. Decoder Our decoder module has the same structure as the encoder but with augmented relational information r ij that allows access to the preceding sequence elements s <i in a causally consistent manner. Whereas the keys and values of the encoder are based on the relational information r ij = (h j, e ij), the decoder can additionally access sequence elements s j as DISPLAYFORM6 Here h DISPLAYFORM7 is the embedding of node j in the current layer of the decoder, h (enc) j is the embedding of node j in the final layer of the encoder, and g(s j) is a sequence embedding of amino acid s j at node j. 
This concatenation and masking structure ensures that sequence information only flows to position i from positions j < i, but still allows position i to attend to subsequent structural information. We stack three layers of self-attention and position-wise feedforward modules for the encoder and decoder with a hidden dimension of 128 throughout the experiments 5. Dataset To evaluate the ability of the models to generalize across different protein folds, we collected a dataset based on the CATH hierarchical classification of protein structure BID25. For all domains in the CATH 4.2 40% non-redundant set of proteins, we obtained full chains up to length 500 (which may contain more than one domain) and then cluster-split these at the CATH topology level (i.e. fold level) into training, validation, and test sets at an 80/10/10 split. Chains containing multiple CATH tpologies were purged with precedence for test over validation over train. Our splitting procedure ensured that no two domains from different sets would share the same topologies (folds). The final splits contained 18025 chains in the training set, 1637 chains in the validation set, and 1911 chains in the test set. Optimization We trained models using the learning rate schedule and initialization of BID34, a dropout BID32 rate of 10%, and early stopping based on validation perplexity.5 except for the decoder-only language model experiment which used a hidden dimension of 256 Many protein sequences may reasonably design the same 3D structure BID21, and so we focus on likelihood-based evaluations of model performance. Specifically, we evaluate the perplexity per letter of test protein folds (topologies) that were held out from the training and validation sets. Protein perplexities What kind of perplexities might be useful? To provide context, we first present perplexities for some simple models of protein sequences in TAB0. The amino acid alphabet and its natural frequencies upper-bound perplexity at 20 and ∼17.8, respectively. Random protein sequences under these null models are unlikely to be functional without further selection BID18. First order profiles of protein sequences such as those from the Pfam database BID9, however, are widely used for protein engineering. We found the average perplexity per letter of profiles in Pfam 32 (ignoring alignment uncertainty) to be ∼11.6. This suggests that even models with high perplexities of this order have the potential to be useful models for the space of functional protein sequences. The importance of structure We found that there was a significant gap between unconditional language models of protein sequences and models conditioned on structure. Remarkably, for a range of structure-independent language models, the typical test perplexities turned out to be ∼16-17 TAB1, which were barely better than null letter frequencies TAB0. We emphasize that the RNNs were not broken and could still learn the training set in these capacity ranges. It would seem that protein language models trained on one subset of 3D folds (in our cluster-splitting procedure) generalize poorly to predict the sequences of unseen folds, which is important to consider when training protein language models for protein engineering and design. All structure-based models had (unsurprisingly) considerably lower perplexities. In particular, our Structured Transformer model attained a perplexity of ∼7 on the full test set. 
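Perplexity per letter, the quantity reported throughout these comparisons, is simply the exponentiated average negative log-likelihood over residues; a minimal sketch (natural logarithms assumed):

```python
import numpy as np

def perplexity_per_letter(log_probs):
    # log_probs: one value log p(s_i | x, s_<i) per residue over all evaluated chains.
    log_probs = np.asarray(log_probs, dtype=np.float64)
    return float(np.exp(-np.mean(log_probs)))

# A uniform model over the 20 amino acids attains the alphabet upper bound of 20.
uniform = np.log(np.full(1000, 1.0 / 20.0))
print(perplexity_per_letter(uniform))   # 20.0
```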
When we compared different graph features of protein structure TAB2, we indeed found that using local orientation information was important. We also compared to a recent method SPIN2 that predicts, using deep neural networks, protein sequence profiles given protein structures BID24. Since SPIN2 is computationally intensive (minutes per protein for small proteins) and was trained on complete proteins rather than chains, we evaluated it on two subsets of the full test set: a'Small' subset of the test set containing chains up to length 100 and a'Single chain' subset containing only those models where the single chain accounted for the entire protein record in the Protein Data Bank. Both subsets discarded any chains with structural gaps. We found that our Structured Transformer model considerably improved upon the perplexities of SPIN2 TAB1. We presented a new deep generative model to'design' protein sequences given a graph specification of their structure. Our model augments the traditional sequence-level self-attention of Transformers BID34 with relational 3D structural encodings and is able to leverage the spatial locality of dependencies in molecular structures for efficient computation. When evaluated on unseen folds, the model achieves significantly improved perplexities over the state-of-the-art parametric generative models. Our framework suggests the possibility of being able to efficiently design and engineer protein sequences with structurally-guided deep generative models, and underscores the central role of modeling sparse long-range dependencies in biological sequences. We thank members of the MIT MLPDS consortium for helpful feedback and discussions.
We learn to conditionally generate protein sequences given structures with a model that captures sparse, long-range dependencies.
1,778
scitldr
We provide a novel perspective on the forward pass through a block of layers in a deep network. In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a $\tau$-nice Proximal Stochastic Gradient method. We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method. By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant $L$ used to determine the optimal step size of the above proximal methods. We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via $L$, consistently improves classification accuracy. Deep learning has revolutionized computer vision and natural language processing and is increasingly applied throughout science and engineering BID20. This has motivated the mathematical analysis of various aspects of deep networks, such as the capacity and uniqueness of their representations BID28 BID24 and their global training convergence properties BID10. However, a complete characterization of deep networks remains elusive. For example, Bernoulli dropout layers are known to improve generalization BID29, but a thorough theoretical understanding of their behavior remains an open problem. While basic dropout layers have proven to be effective, there are many other types of dropout with various desirable properties BID22. This raises many questions. Can the fundamental block of layers that consists of a dropout layer followed by a linear transformation and a non-linear activation be further improved for better generalization? Can the choice of dropout layer be made independently from the linear transformation and non-linear activation? Are there systematic ways to propose new types of dropout?We attempt to address some of these questions by establishing a strong connection between the forward pass through a block of layers in a deep network and the solution of convex optimization problems of the following form: DISPLAYFORM0 Note that when f i (a i x) = 1 2 (a i x − y i) 2 and g(x) = x 2 2, Eq. is standard ridge regression. When g(x) = x 1, Eq. has the form of LASSO regression. We show that a block of layers that consists of dropout followed by a linear transformation (fullyconnected or convolutional) and a non-linear activation has close connections to applying stochastic solvers to. Interestingly, the choice of the stochastic optimization algorithm gives rise to commonly used dropout layers, such as Bernoulli and additive dropout, and to a family of other types of dropout layers that have not been explored before. As a special case, when the block in question does not include dropout, the stochastic algorithm reduces to a deterministic one. Our contributions can be summarized as follows. (i) We show that a forward pass through a block that consists of Bernoulli dropout followed by a linear transformation and a non-linear activation is equivalent to a single iteration of τ -nice Proximal Stochastic Gradient, Prox-SG BID34 when it is applied to an instance of. 
We provide various conditions on g that recover (either exactly or approximately) common non-linearities used in practice. (ii) We show that the same block with an additive dropout instead of Bernoulli dropout is equivalent to a single iteration of mS2GD BID16 ) -a mini-batching form of variance-reduced SGD BID12 ) -applied to an instance of. (iii) By expressing both fully-connected and convolutional layers (referred to as linear throughout) as special cases of a high-order tensor product BID2, we derive a formula for the Lipschitz constant L of ∇F (x). As a consequence, we can compute the optimal step size for the stochastic solvers that correspond to blocks of layers. We note that concurrent work BID26 used a different analysis strategy to derive an equivalent for computing the singular values of convolutional layers. (iv) We validate our theoretical analysis experimentally by replacing blocks of layers in standard image classification networks with corresponding solvers and show that this improves the accuracy of the models. Optimization algorithms can provide insight and guidance in the design of deep network architectures BID31 BID14 BID35 BID36. For example, BID35 have proposed a deep network architecture for compressed sensing. Their network, dubbed ADMM-Net, is inspired by ADMM updates BID4 on the compressed sensing objective. Similarly, BID36 demonstrated that unrolling a proximal gradient descent solver BID1 ) on the same problem can further improve performance. The work of BID14 demonstrated a relation between incremental proximal methods and ResNet blocks; based on this observation, they proposed a new architecture (variational networks) for the task of image reconstruction. BID0 proposed to embed optimization problems, in particular linearly-constrained quadratic programs, as structured layers in deep networks. BID21 replaced proximal operators in optimization algorithms by neural networks. BID11 proposed a new matrix layer, dubbed ReEig, that applies a thresholding operation to the eigenvalues of intermediate feature representations that are stacked in matrix form. ReEig can be tightly connected to a proximal operator of the set of positive semi-definite matrices. proposed a new architecture based on a sparse representation construct, Multi-Layer Convolutional Sparse Coding (ML-CSC), initially introduced by BID23. Sparsity on the intermediate representations was enforced by a multi-layer form of basis pursuit. This body of work has demonstrated the merits of connecting the design of deep networks with optimization algorithms in the form of structured layers. Yet, with few exceptions BID0, previous works propose specialized architectures for specific tasks. Our work aims to contribute to a unified framework that relates optimization algorithms to deep layers. A line of work aims to provide rigorous interpretation for dropout layers. For example, BID32 showed that dropout is linked to an adaptively balanced 2 -regularized loss. BID33 showed that approximating the loss with a normal distribution leads to a faster form of dropout. BID7 BID38 developed a framework that connects dropout with approximate variational inference in Bayesian models. We provide a complementary perspective, in which dropout layers arise naturally in an optimization-driven framework for network design. This section is organized as follows. We introduce our notation and preliminaries in Section 3.1. 
In Section 3.2, we present a motivational example relating a single iteration of proximal gradient descent (Prox-GD) on to the forward pass through a fully-connected layer followed by a nonlinear activation. We will show that several commonly used non-linear activations can be exactly or approximately represented as proximal operators of g(x). In Section 3.3, we unify fully-connected and convolutional layers as special cases of a high-order tensor product. We propose a generic instance of in a tensor setting, where we provide a formula for the Lipschitz constant L of the finite sum structure of. In Section 3.4, we derive an intimate relation between stochastic solvers, namely τ -nice Prox-SG and mS2GD, and two types of dropout layers. FIG6 shows an overview of the connections that will be developed. FIG6: An overview of the tight relation between a single iteration of a stochastic solver and the forward pass through the l th layer in a network that consists of dropout followed by a linear transformation and a non-linear activation. We study an instance of problem with quadratic F (x), where x l−1 are the input activations and x l, the variables being optimized, correspond to the output activations. Varying the type of stochastic solver changes the nature of the dropout layer, while the prior g(x) on the output activations determines the non-linearity Prox 1 L g . As we will be working with tensors, we will follow the tensor notation of BID15. The order of a tensor is the number of its dimensions. In particular, scalars are tensors of order zero, vectors are tensors of order one, and matrices are tensors of order two. We denote scalars by lowercase letters a, vectors by bold lowercase letters a, and matrices by bold capital letters A. We use subscripts a i to refer to individual elements in a vector. Tensors of order three or more will be denoted by cursive capital letters A ∈ R J1×J2×···×Jn. Throughout the paper, we will handle tensors that are of at most order four. High-order tensors with a second dimension of size equal to one are traditionally called vector tensors and denoted A ∈ R J1×1×J3×J4. We use A(i, j, k, z) to refer to an element in a tensor and A(i, j, k, :) to refer to a slice of a tensor. The inner product between tensors of the same size is denoted A, B = i1,...,i N A (i 1, . . ., i N) B (i 1, . . ., i N). The squared Frobenius norm of a tensor A is defined as A 2 F = A, A. Lastly, the superscripts and H are used to denote the transpose and the Hermitian transpose, respectively. As a motivating example, we consider the l th linear layer in a deep network that is followed by a non-linear activation ρ, i.e. x l = ρ(Ax l−1 + b), where A ∈ R n2×n1 and b ∈ R n2 are the weights and biases of the layer and x l−1 and x l are the input and output activations, respectively. Now consider an instance of with a convex function g(x) and DISPLAYFORM0 where A (i, :) is the i th row of A. Such an objective can be optimized iteratively in x l using Prox-GD with the following update equation: DISPLAYFORM1 where the Lipschitz constant L = λ max AA and λ max denotes the maximum eigenvalue. By initializing the iterative optimization at x l = 0, it becomes clear that a single iteration of is equivalent to a fully-connected layer followed by a non-linearity that is implemented by the proximal operator BID6. The choice of g(x) determines the specific form of the non-linearity ρ. Several popular activation functions can be traced back to their corresponding g(x). 
The ReLU, which enforces non-negative output activations, corresponds to the indicator function g(x) = 1 x≥0; the corresponding instance of problem is a non-negative quadratic program. Similar observations for the ReLU have been made in other contexts BID0 BID23. We observe that many other activation functions fit this framework. For example, when g(x) is a squared hinge loss, i.e. DISPLAYFORM2 DISPLAYFORM3, a single update of FORMULA2 is equivalent to a linear layer followed by a Leaky ReLU. TAB0 lists some other choices of g(x) and their induced activations. is not required to exhibit a simple, coordinate-wise separable structure. More complex functions can be used, as long as the proximal operator is easy to evaluate. Interesting examples arise when the output activations have matrix structure. For instance, one can impose nuclear norm regularization g(X) = X * to encourage X to be low rank. Alternatively, one can enforce positive semi-definite structure on the matrix X by defining g(X) = 1 X 0. A similar activation has been used for higher-order pooling BID11.In what follows, we will show that this connection can be further extended to explain dropout layers. Interestingly, specific forms of dropout do not arise from particular forms of objective FORMULA0, but from different stochastic optimization algorithms that are applied to it. Before presenting our main on the equivalence between a forward pass through a block of layers and solving with stochastic algorithms, we provide some key lemmas. These lemmas will be necessary for a unified treatment of fully-connected and convolutional layers as generic linear layers. This generic treatment will enable efficient computation of the Lipschitz constant for both fully-connected and convolutional layers. Lemma 1. Consider the l th convolutional layer in a deep network with some non-linear activation, e.g. Prox g , where the weights A ∈ R n2×n1×W ×H, biases B ∈ R n2×1×W ×H, and input activations X l−1 ∈ R n1×1×W ×H are stacked into 4 th -order tensors. We can describe the layer as DISPLAYFORM0 where HO is the high-order tensor product. Here n 1 is the number of input features, n 2 is the number of output features (number of filters), and W and H are the spatial dimensions of the features. As a special case, a fully-connected layer follows naturally, since HO reduces to a matrix-vector multiplication when W = H = 1.The proof can be found in supplementary material. Note that the order of the dimensions is essential in this notation, as the first dimension in A corresponds to the number of independent filters while the second corresponds to the input features that will be aggregated after the 2D convolutions. Also note that according to the definition of HO in BID2, the spatial size of the filters in A, namely W and H, has to match the spatial dimensions of the input activations X l−1, since the operator HO performs 2D circular convolutions while convolutions in deep networks are 2D linear convolutions. This is not a restriction, since one can perform linear convolution through a zero-padded circular convolution. Lastly, we assume that the values in B are replicated along the spatial dimensions W and H in order to recover the behaviour of biases in deep networks. Given this notation, we will refer to either a fully-connected or a convolutional layer as a linear layer throughout the rest of the paper. 
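To make the correspondence in TAB0 concrete before moving to the tensor setting, the sketch below checks numerically that the proximal operator of the non-negativity indicator reproduces the ReLU, so that a single Prox-GD step from x_l = 0 coincides with the usual forward pass ρ(Ax_{l−1} + b). We do not re-derive the quadratic objective here; the sketch simply assumes, as stated above, that the gradient step from zero evaluates to Ax_{l−1} + b.

```python
import numpy as np

def prox_nonneg_indicator(v):
    # prox of g(x) = indicator{x >= 0}: Euclidean projection onto the non-negative orthant.
    return np.maximum(v, 0.0)

def layer_forward(A, b, x_prev):
    # Standard fully-connected layer followed by a ReLU.
    return np.maximum(A @ x_prev + b, 0.0)

def one_prox_gd_step(A, b, x_prev):
    # One proximal gradient step initialized at x_l = 0; the gradient step from zero
    # is taken to be A @ x_prev + b (assumed, per the construction described above).
    return prox_nonneg_indicator(A @ x_prev + b)

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 16))
b = rng.normal(size=8)
x_prev = rng.normal(size=16)
assert np.allclose(layer_forward(A, b, x_prev), one_prox_gd_step(A, b, x_prev))
```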
Since we are interested in a generic linear layer followed by a non-linearity, we will consider the tensor quadratic version of F (x), denoted F (X): DISPLAYFORM1 Note that if A ∈ R n2×n1×W ×H, then A H ∈ R n1×n2×W ×H, where each of the frontal slices of A(:, :, i, j) is transposed and each filter, A(i, j, :, :), is rotated by 180•. This means that A H HO X aggregates the n 2 filters after performing 2D correlations. This is performed n 1 times independently. This operation is commonly referred to as a transposed convolution. Details can be found in supplementary material. Next, the following lemma provides a practical formula for the computation of the Lipschitz constant L of the finite sum part of FORMULA6: DISPLAYFORM2 where is the 2D discrete Fourier transform along the spatial dimensions W and H.The proof can be found in supplementary material. Lemma 2 states that the Lipschitz constant L is the maximum among the set of maximum eigenvalues of all the possible W × H combinations of the outer product of frontal slices (:, :, i, j) H (:, :, i, j). Note that if W = H = 1, then = A ∈ R n2×n1 since the 2D discrete Fourier transform of scalars (i.e. matrices of size 1 × 1) is an identity mapping. As a consequence, we can simplify to L = max i=j=1 {λ max A(:, :, i, j)A H (:, :, i, j) } = λ max AA, which recovers the Lipschitz constant for fully-connected layers. In this subsection, we present two propositions. The first shows the relation between standard Bernoulli dropout (p is the dropout rate), BerDropout p BID29, and τ -nice Prox-SG. The second proposition relates additive dropout, AddDropout, to mS2GD BID16. We will first introduce a generic notion of sampling from a set. This is essential as the stochastic algorithms sample unbiased function estimates from the set of n 1 functions in. Definition 3.1. BID9. A sampling is a random set-valued mapping with values being the subsets of [n 1] = {1, . . ., n 1}. A sampling S is τ -nice if it is uniform, i.e. Prob (i ∈ S) = Prob (j ∈ S) ∀ i, j, and assigns equal probabilities to all subsets of [n 1] of cardinality τ and zero probability to all others. Various other types of sampling can be found in BID9. We are now ready to present our first proposition. Proposition 1. A single iteration of Prox-SG with τ -nice sampling S on with τ = (1 − p)n 1, zero initialization, and unit step size can be shown to exhibit the update DISPLAYFORM0 which is equivalent to a forward pass through a BerDropout p layer that drops exactly n 1 p input activations followed by a linear layer and a non-linear activation. We provide a simplified sketch for fully-connected layers here. The detailed proof is in the supplement. To see how reduces to the functional form of BerDropout p followed by a fully-connected layer and a non-linear activation, consider W = H = 1. The argument of Prox 1 L g in (without the bias term) reduces to n 1 τ i∈S A(:, i, :, : DISPLAYFORM1 The first equality follows from the definition of HO, while the second equality follows from trivially reparameterizing the sum, with BerDropout p being equivalent to a mask that zeroes out exactly pn 1 input activations. Note that if τ -nice Prox-SG was replaced with Prox-GD, i.e. τ = n 1, then this corresponds to having a BerDropout p layer with dropout rate p = 0; thus, reduces to A BerDropout p (X l−1) = A X l−1, which recovers our motivating example that relates Prox-GD with the forward pass through a fully-connected layer followed by a non-linearity. 
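A small numerical sketch of Proposition 1 in the fully-connected case (W = H = 1): sampling a τ-nice subset S of the n₁ input coordinates and rescaling the partial sum by n₁/τ performs the same computation as masking exactly n₁p activations with inverted-dropout scaling 1/(1 − p) before the linear layer and ReLU. The bias is kept deterministic here, which is a simplification of the full tensor statement.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, p = 16, 8, 0.5
tau = int((1 - p) * n1)                    # tau-nice sample size

A = rng.normal(size=(n2, n1))
b = rng.normal(size=n2)
x_prev = rng.normal(size=n1)

# One tau-nice Prox-SG step from zero initialization
# (prox of the non-negativity indicator, i.e. a ReLU).
S = rng.choice(n1, size=tau, replace=False)
prox_sg_out = np.maximum((n1 / tau) * (A[:, S] @ x_prev[S]) + b, 0.0)

# Bernoulli dropout that drops exactly n1*p coordinates, then linear layer and ReLU.
mask = np.zeros(n1)
mask[S] = 1.0 / (1.0 - p)                  # inverted-dropout rescaling
dropout_out = np.maximum(A @ (mask * x_prev) + b, 0.0)

assert np.allclose(prox_sg_out, dropout_out)
```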
Note that Proposition 1 directly suggests how to apply dropout to convolutional layers. Specifically, complete input features from n 1 should be dropped and the 2D convolutions should be performed only on the τ -sampled subset, where τ = (1 − p)n 1.Similarly, the following proposition shows that a form of additive dropout, AddDropout, can be recovered from a different choice of stochastic solver. Proposition 2. A single outer-loop iteration of mS2GD BID16 ) with unit step size and zero initialization is equivalent to a forward pass through an AddDropout layer followed by a linear layer and a non-linear activation. The proof is given in the supplement. It is similar to Proposition 1, with mS2GD replacing τ -nice Prox-SG. Note that any variance-reduced algorithm where one full gradient is computed at least once can be used here as a replacement for mS2GD. For instance, one can show that the serial sampling version of mS2GD, S2GD BID16, and SVRG BID12 can also be used. Other algorithms such as Stochastic Coordinate Descent with arbitrary sampling are discussed in the supplement. A natural question arises as a consequence of our framework: If common layers in deep networks can be understood as a single iteration of an optimization algorithm, what happens if the algorithm is applied for multiple iterations? We empirically answer this question in our experiments. In particular, we embed solvers as a replacement to their corresponding blocks of layers and show that this improves the accuracy of the models without an increase in the number of network parameters. Experimental setup. We perform experiments on CIFAR-10 and CIFAR-100 BID17 ). In all experiments, training was conducted on 90% of the training set while 10% was left for validation. The networks used in the experiments are variants of LeNet , AlexNet BID18, and VGG16 BID27. We used stochastic gradient descent with a momentum of 0.9 and a weight decay of 5 × 10 −4. The learning rate was set to (10 −2, 10 −3, 10 −4) for the first, second, and third 100 epochs, respectively. For finetuning, the learning rate was initially set to 10 −3 and reduced to 10 −4 after 100 epochs. Moreover, when a block of layers is replaced with a deterministic solver, i.e. Prox-GD, the step size is set to the optimal constant 1/L, where L is computed according to Lemma 2 and updated every epoch without any zero padding as a circular convolution operator approximates a linear convolution in large dimensions BID38. In Prox-SG, a decaying step size is necessary for convergence; therefore, the step size is exponentially decayed as suggested by BID3, where the initial step size is again set according to Lemma 2. Finally, to guarantee convergence of the stochastic solvers, we add the strongly convex function λ 2 X 2 F to the finite sum in, where we set λ = 10 −3 in all experiments. Note that for networks that include a stochastic solver, the network will be stochastic at test time. We thus report the average accuracy and standard deviation over 20 trials. Replacing fully-connected layers with solvers. In this experiment, we demonstrate that (i) training networks with solvers replacing one or more blocks of layers can improve accuracy when trained from scratch, and (ii) the improvement is consistently present when one or more blocks are replaced with solvers at different layers in the network. To do so, we train a variant of LeNet on the CIFAR-10 dataset with two BerDropout p layers. The last two layers are fully-connected layers with ReLU activation. 
We consider three variants of this network: Both fully-connected layers are augmented with BerDropout p (LeNet-D-D), only the last layer is augmented with BerDropout p (LeNet-ND-D), and finally only the penultimate layer is augmented with BerDropout p (LeNet-D-ND). In all cases, we set the dropout rate to p = 0.5. We replace the BerDropout p layers with their corresponding stochastic solvers and run them for 10 iterations with τ = n 1 /2 (the setting corresponding to a dropout rate of p = 0.5). We train these networks from scratch using the same procedure as the baseline networks. The results are summarized in Table 2. It can be seen that replacing BerDropout p with the corresponding stochastic solver (τ -nice Prox-SG) improves performance significantly, for any choice of layer. The results indicate that networks that incorporate stochastic solvers can be trained stably and achieve desirable generalization performance. LeNet-D-ND LeNet-ND-D Baseline 64.39% 71.72% 68.54% Prox-SG 72.86% ± 0.177 75.20% ± 0.205 76.23% ± 0.206 Table 2: Comparison in accuracy between variants of the LeNet architecture on the CIFAR-10 dataset. The variants differ in the location (D or ND) and number of BerDropout p layers for both the baseline networks and their stochastic solver counterpart Prox-SG. Accuracy consistently improves when Prox-SG is used. Accuracy is reported on the test set. Convolutional layers and larger networks. We now demonstrate that solvers can be used to improve larger networks. We conduct experiments with variants of AlexNet 1 and VGG16 on both CIFAR-10 and CIFAR-100. We start by training strong baselines for both AlexNet and VGG16, achieving 77.3% and 92.56% test accuracy on CIFAR-10, respectively. Note that performance on this dataset is nearly saturated. We then replace the first convolutional layer in AlexNet with the deterministic Prox-GD solver, since this layer is not preceded by a dropout layer. The results are summarized in Table 3. We observe that finetuning the baseline network with the solver leads to an improvement of ≈ 1.2%, without any change in the network's capacity. A similar improvement is observed on the harder CIFAR-100 dataset. AlexNet AlexNet-Prox-GD CIFAR-10 77.30% 78.51% CIFAR-100 44.20% 45.53% Table 3: Replacing the first convolutional layer of AlexNet by the deterministic Prox-GD solver yields consistent improvement in test accuracy on CIFAR-10 and CIFAR-100. Results on VGG16 are summarized in Table 4. Note that VGG16 has two fully-connected layers, which are preceded by a BerDropout p layer with dropout rate p = 0.5. We start by replacing only the last layer with Prox-SG with 30 iterations and τ = n 1 /2 (VGG16-Prox-SG-ND-D). We further replace both fully-connected layers that include BerDropout p with solvers (VGG16-Prox-SG-D-D). We observe comparable performance for both settings on CIFAR-10. We conjecture that this might be due to the dataset being close to saturation. On CIFAR-100, a more pronounced increase in accuracy is observed, where VGG-16-Prox-SG-ND-D outperforms the baseline by about 0.7%. We further replace the stochastic solver with a deterministic solver and leave the dropout layers unchanged. We denote this setting as VGG16-Prox-GD in Table 4. Interestingly, this setting performs the best on CIFAR-10 and comparably to VGG16-Prox-SG-ND-D on CIFAR-100. CIFAR-10 92.56% 92.44% ± 0.028 92.57% ± 0.029 92.80% CIFAR-100 70.27% 70.95% ± 0.042 70.44% ± 0.077 71.10% Table 4: Experiments with the VGG16 architecture on CIFAR-10 and CIFAR-100. Accuracy is reported on the test set.
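To illustrate what "replacing a block of layers with its solver" means operationally in these experiments, the following is a hedged sketch of a module that runs several τ-nice Prox-SG iterations in place of a BerDropout_p → Linear → ReLU block. The quadratic objective, the use of ReLU as the proximal operator, the ridge term λ, and the decaying step size are simplifying assumptions on our part (the paper sets the step size from Lemma 2 and uses its cited decay schedule); only the one-iteration special case is guaranteed by Proposition 1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxSGBlock(nn.Module):
    """Drop-in replacement for BerDropout_p -> Linear -> ReLU (a sketch, see caveats above)."""
    def __init__(self, n_in, n_out, tau_ratio=0.5, iters=10, lam=1e-3):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)
        self.tau_ratio, self.iters, self.lam = tau_ratio, iters, lam

    def forward(self, x):                                  # x: (batch, n_in)
        n_in = x.shape[1]
        tau = max(1, int(round(self.tau_ratio * n_in)))
        y = torch.zeros(x.shape[0], self.linear.out_features, device=x.device)
        step = 1.0                                          # placeholder; the paper uses 1/L from Lemma 2
        for _ in range(self.iters):
            # tau-nice sampling: unbiased estimate of W x built from tau input features.
            idx = torch.randperm(n_in, device=x.device)[:tau]
            Wx_est = (n_in / tau) * F.linear(x[:, idx], self.linear.weight[:, idx],
                                             self.linear.bias)
            grad = (1.0 + self.lam) * y - Wx_est            # gradient of the quadratic + ridge term
            y = F.relu(y - step * grad)                     # prox of the non-negative indicator
            step *= 0.9                                     # decaying step size for the stochastic iterates
        return y

# One iteration from zero initialization with tau = (1 - p) * n_in and unit step reduces to
# ReLU(Linear(BerDropout_p(x))), as in Proposition 1.
block = ProxSGBlock(256, 128, tau_ratio=0.5, iters=10)
out = block(torch.randn(32, 256))
```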
Dropout rate vs. τ -nice sampling. In this experiment, we demonstrate that the improvement in performance is still consistently present across varying dropout rates. Since Proposition 1 has established a tight connection between the dropout rate p and the sampling rate τ in, we observe that for different choices of dropout rate the baseline performance improves upon replacing a block of layers with a stochastic solver with the corresponding sampling rate τ. We conduct experiments with 1 AlexNet BID18 was adapted to account for the difference in spatial size of the images in CIFAR-10 and ImageNet BID5 ). The first convolutional layer has a padding of 5, and all max-pooling layers have a kernel size of 2. A single fully-connected layer follows at the end. VGG16 on CIFAR-100. We train four different baseline models with varying choices of dropout rate p ∈ {0, 0.1, 0.9.0.95} for the last layer. We then replace this block with a stochastic solver with a sampling rate τ and finetune the network. Table 5 reports the accuracy of the baselines for varying dropout rates p and compares to the accuracy of the stochastic solver with corresponding τ (Prox-SG). With a high dropout rate, the performance of the baseline network drops drastically. When using the stochastic solver, we observe a much more graceful degradation. For example, with a sampling rate τ that corresponds to an extreme dropout rate of p = 0.95 (i.e. 95% of all input activations are masked out), the baseline network with BerDropout p suffers a 56% reduction in accuracy while the stochastic solver declines by only 5%. Prox-SG Table 5: Comparison of the VGG16 architecture trained on CIFAR-100 with varying dropout rates p in the last BerDropout p layer. We compare the baseline to its stochastic solver counterpart with corresponding sampling rate τ = (1 − p)n 1. Accuracy is reported on the test set. In summary, our experiments show that replacing common layers in deep networks with stochastic solvers can lead to better performance without increasing the number of parameters in the network. The ing networks are stable to train and exhibit high accuracy in cases where standard dropout is problematic, such as high dropout rates. We have presented equivalences between layers in deep networks and stochastic solvers, and have shown that this can be leveraged to improve accuracy. The presented relationships open many doors for future work. For instance, our framework shows an intimate relation between a dropout layer and the sampling S from the set [n 1] in a stochastic algorithm. As a consequence, one can borrow theory from the stochastic optimization literature to propose new types of dropout layers. For example, consider a serial importance sampling strategy with Prox-SG to solve BID37 BID34, where serial sampling is the sampling that satisfies Prob (i ∈ S, j ∈ S) = 0. A serial importance sampling S from the set of functions f i (X) is the sampling such that Prob DISPLAYFORM0 i.e. each function from the set [n 1] is sampled with a probability proportional to the norm of the gradient of the function. This sampling strategy is the optimal serial sampling S that maximizes the rate of convergence solving BID37. From a deep layer perspective, performing Prox-SG with importance sampling for a single iteration is equivalent to a forward pass through the same block of layers with a new dropout layer. Such a dropout layer will keep each input activation with a non-uniform probability proportional to the norm of the gradient. 
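A hedged sketch of the importance-sampling dropout suggested above: each input unit is kept with probability proportional to a per-unit score standing in for the gradient norm of its f_i. Using the magnitude of each unit's contribution to the pre-activation as that score, and the approximate inverse-inclusion-probability rescaling, are our own simplifications rather than details taken from the paper.

```python
import torch

def importance_dropout(x, weight, keep_ratio=0.5, eps=1e-12):
    """x: (batch, n_in); weight: (n_out, n_in). Returns a masked, rescaled copy of x."""
    # Per-unit score: magnitude of each unit's contribution, averaged over the batch
    # (a stand-in for the per-function gradient norm used by importance sampling).
    contrib = x.abs().mean(dim=0) * weight.norm(dim=0)        # (n_in,)
    probs = (contrib + eps) / (contrib + eps).sum()           # non-uniform sampling distribution
    k = max(1, int(keep_ratio * x.shape[1]))
    kept = torch.multinomial(probs, k, replacement=False)     # sample k units without replacement
    mask = torch.zeros_like(probs)
    # Rescale kept units by an approximate inverse inclusion probability, min(1, k * p_i).
    mask[kept] = 1.0 / torch.clamp(probs[kept] * k, max=1.0)
    return x * mask

x = torch.randn(32, 256)
W = torch.randn(128, 256)
x_dropped = importance_dropout(x, W, keep_ratio=0.5)
```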
This is in contrast to BerDropout p where all input activations are kept with an equal probability 1 − p. Other types of dropout arise when considering non-serial importance sampling where |S| = τ > 1.In summary, we have presented equivalences between stochastic solvers on a particular class of convex optimization problems and a forward pass through a dropout layer followed by a linear layer and a non-linear activation. Inspired by these equivalences, we have demonstrated empirically on multiple datasets and network architectures that replacing such network blocks with their corresponding stochastic solvers improves the accuracy of the model. We hope that the presented framework will contribute to a principled understanding of the theory and practice of deep network architectures. A LEAKY RELU AS A PROXIMAL OPERATOR Proof. The proximal operator is defined as Prox g (a) = arg min DISPLAYFORM1 Note that the problem is both convex and smooth. The optimality conditions are given by: DISPLAYFORM2 Since the problem is separable in coordinates, we have: DISPLAYFORM3 The Leaky ReLU is defined as DISPLAYFORM4 which shows that Prox g is a generalized form of the Leaky ReLU with a shift of λ and a slope α = Proof. The proximal operator is defined as Prox g (a) = arg min DISPLAYFORM5 Note that the function g(x) is elementwise separable, convex, and smooth. By equating the gradient to zero and taking the positive solution of the ing quadratic polynomial, we arrive at the closedform solution: DISPLAYFORM6 where denotes elementwise multiplication. It is easy to see that this operator is close to zero for x i << 0, and close to x i for x i >> 0, with a smooth transition for small |x i |.Note that the function Prox g (a) approximates the activation SoftPlus = log(1 + exp (a)) very well. An illustrative example is shown in FIG2. Proposition 5. The proximal operator to g(x) = −γ i log(1 − x i x i) approximates the Tanh non-linearity. Proof. To simplify the exposition, we derive the proximal operator for the case γ = 1. The general case for γ > 0 follows analogously. The proximal operator is defined as Prox g (a) = arg min DISPLAYFORM0 Note that the logarithm is taken element wise, and the objective is convex and smooth. By equating the gradient to zero, it can be seen that the optimal solution is a root of a cubic equation: DISPLAYFORM1 which is defined in each coordinate i separately. DISPLAYFORM2 Since q 2 − p 3 < 0, ∀a i ∈ R, it is guaranteed that all roots are real and distinct. Consequently, the roots can be described as DISPLAYFORM3 Since g(x) is only defined on x ∈ [−1, 1] d, the root that minimizes has to satisfy DISPLAYFORM4 DISPLAYFORM5 It is straightforward to check that DISPLAYFORM6 By substituting f k into and checking inequality it becomes clear that the root corresponding to k = 2 minimizes. By using trigonometric identities the root corresponding to k = 2 can be further simplified to DISPLAYFORM7 which has the approximate shape of the Tanh activation. An example of this operator is shown in FIG2. The proximal operator corresponding to the Sigmoid activation can be derived in a similar fashion by setting g(x) = −γ log(x) − γ log(1 − x). The exposition presented in the paper requires some definitions related to tensors and operators on tensors. We summarize the material here. In all subsequent definitions, we assume D ∈ R n1×n2×n3×n4 and X ∈ R n2×1×n3×n4.Definition D.1. 
BID2 The t-product between high-order tensors is defined as DISPLAYFORM0 where circ HO (D) ∈ R n1n3n4×n2n3n4 and MatVec HO X ∈ R n2n3n4×1.The operator circ HO unfolds an input tensor into a structured matrix. On the other hand, MatVec HO unfolds an input tensor into a vector. The fold and unfold procedures are detailed in BID2. Definition D.2. BID2 The operator DISPLAYFORM1 DISPLAYFORM2 where bdiag : C n1×n2×n3×n4 → C n1n3n4×n2n3n4, maps a tensor to a block diagonal matrix of all the frontal faces D(:, :, i, j). Note that if n 3 = n 4 = 1, bdiag is an identity mapping. Moreover if n 1 = n 2 = 1, bdiag is a diagonal matrix. Due to the structure of the tensor unfold of circ HO , the ant matrix circ HO (D) exhibits the following blockwise diagonalization: DISPLAYFORM3 where F n is the n × n Normalized Discrete Fourier Matrix. Note thatD has the dimensions n 3 and n 4 replaced with the corresponding 2D Discrete Fourier Transforms. That isD(i, j, :, :) is the 2D Discrete Fourier Transform of D(i, j, :, :).For more details, the reader is advised to start with third order tensors in the work of BID13 and move to the work of BID2 is equivalent to (i) performing 2D convolutions spatially along the third and fourth dimensions.(ii) It aggregates the along the feature dimension n 1. (iii) It repeats the procedure for each of the n 2 filters independently. We will show the following using direct manipulation of the properties of HO BID2. DISPLAYFORM4 Note that Equation FORMULA1 shows that features are aggregated along the n 1 dimension. Now, by showing that A (:, i, :, :) HO X l−1 (i, :, :, :) performs n 2 independent 2D convolutions along on the i th channel, the Lemma 1 is proven. For ease of notation, consider two tensors U ∈ R n2×1×W ×H and Y ∈ R 1×1×W ×H then we have the following: DISPLAYFORM5 2D-Inverse Fourier Transform with stride of of n2 DISPLAYFORM6 Note that Ĝ is the elementwise product of the 2D Discrete Fourier Transform between a feature of an input activation Y and the 2D Discrete Fourier Transform of every filter of the n 2 in Y. Since DISPLAYFORM7 is the inverse 2D Fourier transform along each of the n 2 filters ing in Ĝ. Thus U HO Y performs 2D convolutions independently along each of the n 2 filters, combined with; thus, Lemma 1 is proven. DISPLAYFORM8 Lemma 3. For τ -nice Prox-SG, DISPLAYFORM9 is an unbiased estimator to F (X).Proof. DISPLAYFORM10 The first equality follows by introducing an indicator function where 1 i∈S = 1 if i ∈ S and zero otherwise. The last equality follows from the uniformity across elements of the τ -nice S.From Lemma 3, and with zero initialization it follows that DISPLAYFORM11 is an unbiased estimator of ∇F (X) X =0. The last iteration follows by noting that A = A H H. Therefore, the first iteration of τ -nice Prox-SGD with zero initialization and unit step size is: DISPLAYFORM12 Note that the previous stochastic sum in with τ = (1 − p)n 1 can be reparameterized as follows: DISPLAYFORM13 where M ∈ R n1×1×W ×H is a mask tensor. Note that since τ = (1 − p)n 1, M has exactly pn 1 slices M(i, :, :, :) that are completely zero. This equivalent to a dropout layer where the layer drops exactly pn 1 input activations. It follows that is equivalent to a forward pass through a BerDropout p layer followed by a linear layer and non-linear activation. 
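The unbiasedness step in this proof (Lemma 3) is easy to check numerically: averaging the τ-nice estimate (n1/τ) Σ_{i∈S} over many sampled subsets S recovers the full sum, which is exactly what the equivalent BerDropout_p mask computes in expectation. A small self-contained check for the fully-connected case (W = H = 1):

```python
import torch

torch.manual_seed(0)
n1, n2, p = 64, 16, 0.5
tau = int((1 - p) * n1)
W, x = torch.randn(n2, n1), torch.randn(n1)

full = W @ x
acc = torch.zeros(n2)
trials = 20000
for _ in range(trials):
    S = torch.randperm(n1)[:tau]          # tau-nice sampling: uniform subsets of size tau
    mask = torch.zeros(n1)
    mask[S] = n1 / tau                    # dropout mask zeroing exactly p * n1 inputs, rescaled
    acc += W @ (mask * x)                 # forward pass through BerDropout_p then the linear map
print((acc / trials - full).abs().max())  # small, and shrinking as the number of trials grows
```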
H PROOF OF PROPOSITION 2 mS2GD BID16 with zero initialization at the first epoch defines the following update: DISPLAYFORM14 With zero initialization at the first epoch we have Y = 0, therefore DISPLAYFORM15 I RANDOMIZED COORDINATE DESCENT PERMUTES DROPOUT AND LINEAR LAYER We present an additional insight into the role of stochastic solvers in network design. In particular, we show that performing randomized coordinate descent (RCD) on the objective, ignoring its finite sum structure, is equivalent to a linear transformation followed by BerDropout p and a non-linear activation. That is, performing RCD permutes the order of the linear transformation and dropout. For ease of notation, we show this under the special case of fully connected layers. Proposition 6. A single iteration of Randomized Coordinate Descent, e.g. NSync, with τ -nice sampling of coordinates with τ = (1 − p)n 2, unit step sizes along each partial derivative, and with zero initialization is equivalent to: Prox 1 L g i∈S A(i, :, :, :) HO X l−1 + B(i, :, :, :), which is equivalent to a forward pass through a linear layer followed by a BerDropout p layer (that drops exactly n 2 p output activations) followed by a non-linear activation. Proof. We provide a sketch of the proof on the simple quadratic F (x) = 1 2 A x − x l−1 2 − b x where the linear layer is a fully-connected layer. Considering a randomized coordinate descent, e.g. NSync, with τ -nice sampling of the coordinates we have the following: DISPLAYFORM16 Note that e i is a vector of all zeros except the i th coordinate, which is equal to 1. Moreover, since the step sizes along each partial derivative are 1, v = 1. The resulting update is equivalent to a forward pass through a linear layer followed by a BerDropout p layer and a non-linear activation.
A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design.
1,779
scitldr
Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code. Deep networks are emerging as components of a number of revolutionary technologies, including image recognition , speech recognition, and driving assistance . Unlocking the full promise of such applications requires a system perspective where task performance, throughput, energy-efficiency, and compactness are all critical considerations to be optimized through co-design of algorithms and deployment hardware. Current research seeks to develop methods for creating deep networks that maintain high accuracy while reducing the precision needed to represent their activations and weights, thereby reducing the computation and memory required for their implementation. The advantages of using such algorithms to create networks for low precision hardware has been demonstrated in several deployed systems (; ;). It has been shown that low precision networks can be trained with stochastic gradient descent by updating high precision weights that are quantized, along with activations, for the forward and backward pass . This quantization is defined by a mapping of real numbers to the set of discrete values supported by a given low precision representation (often integers with 8-bits or less). We would like a mapping for each quantized layer that maximizes task performance, but it remains an open question how to optimally achieve this. To date, most approaches for training low precision networks have employed uniform quantizers, which can be configured by a single step size parameter (the width of a quantization bin), though more complex nonuniform mappings have been considered . Early work with low precision deep networks used a simple fixed configuration for the quantizer , while starting with , later work focused on fitting the quantizer to the data, either based on statistics of the data distribution (; ;) or seeking to minimize quantization error during training (c;). Most recently, work has focused on using backpropagation with , FAQ , LQ-Nets , PACT (b), Regularization (c), and NICE (stochastic gradient descent to learn a quantizer that minimizes task loss (; ; b; a; ; ;). While attractive for their simplicity, fixed mapping schemes based on user settings place no guarantees on optimizing network performance, and quantization error minimization schemes might perfectly minimize quantization error and yet still be non optimal if a different quantization mapping actually minimizes task error. 
Learning the quantization mapping by seeking to minimize task loss is appealing to us as it directly seeks to improve on the metric of interest. However, as the quantizer itself is discontinuous, such an approach requires approximating its gradient, which existing methods have done in a relatively coarse manner that ignore the impact of transitions between quantized states (b; a;). Here, we introduce a new way to learn the quantization mapping for each layer in a deep network, Learned Step Size Quantization (LSQ), that improves on prior efforts with two key contributions. First, we provide a simple way to approximate the gradient to the quantizer step size that is sensitive to quantized state transitions, arguably providing for finer grained optimization when learning the step size as a model parameter. Second, we propose a simple heuristic to bring the magnitude of step size updates into better balance with weight updates, which we show improves convergence. The overall approach is usable for quantizing both activations and weights, and works with existing methods for backpropagation and stochastic gradient descent. Using LSQ to train several network architectures on the ImageNet dataset, we demonstrate significantly better accuracy than prior quantization approaches (Table 1) and, for the first time that we are aware of, demonstrate the milestone of 3-bit quantized networks reaching full precision network accuracy (Table 4). We consider deep networks that operate at inference time using low precision integer operations for computations in convolution and fully connected layers, requiring quantization of the weights and activations these layers operate on. Given data to quantize v, quantizer step size s, the number of positive and negative quantization levels Q P and Q N, respectively, we define a quantizer that computesv, a quantized and integer scaled representation of the data, andv, a quantized representation of the data at the same scale as v:v Here, clip(z, r 1, r 2) returns z with values below r 1 set to r 1 and values above r 2 set to r 2, and z rounds z to the nearest integer. Given an encoding with b bits, for unsigned data (activations) Q N = 0 and Q P = 2 b − 1 and for signed data (weights) Q N = 2 b−1 and Q P = 2 b−1 − 1. For inference,w andx values can be used as input to low precision integer matrix multiplication units underlying convolution or fully connected layers, and the output of such layers then rescaled by the step size using a relatively low cost high precision scalar-tensor multiplication, a step that can potentially be algebraically merged with other operations such as batch normalization (Figure 1). LSQ provides a means to learn s based on the training loss by introducing the following gradient through the quantizer to the step size parameter: This gradient is derived by using the straight through estimator to approximate the gradient through the round function as a pass through operation (though leaving the round itself in place for the purposes of differentiating down stream operations), and differentiating all other operations in Equations 1 and 2 normally. This gradient differs from related approximations (Figure 2), which instead either learn a transformation of the data that occurs completely prior to the discretization itself , or estimate the gradient by removing the round operation from the forward equation, algebraically canceling terms, and then differentiating such that ∂v /∂s = 0 where −Q N < v /s < Q P (b; a). 
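The quantizer of Equations 1-2 and the step size gradient it induces are straightforward to write down. The sketch below (function names are ours) keeps round() in the forward pass and, following the straight-through treatment described above, differentiates everything else normally, which gives the piecewise expression coded in dvhat_ds: -v/s + round(v/s) between the clip points, and -Q_N or Q_P beyond them.

```python
import torch

def quantize(v, s, Q_N, Q_P):
    v_bar = torch.round(torch.clamp(v / s, -Q_N, Q_P))   # Eq. 1: integer-valued representation
    v_hat = v_bar * s                                     # Eq. 2: back at the scale of v
    return v_bar, v_hat

def dvhat_ds(v, s, Q_N, Q_P):
    """Piecewise d v_hat / d s under the straight-through treatment of round()."""
    u = v / s
    inner = -u + torch.round(u)                           # -v/s + round(v/s) between the clip points
    return torch.where(u <= -Q_N, torch.full_like(u, -float(Q_N)),
           torch.where(u >= Q_P, torch.full_like(u, float(Q_P)), inner))

v = torch.linspace(-3.0, 3.0, 13)
print(dvhat_ds(v, torch.tensor(1.0), Q_N=0, Q_P=3))       # the setting plotted in Figure 2B for LSQ
```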
In both such previous approaches, the relative proximity of v to the transition point between quantized states does not impact the gradient to the quantization parameters. However, one can reason that the closer a given v is to a quantization transition point, the more likely it is to change its quantization bin (v̄) as a result of a learned update to s (since a smaller change in s is required), thereby resulting in a large jump in v̂. Thus, we would expect ∂v̂/∂s to increase as the distance from v to a transition point decreases, and indeed we observe this relationship in the LSQ gradient. Figure 2: Given s = 1, Q N = 0, Q P = 3, A) quantizer output and B) gradients of the quantizer output with respect to step size, s, for LSQ, or a related parameter controlling the width of the quantized domain (equal to s(Q P + Q N)) for QIL and PACT (b). The gradient employed by LSQ is sensitive to the distance between v and each transition point, whereas the gradient employed by QIL is sensitive only to the distance from quantizer clip points, and the gradient employed by PACT (b) is zero everywhere below the clip point. Here, we demonstrate that networks trained with the LSQ gradient reach higher accuracy than those trained with the QIL or PACT gradients in prior work. It is appealing that this gradient naturally falls out of our simple quantizer formulation and use of the straight through estimator for the round function. For this work, each layer of weights and each layer of activations has a distinct step size, represented as an fp32 value, initialized to 2 |v| / √ Q P (with |v| taken as the mean absolute value), computed on either the initial weights values or the first batch of activations, respectively. It has been shown that good convergence is achieved during training where the ratio of average update magnitude to average parameter magnitude is approximately the same for all weight layers in a network . Once learning rate has been properly set, this helps to ensure that all updates are neither so large as to lead to repeated overshooting of local minima, nor so small as to lead to unnecessarily long convergence time. Extending this reasoning, we consider that each step size should also have its update magnitude to parameter magnitude proportioned similarly to that of weights. Thus, for a network trained on some loss function L, the ratio R = (∇ s L / s) / (‖∇ w L‖ / ‖w‖) should on average be near 1, where ‖z‖ denotes the l 2 -norm of z. However, we expect the step size parameter to be smaller as precision increases (because the data is quantized more finely), and step size updates to be larger as the number of quantized items increases (because more items are summed across when computing its gradient). To correct for this, we multiply the step size loss by a gradient scale, g, where for weight step size g = 1 / √ N W Q P and for activation step size g = 1 / √ N F Q P, where N W is the number of weights in a layer and N F is the number of features in a layer. In Section 3.4 we demonstrate that this improves trained accuracy, and we provide reasoning behind the specific scales chosen in Section A of the Appendix. Model quantizers are trained with LSQ by making their step sizes learnable parameters with loss gradient computed using the quantizer gradient described above, while other model parameters can be trained using existing techniques.
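A small sketch of the two scalar recipes just described, as we read them: the step size is initialized from the data as 2|v|/√Q_P (taking |v| to be the mean absolute value of the initial weights or of the first activation batch), and its loss gradient is scaled by g = 1/√(N_W Q_P) for a weight quantizer or 1/√(N_F Q_P) for an activation quantizer.

```python
import math
import torch

def init_step_size(v: torch.Tensor, Q_P: int) -> float:
    return 2.0 * v.abs().mean().item() / math.sqrt(Q_P)

def grad_scale_value(num_elements: int, Q_P: int) -> float:
    # num_elements = number of weights N_W (weight step size) or features N_F (activation step size)
    return 1.0 / math.sqrt(num_elements * Q_P)

w = torch.randn(256, 128)                      # a weight tensor from the pretrained model
Q_P = 2 ** (2 - 1) - 1                         # 2-bit signed weights: Q_P = 1
s0 = init_step_size(w, Q_P)
g = grad_scale_value(w.numel(), Q_P)
```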
Here, we employ a common means of training quantized networks , where full precision weights are stored and updated, quantized weights and activations are used for forward and backward passes, the gradient through the quantizer round function is computed using the straight through estimator such that and stochastic gradient descent is used to update parameters. For simplicity during training, we usev as input to matrix multiplication layers, which is algebraically equivalent to the previously described inference operations. We set input activations and weights to either 2-, 3-, 4-, or 8-bit for all matrix multiplication layers except the first and last, which always use 8-bit, as making the first and last layers high precision has become standard practice for quantized networks and demonstrated to provide a large benefit to performance. All other parameters are represented using fp32. All quantized networks are initialized using weights from a trained full precision model with equivalent architecture before fine-tuning in the quantized space, which is known to improve performance (; ;). Networks were trained with a momentum of 0.9, using a softmax cross entropy loss function, and cosine learning rate decay without restarts . Under the assumption that the optimal solution for 8-bit networks is close to the full precision solution , 8-bit networks were trained for 1 epoch while all other networks were trained for 90 epochs. The initial learning rate was set to 0.1 for full precision networks, 0.01 for 2-, 3-, and 4-bit networks and to 0.001 for 8-bit networks. All experiments were conducted on the ImageNet dataset , using pre-activation ResNet, VGG with batch norm, or SqueezeNext . All full precision networks were trained from scratch, except for VGG-16bn, for which we used the pretrained version available in the PyTorch model zoo. Images were resized to 256 × 256, then a 224 × 224 crop was selected for training, with horizontal mirroring applied half the time. At test time, a 224 × 224 centered crop was chosen. We implemented and tested LSQ in PyTorch. We expect that reducing model precision will reduce a model's tendency to overfit, and thus also reduce the regularization in the form of weight decay necessary to achieve good performance. To investigate this, we performed a hyperparameter sweep on weight decay for ResNet-18 (Table 2), and indeed found that lower precision networks reached higher accuracy with less weight decay. Performance was improved by reducing weight decay by half for the 3-bit network, and reducing it by a quarter for the 2-bit network. We used these weight decay values for all further experiments. We trained several networks using LSQ and compare accuracy with other quantized networks and full precision baselines (Table 1). To facilitate comparison, we only consider published models that quantize all convolution and fully connected layer weights and input activations to the specified precision, except for the first and last layers which may use higher precision (as for the LSQ models). In some cases, we report slightly higher accuracy on full precision networks than in their original publications, which we attribute to our use of cosine learning rate decay . We found that LSQ achieved a higher top-1 accuracy than all previous reported approaches for 2-, 3-and 4-bit networks with the architectures considered here. For nearly all cases, LSQ also achieved the best-to-date top-5 accuracy on these networks, and best-to-date accuracy on 8-bit versions of these networks. 
In most cases, we found no accuracy advantage to increasing precision from 4-bit to 8-bit. It is worth noting that the next best low precision method used progressive fine tuning (sequentially training a full precision to 5-bit model, then the 5-bit model to a 4-bit model, and so on), significantly increasing training time and complexity over our approach which fine tunes directly from a full precision model to the precision of interest. It is interesting to note that when comparing a full precision to a 2-bit precision model, top-1 accuracy drops only 2.9 for ResNet-18, but 14.0 for SqueezeNext-23-2x. One interpretation of this is that the SqueezeNext architecture was designed to maximize performance using as few parameters as possible, which may have placed it at a design point extremely sensitive to reductions in precision. For a model size limited application, it is important to choose the highest performing model that fits within available memory limitations. To facilitate this choice, we plot here network accuracy against corresponding model size (Figure 3). We can consider the frontier of best performance for a given model size of the architectures considered here. On this metric, we can see that 2-bit ResNet-34 and ResNet-50 networks offer an absolute advantage over using a smaller network, but with higher precision. We can also note that at all precisions, VGG-16bn exists below this frontier, which is not surprising as this network was developed prior to a number of recent innovations in achieving higher performance with fewer parameters. Figure 3: Accuracy vs. model size for the networks considered here show some 2-bit networks provide the highest accuracy at a given model size. Full precision model sizes are inset for reference. To demonstrate the impact of the step size gradient scale (Section 2.2), we measured R (see Equation 4) averaged across 500 iterations in the middle of the first training epoch for ResNet-18, using different step size gradient scales (the network itself was trained with the scaling as described in the methods to avoid convergence problems). With no scaling, we found that relative to parameter size, updates to step size were 2 to 3 orders of magnitude larger than updates to weights, and this imbalance increased with precision, with the 8-bit network showing almost an order of magnitude greater imbalance than the 2-bit network (Figure 4, left). Adjusting for the number of weights per layer (g = 1 / √ N W), the imbalance between step size and weights largely went away, through the imbalance across precision remained (Figure 4, center). Adjusting for the number of number of weights per layer and precision (g = 1 / √ N W Q P), this precision dependent imbalance was largely removed as well (Figure 4, right). We considered network accuracy after training a 2-bit ResNet-18 using different step size gradient scales (Table 3). Using the network with the full gradient scale (g Step size gradient scale for weight and activation step size respectively) as baseline, we found that adjusting only for weight and feature count led to a 0.3 decrease in top-1 accuracy, and when no gradient scale was applied the network did not converge unless we dropped the initial learning rate. Dropping the initial learning rate in multiples of ten, the best top-1 accuracy we achieved using no gradient scale was 3.4 below baseline, using an initial learning rate of 0.0001. 
Finally, we found that using the full gradient scaling with an additional ten-fold increase or decrease also reduced top-1 accuracy. Overall, this suggests a benefit to our chosen heuristic for scaling the step size loss gradient. We chose to use cosine learning rate decay in our experiments as it removes the need to select learning rate schedule hyperparameters, is available in most training frameworks, and does not increase training time. To facilitate comparison with in other publications that use step-based learning rate decay, we trained a 2-bit ResNet-18 model with LSQ for 90 epochs, using an initial learning rate of 0.01, which was multiplied by 0.1 every 20 epochs. This model reached a top-1 accuracy of 67.2, a reduction of 0.4 from the equivalent model trained with cosine learning rate decay, but still marking an improvement of 1.5 over the next best training method (see Table 1). We next sought to understand whether LSQ learns a solution that minimizes quantization error (the distance betweenv and v on some metric), despite such an objective not being explicitly encouraged. For this purpose, for a given layer we define the final step size learned by LSQ asŝ and let S be the set of discrete values {0.01ŝ, 0.02ŝ, ..., 20.00ŝ}. For each layer, on a single batch of test data we computed the value of s ∈ S that minimizes mean absolute error, ) where p and q are probability distributions. For purposes of relative comparison, we ignore the first term of Kullback-Leibler divergence, as it does not depend onv, and approximate the second term as −E[log(q(v(s)))], where the expectation is over the sample distribution. For a 2-bit ResNet-18 model we foundŝ = 0.949 ± 0.206 for activations andŝ = 0.025 ± 0.019 for weights (mean ± standard deviation). The percent absolute difference betweenŝ and the value of s that minimizes quantization error, averaged across activation layers was 50% for mean absolute error, 63% for mean square error, and 64% for Kullback-Leibler divergence, and averaged across weight layers, was 47% for mean absolute error, 28% for mean square error, and 46% for Kullback-Leibler divergence. This indicates that LSQ learns a solution that does not in fact minimize quantization error. As LSQ achieves better accuracy than approaches that directly seek to minimize quantization error, this suggests that simply fitting a quantizer to its corresponding data distribution may not be optimal for task performance. To better understand how well low precision networks can reproduce full precision accuracy, we combined LSQ with same-architecture knowledge distillation, which has been shown to improve low precision network training . Specifically, we used the distillation loss function of with temperature of 1 and equal weight given to the standard loss and the distillation loss (we found this gave comparable to weighting the the distillation loss two times more or less than the standard loss on 2-bit ResNet-18). The teacher network was a trained full precision model with frozen weights and of the same architecture as the low precision network trained. As shown in Table 4, this improved performance, with top-1 accuracy increasing by up to 1.1 (3-bit ResNet-50), and with 3-bit networks reaching the score of the full precision baseline (see Table 1 for comparison). 
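A sketch of the same-architecture distillation setup described above: the low precision student is trained against a frozen full precision teacher with a Hinton-style distillation loss at temperature 1, weighted equally with the usual cross-entropy loss. Function and argument names here are ours.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=1.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd          # equal weight on standard and distillation terms

# Usage inside the training loop (teacher weights frozen, no gradient):
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```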
As a control, we also used this approach to distill from the full precision teacher to a full precision (initially untrained) student with the same architecture, which did not lead to an improvement in the student network accuracy beyond training the student alone. These reinforce previous work showing that knowledge-distillation can help low precision networks catch up to full precision performance . Table 4: Accuracy for low precision networks trained with LSQ and knowledge distillation, which is improved over using LSQ alone, with 3-bit networks reaching the accuracy of full precision (32-bit) baselines (shown for comparison). The presented here demonstrate that on the ImageNet dataset across several network architectures, LSQ exceeds the performance of all prior approaches for creating quantized networks. We found best performance when rescaling the quantizer step size loss gradient based on layer size and precision. Interestingly, LSQ does not appear to minimize quantization error, whether measured using mean square error, mean absolute error, or Kullback-Leibler divergence. The approach itself is simple, requiring only a single additional parameter per weight or activation layer. Although our goal is to train low precision networks to achieve accuracy equal to their full precision counterparts, it is not yet clear whether this goal is achievable for 2-bit networks, which here reached accuracy several percent below their full precision counterparts. However, we found that such 2-bit solutions for state-of-the-art networks are useful in that they can give the best accuracy for the given model size, for example, with an 8MB model size limit, a 2-bit ResNet-50 was better than a 4-bit ResNet-34 (Figure 3). This work is a continuation of a trend towards steadily reducing the number of bits of precision necessary to achieve good performance across a range of network architectures on ImageNet. While it is unclear how far it can be taken, it is noteworthy that the trend towards higher performance at lower precision strengthens the analogy between artificial neural networks and biological neural networks, which themselves employ synapses represented by perhaps a few bits of information and single bit spikes that may be employed in small spatial and/or temporal ensembles to provide low bit width data representation. Analogies aside, reducing network precision while maintaining high accuracy is a promising means of reducing model size and increasing throughput to provide performance advantages in real world deployed deep networks. We compute our gradient scale value by first estimating R (Equation 4), starting with the simple heuristic that for a layer with N W weights To develop this approximation, we first note that the expected value of an l 2 -norm should grow with the square root of the number of elements normalized. Next, we assume that where Q P = 1, step size should be approximately equal to average weight magnitude so as to split the weight distribution into zero and non zero values in a roughly balanced fashion. Finally, we assume that for larger Q P, step size should be roughly proportional to 1 /Q P, so that as the number of available quantized states increases, data between the clip points will be quantized more precisely, and the clip points themselves (equal to sQ N and sQ P) will move further out to better encode outliers. We also note that, in the expectation, ∇ w L and ∇ s L are of approximately the same order. 
This can be shown by starting from the chain rule then assuming ∂ŵi /∂s is reasonably close to 1 (see for example Figure 2), and treating all ∂L /∂ŵi as uncorrelated zero-centered random variables, to compute the following expectation across weights: By assuming ∂ŵ /∂w = 1 for most weights, we similarly approximate Bringing all of this together, we can then estimate Knowing this expected imbalance, we compute our gradient scale factor for weights by simply taking the inverse of R, so that g is set to 1 / √ N W Q P. As most activation layers are preceded by batch normalization , and assuming updates to the learned batch normalization scaling parameter is the primary driver of changes to pre-quantization activations, we can use a similar approach to the above to show that there is an imbalance between step size updates and update driven changes to activations that grows with the number of features in a layer, N F as well as Q P. Thus, for activation step size we set g to 1 / √ N F Q P. In this section we provide pseudocode to facilitate the implementation of LSQ. We assume the use of automatic differentiation, as supported by a number of popular deep learning frameworks, where the desired operations for the training forward pass are coded, and the automatic differentiation engine computes the gradient through those operations in the backward pass. Our approach requires two functions with non standard gradients, gradscale (Function 1) and roundpass (Function 2). We implement the custom gradients by assuming a function called detach that returns its input (unmodified) during the forward pass, and whose gradient during the backward pass is zero (thus detaching itself from the backward graph). This function is used in the form: so that in the forward pass, y = x 1 (as the x 2 terms cancel out), while in the backward pass ∂L /∂x1 = 0 (as detach blocks gradient propagation to x 1) and ∂L /∂x2 = ∂L /∂y. We also assume a function nf eatures that given an activation tensor, returns the number of features in that tensor, and nweights that given a weight tensor, returns the number of weights in that tensor. Finally, the above are used to implement a function called quantize, which quantizes weights and activations prior to their use in each convolution or fully connected layer. The pseudocode provided here is chosen for simplicity of implementation and broad applicability to many training frameworks, though more compute and memory efficient approaches are possible. This example code assumes activations are unsigned, but could be modified to quantize signed activations.
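A PyTorch rendering of the pseudocode this appendix describes, as we understand it (names and the per-sample feature count are our assumptions): grad_scale and round_pass use the detach construction y = (x1 - x2).detach() + x2, which evaluates to x1 in the forward pass while routing the backward gradient through x2, and quantize applies the gradient scale to s before the clip-round-rescale of Equations 1-2.

```python
import math
import torch

def grad_scale(x, scale):
    y_fwd, y_bwd = x, x * scale
    return (y_fwd - y_bwd).detach() + y_bwd        # forward: x; backward: gradient multiplied by scale

def round_pass(x):
    y_fwd, y_bwd = torch.round(x), x
    return (y_fwd - y_bwd).detach() + y_bwd        # forward: round(x); backward: identity (straight through)

def quantize(v, s, bits, is_activation):
    if is_activation:                               # unsigned data
        Q_N, Q_P = 0, 2 ** bits - 1
        g = 1.0 / math.sqrt(v[0].numel() * Q_P)     # 1 / sqrt(N_F * Q_P); N_F taken from one sample (assumption)
    else:                                           # signed data (weights)
        Q_N, Q_P = 2 ** (bits - 1), 2 ** (bits - 1) - 1
        g = 1.0 / math.sqrt(v.numel() * Q_P)        # 1 / sqrt(N_W * Q_P)
    s = grad_scale(s, g)
    v_bar = round_pass(torch.clamp(v / s, -Q_N, Q_P))
    return v_bar * s                                # v_hat, fed to the matmul layer during training

# Example: quantizing a weight tensor to 3 bits with a learnable step size.
w = torch.randn(64, 128)
step = torch.nn.Parameter(torch.tensor(2.0 * w.abs().mean().item() / math.sqrt(3.0)))
w_hat = quantize(w, step, bits=3, is_activation=False)
```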
A method for learning quantization configuration for low precision networks that achieves state of the art performance for quantized networks.
1,780
scitldr
There have been several studies recently showing that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, ing in models which fail to generalize to out-of-domain datasets, and are likely to perform poorly in real-world scenarios. We propose several learning strategies to train neural models which are more robust to such biases and transfer better to out-of-domain datasets. We introduce an additional lightweight bias-only model which learns dataset biases and uses its prediction to adjust the loss of the base model to reduce the biases. In other words, our methods down-weight the importance of the biased examples, and focus training on hard examples, i.e. examples that cannot be correctly classified by only relying on biases. Our approaches are model agnostic and simple to implement. We experiment on large-scale natural language inference and fact verification datasets and their out-of-domain datasets and show that our debiased models significantly improve the robustness in all settings, including gaining 9.76 points on the FEVER symmetric evaluation dataset, 5.45 on the HANS dataset and 4.78 points on the SNLI hard set. These datasets are specifically designed to assess the robustness of models in the out-of-domain setting where typical biases in the training data do not exist in the evaluation set. Recent neural models (; ;) have achieved high and even near human-performance on several large-scale natural language understanding benchmarks. However, it has been demonstrated that neural models tend to rely on existing idiosyncratic biases in the datasets, and leverage superficial correlations between the label and existing shortcuts in the training dataset to perform surprisingly well 1, without learning the underlying task (; ; ; ; ;). For instance, natural language inference (NLI) consists of determining whether a hypothesis sentence (There is no teacher in the room) can be inferred from a premise sentence (Kids work at computers with a teacher's help) 2 . However, recent work has demonstrated that large-scale NLI benchmarks contain annotation artifacts; certain words in the hypothesis are highly indicative of inference class that allow models with poor premise grounding to perform unexpectedly well . As an example, in some NLI benchmarks, negation words such as "nobody", "no", and "not" in the hypothesis are often highly correlated with the contradiction label. As a consequence, NLI models do not need to learn the true relationship between the premise and hypothesis and instead can rely on statistical cues, such as learning to link negation words with the contradiction label. As a of the existence of such biases, models exploiting statistical shortcuts during training often perform poorly on out-of-domain datasets, especially if they are carefully designed to limit the spurious cues. To allow proper evaluation, recent studies have tried to create new evaluation datasets that do not contain such biases . Unfortunately, it is hard to avoid spurious statistical cues in the construction of large-scale benchmarks, and collecting 1 We use biases, heuristic patterns or shortcuts interchangeably. 2 The given sentences are in the contradictory relation and the hypothesis cannot be inferred from the premise. new datasets is costly . It is therefore crucial to develop techniques to reduce the reliance on biases during the training of the neural models. 
In this paper, we propose several end-to-end debiasing techniques to adjust the cross-entropy loss to reduce the biases learned from datasets, which work by down-weighting the biased examples so that the model focuses on learning hard examples. Figure 1 illustrates an example of applying our strategy to prevent an NLI model from predicting the labels using existing biases in the hypothesis. Our strategy involves adding a bias-only branch f B on top of the base model f M during training (In case of NLI, the bias-only model only uses the hypothesis). We then compute the combination of the two models f C in a way to motivate the base model to learn different strategies than the ones used by the bias-only branch f B. At the end of the training, we remove the bias-only classifier and use the predictions of the base model. We propose three main debiasing strategies, detailed in Section 2.2. In our first two proposed methods, the combination is done with an ensemble method which combines the predictions of the base and the bias-only models. The training loss of the base model is then computed on the output of this combined model f C. This has the effect of reducing the loss going from the combined model to the base model for the examples which the bias-only model classifies correctly. For the third method, the bias-only predictions are used to directly weight the loss of the base model, explicitly modulating the loss depending on the accuracy of the bias-only model. All strategies work by allowing the base model to focus on learning the hard examples, by preventing it from learning the biased examples. Our approaches are simple and highly effective. They require training a simple classifier on top of the base model. Furthermore, our methods are model agnostic and general enough to be applicable for addressing common biases seen in several datasets in different domains. We evaluate our models on challenging benchmarks in textual entailment and fact verification. For entailment, we run extensive experiments on HANS (Heuristic Analysis for NLI Systems) , and hard NLI sets of Stanford Natural Language Inference (SNLI) and MultiNLI (MNLI) datasets (. We additionally construct hard MNLI datasets from MNLI development sets to facilitate the out-of-domain evaluation on this dataset 3 . Furthermore, we evaluate our fact verification models on FEVER Symmetric test set . The selected datasets are highly challenging and have been carefully designed to be unbiased to allow proper evaluation of the out-of-domain performance of the models. We show that including our strategies on training baseline models including BERT provide substantial gain on out-of-domain performance in all the experiments. In summary, we make the following contributions: 1) Proposing several debiasing strategies to train neural models that make them more robust to existing biases in the dataset. 2) An empirical evaluation of the proposed methods on two large-scale NLI benchmarks and obtaining substantial gain on their challenging out-of-domain data, including 5.45 points on HANS and 4.78 points on SNLI hard set. 3) Evaluating our models on fact verification, obtaining 9.76 points gain on FEVER symmetric test set, improving the of prior work by 4.65 points. To facilitate future work, we release our datasets and code. Problem formulation We consider a general multi-class classification problem. 
Given a dataset consisting of the input data x i ∈ X, and labels y i ∈ Y, the goal of the base model is to learn a mapping f M parameterized by θ M which computes the predictions over the label space given the input data, shown as f M: X → R |Y|. Our goal is to optimize θ M parameters such that we build a model which is more resistant to benchmark biases to improve its robustness to domain changes when the typical biases observed in the training data do not exist in the evaluation dataset. The key idea of our approach, depicted in Figure 1 is first to identify the dataset biases and heuristic patterns which the base model is susceptible to relying on. Then, we use a bias-only branch to capture these biases. We propose several strategies to incorporate the bias-only knowledge into the training of the base model to make a robust version of it. After training we remove the biasonly model and use the predictions of the base model. In this section, we explain each of these components. We assume that we do not have access to any data from the out-of-domain dataset, so we need to know a priori about the possible types of shortcut patterns we would like the base model to avoid relying on them. Once these shortcut patterns are identified, we train a bias-only model designed to capture the identified biases which only uses the biased features. For instance, it has been shown that a hypothesis-only model in the large-scale NLI datasets can correctly classify the majority of samples using the artifacts . Therefore, our bias-only model for NLI only uses hypothesis sentences. But note that the bias-only model can, in general, have any form, and is not limited to models which are using only a part of input data. Let x b i ∈ X b be biased features of x i which are predictive of y i. We then formalize this bias-only model as a mapping f B: X b → R |Y| parameterized by θ B trained using cross-entropy loss L B: where a i is the one-hot representation of the true label for the i th example. In the next section, we explain how we use the bias-only model to make a robust version of the base model. We propose several strategies to incorporate the bias-only f B knowledge into training of the base model f M and update its parameters θ M using the obtained loss L C of the combined classifier f C. All these strategies have the form illustrated in Figure 1, where the predictions of the bias-only model are combined with either the predictions of the base model or its error to down-weight the loss from the biased examples, thereby affecting the error backpropagated into the base model. As also illustrated in Figure 1, it is often convenient for the bias-only model to share parameters with the base model, such as sharing a sentence encoder. To prevent the base model from learning the biases, the bias-only loss L B is not back-propagated to these shared parameters of the base model. To accommodate this sharing, the bias-only and the base models are trained together. Next, we explain how the loss of the combined classifier, L C, is computed for each of our debiasing methods. Our first approach is based on the idea of the product of experts ensemble method : "It is possible to combine multiple probabilistic models of the same data by multiplying the probabilities together and then renormalizing.". 
Here, we use this notion to combine the bias-only and base model predictions by computing the element-wise product between their predictions as f B (x b i) ⊙ f M (x i). We compute this combination in the logarithmic space, which works better in practice: f C (x i, x b i) = log softmax(f M (x i)) + log softmax(f B (x b i)). The key intuition behind this model is to combine the probability distributions of the bias-only and the base model to allow them to make predictions based on different characteristics of the input; the bias-only branch covers prediction based on biases, and the base model focuses on learning the actual task. We then compute L C as the cross-entropy loss of the combined predictions f C. Then the base model parameters θ M are trained using the cross-entropy loss of the combined classifier f C. When this loss is backpropagated to base model parameters θ M, the predictions of the bias-only model decrease the updates for examples which it can accurately predict. Prior work proposed a model called RUBI to alleviate unimodal biases learned by Visual Question Answering (VQA) models. That study is limited to alleviating biases in VQA benchmarks. We, however, evaluate the effectiveness of their formulation together with our newly proposed variations in the natural language understanding context on several challenging NLU datasets. We first apply a sigmoid function to the bias-only model's predictions to obtain a mask containing an importance weight between 0 and 1 for each possible label. We then compute the element-wise product between the obtained mask and the base model's predictions: f C (x i, x b i) = f M (x i) ⊙ σ(f B (x b i)). The main intuition is to dynamically adjust the predictions of the base model to prevent the base model from leveraging the shortcuts. We note two properties of this loss. When the bias-only model correctly classifies the example, the mask increases the value of the correct prediction while it decreases the scores for other labels. As a result, the loss of biased examples is down-weighted. For the hard examples that cannot be correctly classified using the bias-only model, the obtained mask increases the score of the wrong answer. This, in turn, increases the contribution of hard examples and encourages the base model to learn the importance of correcting them. We additionally propose the following new variants of this model: 1. Computing the combination in logarithmic space, which we refer to as RUBI + log space. 2. Normalizing the output of the bias-only model, followed by the RUBI model, which we refer to as RUBI + normalize. As with our first method, we then update the parameters of the base model θ M by backpropagating the cross-entropy loss L C of the combined classifier. Focal loss was originally proposed to improve a single classifier by downweighting the well-classified points. We propose a novel variant of this loss, in which we leverage the bias-only branch's predictions to reduce the relative importance of the most biased examples and allow the model to focus on learning the hard examples. We define Debiased Focal Loss by scaling the cross-entropy loss of the base model for each example by the modulating factor (1 − softmax(f B (x b i))) γ, where γ is the focusing parameter, which impacts the down-weighting rate. When γ is set to 0, our Debiased Focal Loss is equivalent to the normal cross-entropy loss. For γ > 0, as the value of γ is increased, the effect of down-weighting is increased. We set γ = 2 through all experiments, which works well in practice, and avoid fine-tuning it further. We note the properties of the Debiased Focal Loss: When the example x i is unbiased and the bias-only branch does not do well, softmax(f B (x b i)) is small, therefore the scaling factor is close to 1, and the loss remains unaffected.
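A compact sketch (ours) of the three loss-combination strategies described above, given base model logits f_M and bias-only logits f_B for a batch of labeled examples; the exact normalizations and reductions in the authors' released code may differ.

```python
import torch
import torch.nn.functional as F

def product_of_experts_loss(f_M, f_B, labels):
    # Summing log-probabilities and letting cross_entropy re-apply log_softmax performs
    # the "multiply and renormalize" combination of the product of experts.
    log_combined = F.log_softmax(f_M, dim=-1) + F.log_softmax(f_B, dim=-1)
    return F.cross_entropy(log_combined, labels)

def rubi_loss(f_M, f_B, labels):
    masked = f_M * torch.sigmoid(f_B)               # element-wise importance mask on the base logits
    return F.cross_entropy(masked, labels)

def debiased_focal_loss(f_M, f_B, labels, gamma=2.0):
    p_bias = F.softmax(f_B, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
    ce = F.cross_entropy(f_M, labels, reduction="none")
    return ((1.0 - p_bias) ** gamma * ce).mean()     # down-weights the most biased examples

# In the full setup the bias-only branch (e.g. a hypothesis-only classifier) is trained jointly
# with its own cross-entropy loss L_B, which is not backpropagated into the shared encoder,
# per the training description above.
```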
As the sample is more biased and softmax (f B (x b i)) is closer to 1, the modulating factor approaches 0 and the loss for the most biased examples is down-weighted. For this debiasing strategy, Debiased Focal Loss is then used to update the parameters of the base model θ M. Note that this loss has a different form from that used for the first two methods. We provide experiments on two large-scale NLI datasets, namely SNLI and MNLI, and the FEVER dataset for our fact verification experiment, and evaluate the models' performance on their challenging unbiased evaluation datasets proposed very recently. In most of our experiments, we consider BERT 4 as our baseline, which is known to work well for these tasks, and additionally, we have included other baselines used in the prior work to compare against them. In all the experiments, we kept the hyperparameters of the baselines at their defaults. We include low-level details in the appendix. Dataset: The FEVER dataset contains claim-evidence pairs generated from Wikipedia. Recent work collects a new evaluation set for the FEVER dataset to avoid the idiosyncrasies observed in the claims of this benchmark. They make the original claim-evidence pairs of the FEVER evaluation dataset symmetric, by augmenting the dataset and making each claim and evidence appear with each label. Therefore, by balancing the artifacts, relying on cues from the claim to classify samples is equivalent to a random guess. The collected dataset is challenging and the performance of the models evaluated on this dataset drops significantly. Base models: We consider BERT as the baseline, which works the best on this dataset, and predicts the relations based on the concatenation of the claim and the evidence with a delimiter token (see Appendix A). The bias-only model predicts the labels using only claims as input. Results: Table 1 shows the results. The obtained improvement of our debiasing methods varies between 1.11-9.76 absolute points. The Product of experts and Debiased Focal Loss are highly effective, boosting the performance of the baseline model by 9.76 and 7.53 absolute points respectively, significantly surpassing the prior work. Datasets: We evaluate on the hard SNLI and MNLI datasets, which are the split of these datasets where a hypothesis-only model cannot correctly predict the labels. Prior studies show that the success of the recent textual entailment models is attributed to the biased examples, and the performance of these models is substantially lower on hard sets. Base models: We consider InferSent and BERT as our base models. We choose InferSent to be able to compare against the prior work (b). The bias-only model only uses the hypothesis to predict the labels (see Appendix B). To construct the MNLI hard sets, we train a classifier to predict the labels using only the hypothesis and consider the subset of the samples on which our trained hypothesis-only classifier failed as hard examples. Table 3 shows the results on the development sets and their corresponding hard sets. For the BERT baseline, on the MNLI matched hard dataset, the product of experts and RUBI+normalize improve the results the most, by 1.46 and 1.11 points. On MNLI mismatched hard, the Debiased Focal Loss and product of experts obtain 1.37 and 1.68 points gain respectively. For the InferSent baseline, on MNLI matched hard, the product of experts and RUBI improve the results by 2.34 and 0.94 points. On MNLI mismatched hard, the Product of experts and Debiased Focal Loss improve the results by 2.61 and 2.52 points. To comply with limited access to the submission system of MNLI, we evaluate only the best results of the baseline and our models on the test sets.
Table 4 shows the on the MNLI test and hard sets. Our product of expert model improves the performance on MNLI matched hard set by 0.93 points and 1.08 points on MNLI Mismatched hard set while maintaining the in-domain accuracy. show that NLI models can rely on superficial syntactic heuristics to perform the task. They introduce HANS dataset, which covers several examples on which the models employing the syntactic heuristics fail. Base model: We use BERT as our base model and train it on MNLI dataset. We consider several features for the bias-only model. The first three features are based on the syntactic heuristics proposed in: 1) Whether all the words in the hypothesis are included in the premise. 2) If the hypothesis is the contiguous subsequence of the premise. 3) If the hypothesis is a subtree in the premise's parse tree 4) The number of tokens shared between premise and hypothesis normalized by the number of tokens in the premise. We additionally include some similarity features: 5) The cosine similarity between premise and hypothesis tokens followed by mean and max-pooling. We consider the same weight for contradiction and neutral labels in the bias-only loss to allow the model to recognize entailment from not-entailment. During the evaluation, we map the neutral and contradiction labels to not-entailment. Results: As shown in Table 5, the Product of experts and Debiased Focal loss improve the the most by 5.45, 3.89 points. We provide the accuracy for each label on HANS dataset in Appendix C. To understand the impact of γ in Debiased Focal Loss, we train InferSent models with this loss for different values of γ on SNLI dataset and evaluate its performance on SNLI and SNLI hard sets. As illustrated in Figure 2, increasing γ focuses the loss on learning hard examples, and reduces the attention on learning biased examples. Consequently, the in-domain accuracy on SNLI is dropped but out-of-domain accuracy on SNLI hard set is increased. Results: Through extensive experiments on different datasets, our methods improve out-of-domain performance in all settings. Debiased Focal Loss and Product of experts models consistently obtain the highest gains. Within RUBI variations, RUBI+log space outperforms the other variations on SNLI with BERT baseline and HANS dataset. RUBI+normalize does better than the rest on FEVER experiment and MNLI matched hard set with BERT baseline. RUBI performs the best on SNLI and MNLI experiments with InferSent baseline, and MNLI mismatched hard with BERT baseline. As expected, improving the out-of-domain performance could come at the expense of the decreased in-domain performance, since the removed biases are useful for performing the in-domain task. This especially happens for Debiased Focal Loss, in which there is a trade-off between in-domain and out-of-domain performance as discussed depending on the parameter γ, and when the baseline model is not very powerful like InferSent. Our other models with BERT baseline consistently remain the in-domain performance. Biases in NLU benchmarks and other domains Recent studies have shown that large-scale NLU benchmarks contain biases.;; demonstrate that textual entailment models can rely on annotation artifacts and heuristic patterns to perform unexpectedly well. On ROC Stories corpus , show that considering only sample endings without story contexts performs exceedingly well. A similar phenomenon is observed in fact verification , argument reasoning comprehension , and reading comprehension . 
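As a concrete illustration of the hand-crafted bias-only features listed above for the HANS experiment, the sketch below computes the purely lexical ones (all-words overlap, contiguous subsequence, and premise-normalized token overlap). The parse-tree and embedding-similarity features are omitted since they require a parser and pretrained vectors, and the whitespace tokenization and set-based overlap count are assumptions.

def lexical_bias_features(premise_tokens, hypothesis_tokens):
    # Feature 1: every hypothesis word appears somewhere in the premise.
    all_in_premise = float(set(hypothesis_tokens) <= set(premise_tokens))
    # Feature 2: the hypothesis is a contiguous subsequence of the premise.
    n, m = len(premise_tokens), len(hypothesis_tokens)
    contiguous = float(any(premise_tokens[i:i + m] == hypothesis_tokens
                           for i in range(n - m + 1))) if m <= n else 0.0
    # Feature 4: tokens shared between premise and hypothesis,
    # normalized by the number of premise tokens.
    overlap = len(set(premise_tokens) & set(hypothesis_tokens)) / max(n, 1)
    return [all_in_premise, contiguous, overlap]

print(lexical_bias_features("the doctor visited the lawyer".split(),
                            "the doctor visited".split()))
# -> [1.0, 1.0, 0.6]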
Finally, several studies confirm biases in VQA datasets, leading to accurate question-only models ignoring visual content. Existing techniques to alleviate biases The most common strategy to date to address biases is to augment the datasets by balancing the existing cues . In another line of work, to address the shortcoming in Stanford Question Answering dataset , propose to create an adversarial dataset in which they insert adversarial sentences to the input paragraphs. However, collecting new datasets especially in large-scale is costly and it remains an unsatisfactory solution. It is, therefore, crucial to develop strategies to allow training models on the existing biased datasets, while improving their out-of-domain performance. propose to first compute the n-grams existing in the claims which are the most associated with each label. They then solve an optimization problem to assign a balancing weight to each training sample to alleviate the biases. In contrast, we propose several end-to-end debiasing strategies. Additionally, Belinkov et al. (2019a) propose adversarial techniques to remove from the sentence encoder the features which allow a hypothesis-only model to succeed. However, we believe that in general the features used by the hypothesis-only model can include some information necessary to perform the NLI task, and removing such information from the sentence representation can hurt the performance of the full model. Their approach consequently degrades the performance on hard SNLI dataset which is expected to be less biased. In contrast to their method, we propose to train a bias-only model to use its predictions to dynamically adapt the classification loss to reduce the importance of the most biased examples during training. Concurrently to our own work,; have also proposed to use the product of experts models. However, we have evaluated on new domains and datasets, and have proposed several different ensemble-based debiasing techniques. We propose several novel techniques to reduce biases learned by neural models. We introduce a bias-only model that is designed to capture biases and leverages the existing shortcuts in the datasets to succeed. Our debiasing strategies then work by adjusting the cross-entropy loss based on the performance of this bias-only model to focus learning on the hard examples and down-weight the importance of the biased examples. Our proposed debiasing techniques are model agnostic, simple and highly effective. Extensive experiments show that our methods substantially improve the model robustness to domain-shift, including 9.76 points gain on FEVER symmetric test set, 5.45 on HANS dataset and 4.78 points on SNLI hard set. Base model: InferSent uses a separate BiLSTM encoder to learn sentence representations for premise and hypothesis, it then combines these embeddings following and feeds them to the default nonlinear classifier. For InferSent we train all models for 20 epochs as default without using early-stopping. We use the default hyper-parameters and following, we set BiLSTM dimension to 512. We use the default nonlinear classifier with 512 and 512 hidden neurons with Tanh nonlinearity. For Bert model, we finetune the models for 3 epochs. Bias-only model For BERT model, we use the same shallow nonlinear classifier explained in Appendix A, and for the InferSent model, we use a shallow linear classifier with 512, and 512 hidden units. Base model: We finetune all the models for 3 epochs. 
We use a nonlinear classifier with 6 and 6 hidden units and Tanh nonlinearity. Results: Table 6 shows the performance for each label (entailment and non-entailment) on the HANS dataset and its individual heuristics.
We propose several general debiasing strategies to address common biases seen in different datasets and obtain substantially improved out-of-domain performance in all settings.
1,781
scitldr
Reconstruction of few-view x-ray Computed Tomography (CT) data is a highly ill-posed problem. It is often used in applications that require low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT. Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely deteriorated by artifacts and noise, especially when the number of x-ray projections is considerably low. This paper presents a deep network-driven approach to address extreme few-view CT by incorporating convolutional neural network-based inference into state-of-the-art iterative reconstruction. The proposed method interprets few-view sinogram data using attention-based deep networks to infer the reconstructed image. The predicted image is then used as prior knowledge in the iterative algorithm for final reconstruction. We demonstrate effectiveness of the proposed approach by performing reconstruction experiments on a chest CT dataset. Computed Tomography (CT) reconstruction is an inverse problem where images are reconstructed from a collection of multiple x-ray projections known as sinogram. Conventional CT imaging systems use densely sampled x-ray projections (roughly equal to one projection per detector column) with a full angular range (180-360 degrees). Unlike the conventional CT setup, on the other hand, some CT systems use different imaging configurations that require rapid scanning or reduced radiation dose. In those cases, the CT imaging uses a small number of x-ray projections, referred to as few-view CT. Reconstructing images from a few x-ray projections becomes an extremely under-determined inverse problem, which in significant image degradation. The reconstructed images from extremely few-view sinogram measurement (10 views or less) are often characterized by severe artifacts and noise, even with state-of-the-art regularized iterative algorithms as well as with the widely used Filtered Backprojection (FBP). In recent years, deep learning-based approaches have been successfully applied to a number of image restoration, denoising, inpainting and other image processing applications. Methods in this category use perceptual information as well as contextual features to improve the image quality. In CT imaging applications, several deep convolutional neural network (CNN) approaches have been proposed to address different ill-conditioned CT reconstruction applications. Methods in proposed CNN-based approaches to improve the image quality for low-dose (sparse-view) imaging. These approaches aim to infer the noise distribution to generate a cleaner image from the noisy image. However, these methods do not employ the sinogram to ensure that the reconstructed image is consistent with the measurement. Gupta et al. proposed a method using a CNN-based projector for moderate sparse-view reconstruction (45 and 144 views). Anirudh et al. proposed a CNN-based sinogram completion approach to address limited-angle CT reconstruction. In this paper, we present a CNN inference-based reconstruction algorithm to address extremely few-view CT imaging scenarios. For the initial reconstruction, we employ a CNN-based inference model, based on CT-Net, that directly uses the input measurement (few-view sinogram data) to predict the reconstructed image. 
In the cases where the sinogram measurements are extremely undersampled, the images reconstructed by existing analytic and iterative methods may suffer from too much noise with little high frequency information, and the methods in may repair the missing or noisy part with perceptually created, but incorrect content. Thus, we pursue a method that directly uses the sinogram so that the reconstructed content is consistent with the input measurement, as an inverse problem. Furthermore, instead of performing the sinogram completion in optimized for limited-angle reconstruction, we propose to use the predicted image from the CNN inference model as an image prior in state-of-the-art iterative algorithms in order to improve the final reconstruction. Our experiments on a chest CT dataset show that the proposed model outperforms existing analytical and state-of-the-art iterative algorithms as well as the sinogram completion. The proposed approach is comprised of two stages: (a) CNN-driven inference, (b) iterative reconstruction with the image prior, as illustrated in Figure 1. The algorithm begins with CNN-based deep inferencing to predict the reconstructed images from few-view sinogram data. Then we perform an iterative algorithm with the predicted images as initial guess. For the CNN-based deep inferencing, we adopt CT-Net proposed in but modify it for few-view CT applications. Our network architecture is similar to the original CT-Net, consisting of 1D-2D CNNs to infer images directly from the sinogram data. Unlike limited-angle reconstruction, however, the sinogram in few-view CT is spatially less coherent and the number of projections is smaller compared to limited-angle reconstruction, and we use smaller filter sizes in the 1D CNN. For the training loss, we use SSIM loss in addition to L 2 and adversarial loss with a discriminator to generate more realistic CT images. We empirically choose the weight between L 2 and SSIM losses (0.7 in our experiment). The predicted reconstructed images are then used as image prior in the Regularized Least Squares (RLS)-based iterative algorithm. In RLS, images are reconstructed by solving the following optimization:X = arg min where y is measured sinogram data,X is the reconstructed attenuation coefficients of the object, A is the x-ray projection matrix subject to the projection geometry, and R(x) is a Total Variation regularization functional, and β serves as a weight to control the strength of the regularization. The β term is determined empirically. The optimization typically requires 50 − 100 iterations, starting with a randomly chosen initial guess. However, the optimization with a carefully selected initial guess enables high quality reconstruction as well as fast convergence. To this end, we propose to use our deep network-driven prediction as image prior (initial guess) in this iterative algorithm, which enables to recover edge-preserving high frequency regions in the reconstructed images. There are a set of parameters to be chosen empirically in the RLS algorithm. For our experiment on the chest CT dataset, we chose 100 iterations with β = 2e −2 in and the non-negative constraint. To demonstrate effectiveness of the proposed approach, we perform experiments on a chest CT dataset provided by the National Cancer Institute. The chest CT dataset is comprised of 1595 patients of chest scans for the study of lung nodule detection as an early stage cancer diagnosis. 
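The displayed RLS objective appears to have been lost in extraction. A standard form consistent with the surrounding description — a least-squares data-fit term plus a weighted Total Variation regularizer, solved under the non-negativity constraint with the CNN prediction as the initial iterate — would be the following reconstruction rather than the authors' exact display:

\hat{X} = \arg\min_{x \ge 0} \; \lVert y - A x \rVert_2^2 + \beta \, R(x), \qquad x^{(0)} = x_{\mathrm{CNN}},

where x^{(0)} denotes the starting point of the iterative solver and x_CNN the image predicted by the deep network.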
We split the entire scans into 1,000 patients of scans for training and the rest for testing. The reconstructed volumes are 512 × 512 × N where N is the number of slices ranging from 94 to 541, and each slice is treated independently to demonstrate the parallel-beam projection based reconstruction. We down-sampled the original 512 × 512 slices into 128 × 128 to fit the proposed model to a single GPU memory. Since the dataset does not provide the original sinogram, we performed forward projection using Livermore Tomography Tools to generate sinogram of 180 × 128 where 180 is the number of projection views. We then sampled sinogram for the training and test input to perform few-view experiments. In 9-view reconstruction, for example, we sampled views at 0, 20, 40, 60, 80, 100, 120, 140, 160 degrees from 180 views. For the training, we used Adam optimizer with learning rate of 1 × 10 −3 and the exponential decay of every 10,000 steps with a base of 0.97. The mini-batch size is 50 and the number of epochs is 50. We trained our model using Tensorflow on NVIDIA GeForce RTX 2080 Ti. To evaluate the reconstruction performance, we randomly sampled 50 images from the test dataset and performed our reconstruction together with FBP, RLS and Sinogram Completion (SC) as baseline algorithms. We report PSNR and SSIM of the reconstruction . Figure 2 shows qualitative from two different views, 6 and 12, respectively. We observe that the proposed method yields sharper edges and more high frequency regions with less artifact and noise, compared to the baseline algorithms. Figure 3 shows quantitative evaluation on 6 different few-view experiments (3, 6, 9, 12, 15, 18 views). PSNR and SSIM show that the proposed approach outperforms the baselines although both metrics are not sufficient to measure perceptual quality. Another observation is that RLS improves the image quality as the number of views increases while our method outperforms by a large margin especially when the number of views is extremely small. This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.
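A small numpy sketch of the view subsampling used in the experiments above, assuming one projection per degree so that keeping 9 of the 180 views yields angles 0, 20, ..., 160 degrees; the array shapes and the random stand-in sinogram are placeholders, not the actual data pipeline.

import numpy as np

def subsample_views(sinogram, num_views):
    # sinogram: (180, num_detectors) projections at 1-degree spacing.
    # Keep num_views evenly spaced angles, e.g. 0, 20, ..., 160 deg for num_views=9.
    total = sinogram.shape[0]
    angles = np.arange(0, total, total // num_views)[:num_views]
    return sinogram[angles], angles

sino = np.random.rand(180, 128)            # synthetic stand-in for a real sinogram
few_view, angles = subsample_views(sino, 9)
print(angles)                               # [  0  20  40  60  80 100 120 140 160]
print(few_view.shape)                       # (9, 128)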
We present a CNN inference-based reconstruction algorithm to address extremely few-view CT.
1,782
scitldr
In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction. Arguably, one of the key goals of AI, and the ultimate the goal of natural language research, is for humans to be able to talk to machines. In order to get close to this goal, machines must master a number of skills: to be able to comprehend language, employ memory to retain and recall knowledge, to reason about these concepts together, and finally output a response that both fulfills functional goals in the conversation while simultaneously being captivating to their human speaking partner. The current state-of-the-art approaches, sequence to sequence models of various kinds BID20 BID23 BID17 BID21 attempt to address some of these skills, but generally suffer from an inability to bring memory and knowledge to bear; as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output. To converse intelligently on a given topic, a speaker clearly needs knowledge of that subject, and it is our contention here that more direct knowledge memory mechanisms need to be employed. In this work we consider setups where this can be naturally measured and built. We consider the task of open-domain dialogue, where two speakers conduct open-ended chit-chat given an initial starting topic, and during the course of the conversation the topic can broaden or focus on related themes. During such conversations, an interlocutor can glean new information and personal points of view from their speaking partner, while providing similarly themselves. This is a challenging task as it requires several components not found in many standard models. We design a set of architectures specifically for this goal that combine elements of Memory Network architectures BID19 to retrieve knowledge and read and condition on it, and Transformer architectures BID21 to provide state-of-the-art text representations and sequence models for generating outputs, which we term Transformer Memory Networks. As, to our knowledge, no public domain dataset of requisite scale exists, we build a supervised dataset of human-human conversations using crowd-sourced workers, first crowd-sourcing 1365 diverse discussion topics and then conversations involving 201, 999 utterances about them. Each topic is connected to Wikipedia, and one of the humans (the wizard) is asked to link the knowledge they use to sentences from existing articles. 
In this way, we have both a natural way to train a knowledgeable conversation agent, by employing a memory component that can recall and ground on this existing text, and a natural way to evaluate models that we build, by assessing their ability at locating and using such knowledge. Our Transformer Memory Network architectures, both in retrieval and generative versions, are tested in this setup using both automatic metrics and human evaluations. We show their ability to execute engaging knowledgeable conversations with humans, compared to a number of baselines such as standard Memory Networks or Transformers. Our new benchmark, publicly in ParlAI (http:// parl.ai/projects/wizard of wikipedia/), aims to encourage and measure further improvements in this important research direction. Many existing dialogue tasks do not study the use of knowledge explicitly. For example, popular chit-chat datasets such as Open-Subtitles BID23, Persona-Chat BID26 and Twitter BID18 have tested the ability of sequence-to-sequence models that attend over the recent dialogue history, but do not attempt to recall long-term knowledge beyond encoding it directly into the weights of the feed-forward network. In the area of goal-directed dialogue, separate from open domain chit-chat, such as airline BID6 or restaurant booking BID9 BID24, knowledge conditioning is typically employed by allowing access to a database through API calls or otherwise. In contrast, our work investigates unstructured knowledge across a large, diverse set of topics potentially spanning all of Wikipedia. In question answering one does not produce a dialogue response based on a conversation history, but a factual answer based on a question. In that case, it is clear that retrieving and conditioning knowledge is vital. For example, in SQuAD neural models have been developed that attend to a given paragraph from Wikipedia to answer questions BID15, or Open-SQuAD which extends this to answering without being given the paragraph, instead performing retrieval over the entirety of Wikipedia BID3. Recently, the QuAC dataset investigates similar themes, but as a sequence of questions and answers in dialogue form instead BID4. In this work we do not address question answering, but focus on natural human dialogues which contain a diverse set of utterances, not just questions and answers. The closest work to ours lies in the area of non-goal directed dialogue incorporating knowledge. The work of BID5 employed Memory Networks to perform dialogue discussing movies in terms of recommendation and open-ended discussion from Reddit, conditioning on a structured knowledge base. also links Reddit to structured knowledge. Both BID14 and BID8 use unstructured text instead, as we do: the former to discuss news articles using Wikipedia summaries as knowledge, and the latter to discuss local businesses in two-turn dialogues using Foursquare tips as knowledge. BID8 uses an extended Encoder-Decoder where the decoder is provided with an encoding of the context along with the external knowledge encoding. Neither involves dialogue authored with the given knowledge, so it is unclear when knowledge is useful or not. In contrast, in our task, we know the Wikipedia articles and sentences that ground crowdworkers dialogues. Model-wise, BID14 uses a Bag-of-Words Memory Network type fact encoder and an RNN decoder. 
Our work compares Memory Networks BID19 and Transformers which have been shown to be on-par or superior to RNN encoder-decoders BID21, and develops an architecture that combines these approaches. Concurrently with our work BID13 proposed a dataset based on the closed domain of movie chats. Our paper shows models working on full multi-turn dialogue in an open-domain setting, which to our knowledge was not shown before. We consider the following general open-domain dialogue setting: two participants engage in chitchat, with one of the participants selecting a beginning topic, and during the conversation the topic is allowed to naturally change. The two participants, however, are not quite symmetric: one will play the role of a knowledgeable expert (which we refer to as the wizard) while the other is a curious learner (the apprentice).Apprentice At each stage of the conversation the apprentice talks to the wizard freely, playing the role of a curious learner, eager to chat. Their goal is to go into depth about a chosen topic that interests themselves or their partner, while keeping the conversation engaging and fun. Note that the instruction to delve deeply into a topic makes this different to more "shallow" chit-chat tasks. In this task the use of knowledge is emphasized more. Wizard The wizard is given the following instructions: "You have just met the other person, who seems quite curious, and you are eager to discuss a topic with them!" Their goal is to inform their conversation partner about a topic that one of them will choose. Crucially, the wizard has access to an information retrieval system that shows them paragraphs from Wikipedia possibly relevant to the conversation, which are unobserved by the apprentice. Before each conversation turn the wizard can read these paragraphs and then potentially base their next reply on that observed knowledge. Note, the wizard is particularly instructed not to simply parrot this knowledge, but to use it to craft a relevant reply, and to present any relevant knowledge in a fun and engaging way, if possible. The flow of the conversation thus takes place as follows.1. Either the wizard or apprentice is picked to choose the topic and speak first. The other player receives the topic information, and the conversation begins.2. When the apprentice sends the wizard a message, the wizard is shown relevant knowledge (described below), and chooses a relevant sentence in order to construct a response, or else chooses the no sentence used option.3. The Wizard responds to the apprentice basing their response on their chosen sentence.4. The conversation repeats until one of the conversation partners ends the chat (after a minimum of 4 or 5 turns each, randomly chosen beforehand).After collecting data of such wizard-apprentice conversations between humans, the goal is to then replace the human wizard with a learned agent that will speak to a human apprentice instead, similar to the procedure in Wizard of Oz experiments BID0.Topics We crowd-sourced a set of 1365 natural, open-domain dialogue topics, each linked to a Wikipedia article. These include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger. Knowledge Retrieval At each step of the dialogue the wizard has access to a set of passages of knowledge which may be relevant to the given dialogue context. 
While this is a potentially learnable part of the model, we required for this to be fixed so that we could present the to the annotator when collecting the dataset. We thus used exactly the same retriever that is commonly used for the Open-SQuAD dataset in BID3. It uses a simple inverted index lookup followed by term vector model scoring. Articles and queries are compared as TF-IDF weighted bag-of-word and n-gram vectors, using the hashing trick. We retrieve the top 7 articles (first paragraph only) for the last two turns of dialogue (by wizard and apprentice) and the article (first 10 sentences only) for the original topic, and present these articles to the wizard as knowledge context, along with their titles. Note that while this system is used to build the dataset, a superior method can in principle be learned and used by a model at test time. Knowledge Selection and Response Generation During data collection, the wizard can click on any of the retrieved article titles in the dialogue UI to expand that article, at which point they can click on a sentence that is most relevant to the response they want to make (only one article, and one sentence can be selected on any turn, for simplicity). If they see no relevant article or sentence they can choose no sentence used instead. The wizard then enters their response to the apprentice. An image of the Wizard's UI is shown in Appendix A.1. In this work we consider learning dialogue models to replace the wizard in our learning tasks, i.e. the knowledgeable speaker. The dialogue model thus can have access to a knowledge source, in this case Wikipedia, to ground the conversation with. We thus develop extensions of the Memory Network BID19 and Transformer BID21 models that can (i) retrieve from a large memory relevant information conditioned on the dialogue history, (ii) carefully read and attend over the retrieved set of knowledge, and then (iii) generate the next dialogue utterance. This model is then used consecutively on each turn to form an entire dialogue with a user. We develop two classes of models capable of leveraging knowledge: (i) retrieval models that produce an output among a set of candidate responses (the set of utterances from the training set); and (ii) generative models that generate word-by-word (using a beam).The input to either model is the same: at each dialogue turn where the model is intended to make a response, it is given the current dialogue context x 1,..., x t of t dialogue turns, where x 1 is always the initial starting topic (e.g. "Kurt Cobain"), and the remaining turns swap between the two speakers. The goal at each stage is to output the next utterance x t+1.Knowledge Retrieval We assume a large knowledge base (memory) m 1,..., m N which is hierarchically organized into documents consisting of paragraphs and sentences. As it is infeasible for current neural attention techniques to operate on this scale, we use standard information retrieval (IR) techniques (c = IR(x, m)) as a first step to return a smaller set of candidates m c1,..., m c K for fine-grained selection. In our experiments, we use the IR system provided to the human annotators during dataset creation, detailed in Section 3. The retriever operates on the topic (x 1) and the last two turns (x t and x t−1) if they exist, effectively calling the IR system three times with three different queries. Empirically, this provided better performance compared to merging into one query, likely because it can address quite different topics. 
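A rough sklearn-based sketch of the TF-IDF retrieval step described above is given below. The real system is the Wikipedia retriever of BID3 (hashed unigram/bigram term vectors over an inverted index), so the toy article texts, the 2**18 hash size and the HashingVectorizer/TfidfTransformer pipeline used here are illustrative stand-ins rather than the actual retriever.

import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the first paragraphs of Wikipedia articles.
articles = {
    "Kurt Cobain": "Kurt Cobain was an American singer, songwriter and guitarist.",
    "Nirvana (band)": "Nirvana was an American rock band formed in Aberdeen, Washington.",
    "Gouda cheese": "Gouda is a sweet, creamy yellow cheese originating in the Netherlands.",
}
titles = list(articles.keys())

# Hashed bag-of-words/bigram TF-IDF vectors (L2-normalized by default).
vectorizer = make_pipeline(
    HashingVectorizer(ngram_range=(1, 2), alternate_sign=False, n_features=2 ** 18),
    TfidfTransformer(),
)
doc_vectors = vectorizer.fit_transform(list(articles.values()))

def retrieve(query, top_k=7):
    # Score every article against the query by cosine similarity and return the top_k.
    q = vectorizer.transform([query])
    scores = (doc_vectors @ q.T).toarray().ravel()
    order = np.argsort(-scores)[:top_k]
    return [(titles[i], float(scores[i])) for i in order]

print(retrieve("tell me about Kurt Cobain's band", top_k=2))

In the setup above this lookup is issued for the original topic and for each of the last two dialogue turns, and the retrieved paragraphs are what the wizard (or the model) conditions on.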
We retrieve the top 7 articles (first paragraph only) for each lookup and then flatten all the into separate sentences (i.e. remove the organization of sentences belonging to articles), but prepend every sentence with its article title. In this way the candidates m c1,..., m c K given to the neural model in the next stage can be attended to independently without having to deal with hierarchical issues. Knowledge Attention We use an attention mechanism to perform fine-grained selection of which knowledge sentences will be used to produce the next turn of dialogue. Each sentence in the memory is independently encoded with a Transformer encoder BID21, and the same Trans- former is used to encode the dialogue context x. We then perform standard dot-product attention between the memory candidates and the dialogue context. Utterance Prediction Given the hidden state derived from the memory attention process described above, the final stage is to predict the output utterance that will form the next dialogue turn. We consider different variants of the two stages above, knowledge attention and utterance prediction, when considering retrieval and generative variants of our models. We will now detail these in turn. This model encodes each knowledge sentence m c1,..., m c K and the dialogue context x with a Transformer, as described above. The final input encoding is calculated by performing dot-product attention over enc(m c1),..., enc(m c K) and adding the ing weighted sum of these vectors to enc(x) to get the representation rep LHS (m c1, . . ., m c K, x). The candidate responses r 1,..., r L are encoded with a separate Transformer to get rep RHS (r i) for each i. We choose as a response r where DISPLAYFORM0 The model is trained to minimize the cross-entropy loss, where the negative candidates for each example are the responses to the other examples in the batch BID10. We consider two versions: a Two-stage and an End-to-end version. Both models find the most relevant piece of knowledge m best, and then perform an encoding step by concatenating it with the dialogue context, allowing the decoder to attend over both the knowledge and dialogue when formulating its response. We employ a beam search of 5 to select our best response. All generative models employ BPE encoding BID16, which we found effective at enabling generators to copy rare words from Wikipedia sentences BID7.In the End-to-end version, a shared Transformer encoder is used to encode all candidates m ci and the dialogue history. The encoded candidates are flattened into vectors using the normalization from BID2 (summing, and normalizing by the square root of the sentence length in order to balance short and long sentences) to produce an attention prediction over the memory. The full sequence encoding of the single highest selected knowledge m best is concatenated with the encoding of the dialogue, and passed into a Transformer decoder. An illustration of our End-to-end model is shown in FIG0. We train the model to minimize the negative log-likelihood of the response utterance. We can add additional supervision by forcing the knowledge selection to correctly choose the same knowledge candidate as the human wizard in the training set by adding an additional crossentropy loss over the knowledge attention, modulated by a weight λ: DISPLAYFORM0 In the Two-stage version, we employ two separately trained models for each of these two tasks, knowledge selection and utterance prediction. 
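To make the knowledge-attention step concrete, the toy numpy sketch below performs dot-product attention between the encoded dialogue context and the encoded knowledge sentences and adds the weighted knowledge sum back to the context encoding, as in the retrieval model above. In the actual models the encodings come from a shared Transformer; the random vectors, the dimensionality and the hard argmax selection (as used by the generative variants) are illustrative.

import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def knowledge_attention(context_enc, memory_encs):
    # context_enc: (d,) encoding of the dialogue context.
    # memory_encs: (K, d) encodings of the K retrieved knowledge sentences.
    scores = memory_encs @ context_enc        # dot-product attention scores
    weights = softmax(scores)                 # distribution over knowledge sentences
    attended = weights @ memory_encs          # weighted sum of knowledge encodings
    return context_enc + attended, weights

rng = np.random.default_rng(0)
d, K = 8, 5
context = rng.normal(size=d)
memory = rng.normal(size=(K, d))
rep, weights = knowledge_attention(context, memory)
best = int(np.argmax(weights))                # hard selection of m_best for the generative models
print(rep.shape, best)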
As the knowledge selection step creates a hard deci- sion influencing the output of the generator, we find maximizing the performance of this component to be vital. We can also improve performance of the decoder by employing knowledge dropout (K.D.), wherein we artificially prevent the model from attending to knowledge a fraction of the time during training. We find this helps the generator be more resilient to errors at the knowledge selection stage, and makes training faster. K. D. is a novel technique we propose here, however it is similar to many other dropout techniques, e.g. feature dropout used in BID25. We describe each of our experimental setups and . We first investigate the ability of our models to select knowledge appropriately, and then consider the full task of dialogue with knowledge. Before looking at the full Wizard dialogue task, we assess the ability of models to predict the knowledge selected by human wizards in the dataset given the dialogue history. This will inform us of the feasibility of this task and the best models to use in a two-stage architecture. We compare Transformers against various baselines including a random baseline; an Information Retrieval (IR) baseline, which uses simple word overlap; and a Bag-of-Words Memory Network BID19. Where noted, the Transformer is pretrained on Reddit data BID12, and fine-tuned for our task. The are shown in TAB1. Transformers work best, as long as they are pretrained on a large dataset (Reddit), while multi-tasking on SQuAD provides marginal impact. Further analysis of this task using other models is provided in Appendix B.1. We use the best performing Transformer model reported here for our two-stage generative Memory Network in the full dialogue task. We evaluate our models on the full task of dialogue generation given knowledge in two settings: given the gold knowledge sentence chosen by a human, or where the model needs to predict which knowledge to use. We separately describe experiments for retrieval and generative models. We use similar baselines as in the knowledge selection experiments, but now also apply Transformer Memory Networks, which attend over knowledge. Models are evaluated measuring Recall@1 when ranking the gold response among 99 randomly chosen candidates, and unigram F1 of the model's prediction with the gold response. The are shown in TAB2. We find that the addition of knowledge improves all models (improving Bow MemNet from 56 to 71 R@1 and the Transformer MemNet from 79 to 87 R@1) for predicted knowledge. Performance improves dramatically when models are provided gold knowledge, but otherwise retain similar trends. Generative Experiments We compare our generative End-to-end and Two-stage Transformer Memory Network models to two more baselines: repeating the last utterance, and a generative Transformer model trained to respond to dialogue but without access to knowledge. Models are evaluated using perplexity (PPL) of the gold response and unigram F1. The are given in TAB3. Our experiments show that both the End-to-end and Two-stage models employ the knowledge in their response predictions, as they outperform their counterpart Transformer without knowledge, and demonstrate substantial improvements when provided the gold knowledge. While the Two-stage model produces significantly stronger perplexity and F1 scores using the predicted knowledge, the End-to-end model outperforms the Two-stage model in the gold knowledge experiments. 
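Knowledge dropout is only described at a high level above; one simple way it might be realized is sketched below, where with some probability during training all knowledge encodings for an example are zeroed out so the decoder must rely on the dialogue context alone. Whether the authors drop the knowledge input or mask the attention scores, and the 0.3 rate used here, are assumptions.

import numpy as np

def knowledge_dropout(memory_encs, drop_prob=0.3, training=True, rng=None):
    # memory_encs: (K, d) encodings of the retrieved knowledge sentences.
    # With probability drop_prob during training, hide all knowledge for this example.
    rng = rng or np.random.default_rng()
    if training and rng.random() < drop_prob:
        return np.zeros_like(memory_encs)
    return memory_encs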
This suggests that the Two-stage model benefits from the strong knowledge selection module (Section 5.1), but that the End-to-end model is better at employing the selected knowledge. Furthermore, we find that the additional knowledge selection supervision (auxiliary loss) in the End-to-end model improves it on every metric, suggesting that tightly integrating these tasks is beneficial. Knowledge dropout (K. D.) also helps (compare last two rows). More evidenceTopic: E-book (seen) Human: i do not know why, but I have never gotten into E-books Model: Same here. I have a kindle, but I don't use it for books. It's nicer to hold a real book and turn pages. Human: yes, i like the physical feel and smell of a real book Model: I actually had a Nook e-book where you had to download the e-book after selecting it online. I grew out of it, I prefer paper books also. Human: i tried e-books but I do not like reading on a screen, its disorientated for me Model: Yes, reading a book on a tablet just isn't the same! Human: plus I like owning the physical copy of a real book, I feel like I own it Model: I prefer print and newsreels. for this is shown in Appendix B.1. Lastly, we note that both Two-stage models give higher F1 scores than any of the retrieval models shown in TAB2. We perform human evaluation of our models using crowd-sourced workers. Humans are paired with our models and asked to chat about a specific topic (given a choice of 2-3 topics) for 3-5 dialogue turns. Following their conversation, the humans are asked to rate their dialogue partner on a scale of 1-5, with the rating indicating how much they "liked" the conversation (5 is best), which we refer to as the engagingness rating. Using the collected conversations, we also calculate a metric we call the Wiki F1 sore: the F1 overlap of the model's utterances with the first 10 sentences of the Wikipedia page for the chosen topic as a proxy for how much knowledge the model exhibits. We seek a model that can be simultaneously engaging and knowledgeable, hence we would like to maximize both these metrics 1. For comparison, we also collect 100 human-human conversations, with only one human choosing the topic and performing evaluation. In total, we collect a total of 546 conversations with ratings from 464 distinct workers. These are shown in TAB4.We find that the retrieval models significantly outperform the generative models on the human engagingness evaluation(Student's t-test, p < .05). The human engagingness differences between retriever models with and without knowledge are not significant, but note they both trend toward use of knowledge due to the candidate sentences retrieved, with the knowledgeable version obtaining significantly higher Wiki F1 scores in both seen and unseen test sets. For the generative models, we find human engagingness ratings are significantly improved by the use of knowledge (p < .01). The significantly higher Wiki F1 scores indicate that (i) these models convey more knowledge than their counterparts without knowledge conditioning; and (ii) on both seen and unseen sets they convey more knowledge than the retrieval models. In particular, on unseen data the gap between retrieval and generative models is larger. This is understandable, as retrieval models are limited to producing a response from the training set where the unseen topic did not appear. There is still a considerable gap to human ratings of other humans compared to all our models (first row of TAB4). 
FIG2 shows example conversations with the retrieval and generative models. Additional analysis and examples can be found in Appendix B.3 and C. In this work we build dialogue agents which are able to employ large memory systems containing encyclopedic knowledge about the world in order to conduct engaging open-domain conversations. We develop a set of architectures, Transformer Memory Network models, that are capable of retrieving and attending to such knowledge and outputting a response, either in retrieval or generative modes. To train and evaluate such models, we collect the Wizard of Wikipedia dataset, a large collection of open-domain dialogues grounded by Wikipedia knowledge, and demonstrated the effectiveness of our models in automatic and human experiments. Our new publicly available benchmark aims to encourage further model exploration, and we expect such efforts will in significant advances in this important research direction. There is much future work to be explored using our task and dataset. Some of these include: (i) bridging the gap between the engagingness of retrieval responses versus the ability of generative models to work on new knowledge and topics, (ii) learning to retrieve and reason simultaneously rather than using a separate IR component; and (iii) investigating the relationship between knowledge-grounded dialogue and existing QA tasks which also employ such IR systems. The aim is for those strands to come together to obtain an engaging and knowledgeable conversational agent. Examples of collected conversations from the dataset, where both wizard and apprentice are humans. The wizard has access to an information retrieval system over Wikipedia, so that they can ask and answer questions, and make statements relevant to the discussion. For each utterance, knowledge retrieval is performed based on dialogue history, giving ∼61 knowledge candidates per turn, with wizards clicking no sentence used 6.2% of the time. Assuming that a question contains a question mark or begins with'how','why','who','where','what' or'when', in the dataset Apprentices ask questions in 13.9% of training set utterances, and answer questions (i.e., the Wizard has asked a question) 39.5% of the time, while saying new or follow-on statements (neither asking nor answering a question) 49.3% of the time. Hence, the wizard and apprentice conduct conversations with a variety of dialogue acts. To choose between topics that are natural we employed the existing Persona-Chat dataset BID26 where crowdworkers where asked to create personas of typical speakers. There are ∼1000 personas, each of which consists of 4-5 sentences describing that person's interests, e.g. "I love watching Game of Thrones", "I like to eat cheetos" and "I recently got a cat". These can thus naturally be seen as topics of interest, and using another set of annotators we mapped each sentence to 1 or more relevant Wikipedia pages, if possible, e.g. "Ariel is my favorite princess" was labeled with the Wikipedia page for The Little Mermaid. As some sentences are harder to connect with Wikipedia, e.g. "I am witty", they are left unlabeled. We thus obtain 1,431 topics in total to use for our task. We retain the persona topic sets and thus present 2-3 related topic choices as conversation starters per dialogue during data collection. B ADDITIONAL EXPERIMENTS B.1 KNOWLEDGE SELECTION Table 6: Test performance of the Knowledge Selection Tasks. 
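The dialogue-act statistics quoted above rely on a simple surface heuristic for detecting questions; a literal rendering of that rule is sketched below, with lowercasing and whitespace handling as assumptions.

def is_question(utterance):
    # A question either contains a question mark or begins with one of the listed words.
    starters = ("how", "why", "who", "where", "what", "when")
    text = utterance.strip().lower()
    return "?" in text or text.startswith(starters)

print(is_question("Where was Kurt Cobain born"))   # True
print(is_question("I love Gouda cheese."))         # False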
We also tested the performance of our models trained to do the full dialogue task (see Section 5.2) on the knowledge selection task. For our retrieval system, this refers to the performance of the knowledge attention. The show that our retrieval system could be improved, and the auxiliary loss clearly helps the generative models. We perform an analysis of the dialogues produced from the human evaluation experiments detailed in Section 5.3. We sample 20 conversations from each experimental setting, split between seen and unseen. Conversations are re-tokenized and lowercased to reduce superficial differences between models, and then analyzed in a single-blind setup. We note of common errors and behaviors exhibited in each of the different conversations. In general, the human-human conversations are starkly different than any of the bot conversations -humans tend to have more small talk, or use the topic of discussion as a mere icebreaker, with neither human behaving as a wizard. This is in contrast to human-human conversations from the Wizard dataset itself, where one human has access to Wikipedia, and the conversation becomes more grounded in factual sentences. Similarly, all models attempt to play the role of wizard and produce more factual sentences too. In some rounds, humans treat the bot as a sort of question-answer machine, suggesting that the models could be improved by additionally employing SQuAD-like training data. The retriever without knowledge is particularly prone to non sequiturs, or rapidly changing the subject. During unseen conversations, it is especially likely to discuss something other than the chosen topic. In contrast, the retriever with knowledge tends to stick to the chosen topic strongly, but has difficulty if the human changes the subject. Frequently in unseen topics, the retriever with Table 7: Retrieval methods on the full Wizard task. In addition to the models we tested in the paper, we also tested a two-stage retrieval system in which we used our best-performing model on the knowledge selection task to choose a single knowledge sentence to condition on for the dialogue retrieval task. This outperformed our best retrieval method in terms of F1 but not not in terms of Recall@1. Furthermore, we compared these to a two-stage retrieval system in which the dialogue retrieval module is optimized for seeing the gold chosen knowledge sentence. The performance of this system on the gold knowledge task suggests that the retrieval system could be improved by increasing performance on the knowledge selection subtask. Table 8: Human Experiments. We calculate the Wiki F1 score for the wizard and apprentice as they appear in the dataset for the sake of comparison to our human evaluations. Note that this differed from the human-human evaluation set-up in the sense that the wizard had direct access to Wikipedia passages in the UI, which explains the higher values of Wiki F1 both for the wizard (who uses Wikipedia) and for the apprentice (who would likely reference that use). knowledge produces similar, but factually inaccurate answers to user queries. For example, when one user asks about parts of Ireland to visit, the model enumerates a list of locations in Greece. Nonetheless, its repertoire of available responses often include inviting responses, allowing the bot to have a more natural conversational flow. Selected conversations with the retriever with knowledge may be found in FIG5, for both seen and unseen topics. 
The generator without knowledge is particularly prone to many of the typical behaviors of seq2seq systems BID11 BID22, including local repetition ("cookies are made of flour, oil, oil, and oil"), global repetition (producing the near same utterance for multiple turns), or inconsistencies in its personality (saying it both likes and dislikes movies). The generator with knowledge has significantly fewer issues with repetition, as it errs on the side of copying large fragments from the Wikipedia knowledge. The generator with knowledge can also act as a selfish conversationalist, choosing to respond or detail information without inviting a response. Although it generally produces accurate statements, it sometimes produces statements using an incorrect date, name or word. It also frequently produces formulaic responses, like "I don't know, but I do know that [Wikipedia excerpt]". Nonetheless, we find the generator with knowledge is able to successfully generalize to unseen topics using the knowledge from Wikipedia. Selected conversations with the generator with knowledge may be found in Figure 5.
We build knowledgeable conversational agents by conditioning on Wikipedia + a new supervised task.
1,783
scitldr
We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and dialog systems. We demonstrate how contextual bandit and graph convolutional networks can be adjusted to the new problem formulation. We then take the best of both approaches to develop multi-GCN embedded contextual bandit. Our algorithms are verified on several real world datasets. We formulate the problem of Online Partially Rewarded (OPR) learning. Our problem is a synthesis of the challenges often considered in the semi-supervised and contextual bandit literature. Despite a broad range of practical cases, we are not aware of any prior work addressing each of the corresponding components. Online: data incrementally collected and systems are required to take an action before they are allowed to observe any feedback from the environment. Partially: oftentimes there is no environment feedback available, e.g. a missing label Rewarded: instead of the true label, we can only hope to observe feedback indicating whether our prediction is good or bad (1 or 0 reward), the latter case obscuring the true label for learning. Practical scenarios that fall under the umbrella of OPR range from clinical trials to dialog orchestration. In clinical trials, reward is partial, as patients may not return for followup evaluation. When patients do return, if feedback on their treatment is negative, the best treatment, or true label, remains unknown. In dialog systems, a user's query is often directed to a number of domain specific agents and the best response is returned. If the user provides negative feedback to the returned response, the best available response is uncertain and moreover, users can also choose to not provide feedback. In many applications, obtaining labeled data requires a human expert or expensive experimentation, while unlabeled data may be cheaply collected in abundance. Learning from unlabeled observations is the key challenge of semi-supervised learning BID2. We note that the problem of online semi-supervised leaning is rarely considered, with few exceptions BID14 BID13. In our setting, the problem is further complicated by the bandit-like feedback in place of labels, rendering existing semi-supervised approaches inapplicable. We will however demonstrate how one of the recent approaches, Graph Convolutional Networks (GCN) BID9, can be extended to our setting. The multi-armed bandit problem provides a solution to the exploration versus exploitation tradeoff while maximizing cumulative reward in an online learning setting. In Linear Upper Confidence Bound (LINUCB) BID10 BID4 and in Contextual Thompson Sampling (CTS) BID0, the authors assume a linear dependency between the expected reward of an action and its context. However, these algorithms assume that the bandit can observe the reward at each iteration. Several authors have considered variations of partial/corrupted rewards BID1 BID6, but the case of entirely missing rewards has not been studied to the best of our knowledge. The rest of the paper is structured as follows. In section 2, we formally define the Online Partially Rewarded learning setup and present two extensions to GCN to suit our problem setup. Section 3 presents quantitative evidence of these methods applied to four datasets and analyses the learned latent space of these methods. 
We first formally define each of the OPR keywords:Online: at each step t = 1,..., T we observe observation x t and seek to predict its labelŷ t using x t and possibly any information we had obtained prior to step t. Partially: after we make a predictionŷ t, the environment may not provide feedback (we will use -1 to encode its absence) and we must proceed to step t + 1 without knowledge of the true y t.Rewarded: suppose there are K possible labels y t ∈ {1, . . ., K}. The environment at step t will not provide true y t, but instead a response h t ∈ {−1, 0, 1}, where h t = 0 indicatesŷ t = y t and h t = 1 indicatesŷ t = y t (-1 indicates missing response). Update with new edges using x t Update GCN weights W DISPLAYFORM0 Retrieve GCN embeddings g(X) (k) 8: DISPLAYFORM1 (k) t 10: DISPLAYFORM2 (k) t Predictŷ t = argmax k (µ k + σ k) and observe h t 14: DISPLAYFORM0 ∀k, y DISPLAYFORM1 Append t to each C k and 1 to r ·,k ifŷ t = k and 0 otherwise DISPLAYFORM2 Append t to Cŷ t, output ofŷ t -th GCN to r ·,ŷtRewarded Online GCN (ROGCN) is a natural extension of GCN, adapted to the online, partially rewarded setting along with a potential absence of true graph information. We assume availability of a small portion of data and labels (size T 0) available at the start, X 0 ∈ R T0×D and y 0 ∈ {−1, 1, . . ., K} T0. When there is no graph available we can construct a k-NN graph (k is a parameter chosen a priori) based on similarities between observations -this approach is common in convolutional neural networks on feature graphs BID8 BID5 and we adopt it here for graph construction between observations X 0 to obtain graph adjacency A 0. Using X 0, y 0, A 0, we can train GCN with L hidden units (a parameter chosen a priori) to obtain initial estimates of hidden layer weights W 1 ∈ R D×L and softmax weights W 2 ∈ R L×K. Next we start to observe the stream of data -as new observation x t arrives, we add it to the graph and data matrix, and append -1 (missing label) to y. Then we run additional training steps of GCN and output a prediction to obtain environment response h t ∈ {−1, 0, 1}. Here 1 indicates correct prediction, hence we include it to the set of available labels for future predictions; 0 indicates wrong prediction and -1 an absence of a response, in the later two cases we continue to treat the label of x t as missing. ROGCN is unable to learn from missclassified observations and has to treat them as missing labels. The bandit perspective allows one to learn from missclassfied observations, i.e. when the environment response h t = 0, and the neural network perspective facilitates learning better features such that linear classifier is sufficient. This observation motivates us to develop a more sophisticated synthesis of GCN and LINUCB approaches, where we can combine advantages of both perspectives. Notice that if K = 2, a h t = 0 environment response identifies the correct class, hence the OPR reduces to online semi-supervised learning for which GCN can be trivially adjusted using ideas from ROGCN. To take advantage of this for K > 2, we propose to use a suite of class specific GCNs, where the hidden layer representation from the k-th class GCN, i.e. g(X)(k) = ReLU(ÂXW (k) 1 ) and g(X) (k) t denotes the embedding of observation x t, is used as context by the contextutal bandit for the predictions of the k-th arm. Based on the environment response to the prediction, we update the labels and the reward information to reflect a correct, incorrect, or a missing environment response. 
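To make the bandit side of GCNUCB concrete, the sketch below implements the per-arm LinUCB step in which each arm k receives its own context vector — in GCNUCB, the hidden-layer embedding produced by the k-th binary GCN. The incremental A_k/b_k updates, the random placeholder embeddings, and the omission of the index-set bookkeeping and of the rewards imputed from the GCN outputs in Algorithm 1 are simplifications, not the authors' implementation.

import numpy as np

class PerArmLinUCB:
    # Contextual UCB where every arm has its own context at each step.
    def __init__(self, n_arms, dim, alpha=0.25):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

    def predict(self, contexts):
        # contexts: list of per-arm context vectors g(X)^(k)_t, one per arm.
        scores = []
        for A, b, x in zip(self.A, self.b, contexts):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            mu = theta @ x                               # expected reward
            sigma = self.alpha * np.sqrt(x @ A_inv @ x)  # exploration bonus
            scores.append(mu + sigma)
        return int(np.argmax(scores))                    # predict argmax_k (mu_k + sigma_k)

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage: 3 classes, 16-dimensional GCN embeddings (random placeholders here).
rng = np.random.default_rng(0)
bandit = PerArmLinUCB(n_arms=3, dim=16)
contexts = [rng.normal(size=16) for _ in range(3)]
arm = bandit.predict(contexts)
h = 1                                                    # environment response: 1 means correct
bandit.update(arm, contexts[arm], reward=float(h == 1))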
The reward is imputed from the corresponding GCN when the response is missing. As we add new observation x t+1 to the graph and update weights of the GCNs, the embedding of the previous observations x 1,..., x t evolves. Therefore instead of dynamically updating bandit parameters, we maintain a set of indices for each of the arms C k = {t :ŷ t = k or h t = 1} and use observations and responses from only these indices to update the corresponding bandit parameters. Similar to ROGCN, we can use a small amount of data X 0 and labels y 0 converted to binary labels y DISPLAYFORM0 T0 (as before -1 encodes missing label) for each class k to initialize GCNs weights DISPLAYFORM1 for k = 1,..., K. We present the GCNUCB in Algorithm 1, where r t,k ∈ denotes the reward observed or imputed at step t for arm k as described in the algorithm. In this section we compare baseline method LINUCB which ignores the data with missing rewards to ROGCN and GCNUCB. We consider four different datasets: CNAE-9 and Internet Advertisements from the the UCI Machine Learning Repository 1, Cora 2, and Warfarin BID12. Cora is naturally a graph structured data which can be utilized by ROGCN and GCNUCB. For other datasets we use a 5-NN graph built online from the available data as follows. Suppose at step t we have observed data points x i ∈ R D for i = 1,..., t. Weights of the similarity graph computed as follows: DISPLAYFORM0. As it was done by we set σ = A ij is the diagonal matrix of node degrees. For pre-processing we discarded features with large magnitudes (3 features in Internet Advertisements and 2 features in Warfarin) and row normalized all observations to have unit l 1 norm. For all the methods that use GCN, we use 16 hidden units for GCN, and use Adam optimizer with a learning rate of 0.01, and regularization strength of 5e-4, along with a dropout of 0.5. To simulate the OPR setting, we randomly permute the order of the observations in a dataset and remove labels for 25% and 75% of the observations chosen at random. For all methods we consider initial data X 0 and y 0 to represent a single observation per class chosen randomly (T 0 = K). At a step t = T 0 + 1,..., T each algorithm is given a feature vector x t and is ought to make a predictionŷ t. The environment response h t ∈ {−1, 0, 1} is then observed and algorithms moves onto step t + 1. To compare performance of different algorithms at each step t we compareŷ t to true label y t available from the dataset (but concealed from the algorithms themselves) to evaluate running accuracy. DISPLAYFORM1 For GCNUCB we use baseline LINUCB for first 300 steps, and for both we use explorationexploitation trade-off parameter α = 0.25. Results are summarized in TAB0. Since ordering of the data can affect the problem difficulty, we performed 10 data resampling for each setting to obtain error margins. GCNUCB outperforms the LINUCB baseline and ROGCN in all of the ex-periments, validating our intuition that a method synthesizing the exploration capabilities of bandits coupled with the effective feature representation power of neural networks is the best solution to the OPR problem. We see the greatest increase in accuracy between GCNUCB and the alternative approaches on the Cora dataset which has a natural adjacency matrix. This suggests that GCNUCB has a particular edge in OPR applications with graph structure. Such problems are ubiquitous. 
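The weight formula of the similarity graph is garbled above ("DISPLAYFORM0"); a Gaussian-kernel k-NN construction consistent with the description, followed by the renormalized adjacency D^{-1/2}(A + I)D^{-1/2} typically used by GCNs, is sketched below. The bandwidth sigma, the exact kernel convention and the symmetrization by element-wise maximum are assumptions, since the specific choices are elided in the text.

import numpy as np

def knn_gaussian_graph(X, k=5, sigma=1.0):
    # A_ij = exp(-||x_i - x_j||^2 / sigma^2) if j is among the k nearest neighbours
    # of i (or vice versa), and 0 otherwise.  Assumes rows of X are distinct.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    A = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]                  # skip the point itself
        A[i, nn] = np.exp(-d2[i, nn] / sigma ** 2)
    return np.maximum(A, A.T)                            # symmetrize

def normalized_adjacency(A):
    # \hat{A} = D^{-1/2} (A + I) D^{-1/2}, the renormalized adjacency used in GCN layers.
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = np.random.rand(20, 8)
A_hat = normalized_adjacency(knn_gaussian_graph(X, k=5))
print(A_hat.shape)                                       # (20, 20)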
Consider our motivating example of dialog systems -for dialog systems deployed in social network or workplace environments, there exists graph structure between users, and user information can be considered alongside queries for personalization of responses. Visualizing GCNUCB context space. Recall that the context for each arm of GC-NUCB is provided by the corresponding binary GCN hidden layer. The motivation for using binary GCNs to provide the context to LINUCB is the ability of GCN to construct more powerful features using graph convolution and neural networks expressiveness. To see how this procedure improves upon the baseline LINUCB utilizing input features as context, we project the context and the corresponding bandit weight vectors, θ 1,..., θ K, for both LINUCB and GCNUCB to a 2-dimensional space using t-SNE BID11. In this experiment we analyzed CNAE-9 dataset with 25% missing labels. Recall that the bandit makes prediction based on the upper confidence bound of the regret: argmax k (θ k x k,t + σ k) and that x k,t = x t ∀k = 1,..., K for LINUCB and x k,t = g(X) (k) t for GCNUCB. To better visualize the quality of the learned weight vectors, for this experiment we set α = 0 and hence σ k = 0 ing in a greedy bandit, always selecting an arm maximizing expected reward θ k x t,k. In this case, a good combination of contexts and weight vectors is the one where observations belonging to the same class are well clustered and corresponding bandit weight vector is directed at this cluster. For LINUCB FIG2, 68% accuracy) the bandit weight vectors mostly point in the direction of their respective context clusters, however the clusters themselves are scattered, thereby inhibiting the capability of LINUCB to effectively distinguish between different arms given the context. In the case of GCNUCB (Figure 2, 77% accuracy) the context learned by each GCN is tightly clustered into two distinguished regions -one with context for corresponding label and binary GCN when it is the correct label (points with bolded colors), and the other region with context for the label and GCN when a different label is correct (points with faded colors). The tighter clustered contexts allow GCNUCB to effectively distinguish between different arms by assigning higher expected reward to contexts from the correct binary GCN than others, thereby ing in better performance of GCNUCB than other methods.
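The visualization protocol above can be sketched as follows: with alpha = 0 the bandit is greedy and simply picks argmax_k theta_k . x_{t,k}, and the per-arm contexts are embedded jointly with the K weight vectors using t-SNE so that context clusters and the directions of the theta_k can be inspected on one map. This is an illustrative sketch, not the exact plotting code.

```python
import numpy as np
from sklearn.manifold import TSNE

def greedy_predictions(contexts, thetas):
    """Greedy (alpha = 0) arm selection: argmax_k theta_k . x_{t,k}.
    contexts: (T, K, d) per-step, per-arm contexts; thetas: (K, d) bandit weights."""
    scores = np.einsum('tkd,kd->tk', contexts, thetas)
    return scores.argmax(axis=1)

def embed_contexts_and_weights(contexts, thetas):
    """Jointly project all arm contexts and the K weight vectors to 2-D with t-SNE,
    so context clusters and the directions of the theta_k appear on one map."""
    T, K, d = contexts.shape
    points = np.concatenate([contexts.reshape(T * K, d), thetas], axis=0)
    emb = TSNE(n_components=2, init='pca').fit_transform(points)
    return emb[:T * K].reshape(T, K, 2), emb[T * K:]
```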
Synthesis of GCN and LINUCB algorithms for online learning with missing feedback
1,784
scitldr
A collection of scientific papers is often accompanied by tags: keywords, topics, concepts etc., associated with each paper. Sometimes these tags are human-generated, sometimes they are machine-generated. We propose a simple measure of the consistency of the tagging of scientific papers: whether these tags are predictive for the citation graph links. Since the authors tend to cite papers about the topics close to those of their publications, a consistent tagging system could predict citations. We present an algorithm to calculate consistency, and experiments with human- and machine-generated tags. We show that augmentation, i.e. the combination of the manual tags with the machine-generated ones, can enhance the consistency of the tags. We further introduce cross-consistency, the ability to predict citation links between papers tagged by different taggers, e.g. manually and by a machine. Cross-consistency can be used to evaluate the tagging quality when the amount of labeled data is limited. A part of a construction of a knowledge graph is the analysis of publications and adding to them tags: concept names, keywords, etc. This often involves natural language processing or other machine learning methods BID8. To develop such methods one must have a measure of success: one should be able to determine whether the given tagging is "good" or "bad". The most direct way to test the machine produced tags is to compare them to the tags produced by humans. One creates a "golden set" of papers tagged by humans, and penalizes the algorithms for any deviation from these tags. There are, however, certain problems with this approach. First, human tagging is expensiveeven more so for scientific papers, where human taggers must have a specialized training just to understand what the papers are about. Second, even the best human taggers' are inconsistent. This provides a natural limitation for this method BID7. The latter problem is exacerbated when the tagging dictionary is large. For example, the popular US National Library of Medicine database of Medical Subject Headings (MeSH, https://www.nlm.nih.gov/mesh/) has just under 30 000 entries. A superset of MeSH, Unified Medical Language System (UMLS, https://www.nlm.nih.gov/research/ umls/knowledge_sources/metathesaurus/release/statistics.html) contains a staggering amount of 3 822 832 distinct concepts. It is doubtful a human can do a good job choosing the right tags from a dictionary so large. A domain expert usually deals with a subsystem of the dictionary, covering her area of expertise. This presents obvious difficulties for tagging papers about multidisciplinary research, that may require a combination of the efforts of several highly qualified taggers. Another problem is the evaluation of tag augmentation. Suppose we have papers tagged by humans, and we want to add machine-generated tags, for example, to improve the search function in the collection. Do the new tags actually add to the quality or subtract from it? How can we evaluate the if our tags are by definition different from those produced by humans?Thus a measure of the tagging quality other than a direct comparison with manually produced tags may be useful for the assessing the work of the tagging engines. This is especially important for an ongoing quality control of an engine that continuously ingests and tags fresh publications. In this paper we propose such a measure. The idea for this measure is inspired by the works on graph embeddings [, Grover and BID3 . 
In these works one tags graph nodes and compares different sets of tags. The usual comparison criterion is whether the tags can predict graph edges: nodes connected by an edge should have similar tags, while nodes not connected by an edge should have dissimilar tags. To use this approach we need to represent papers as nodes on a graph. A natural choice is the citation graph: and edge from paper A to paper B means that paper A cites paper B. This leads to the following assumptions:1. Scientific papers cited by the given paper A are more similar to A than the other (non cited) papers.2. A good tagging system must reflect this. In other words, a good set of tags must be able to predict links on the citation graph, and the quality of the prediction reflects the quality of the tags. We will call this property consistency: a good tagger consistently gives similar tags to similar papers. It is worth stressing that consistency is just one component of the quality of a tagger. If a tagger consistently uses keyword library instead of keyword bread BID0, this measure would give it high marks, despite tags being obviously wrong. A way to overcome this deficiency is to calculate cross-consistency with a known "good" tagger. For example, we can tag some papers manually, and some papers using machine generated tags, and then predict citation links between these papers. This cross-consistency measures the similarity between these taggers. This application is interesting because it allows us to expand the number of labeled papers for evaluation of machine-based taggers. We can create a golden set of manually tagged papers, and then generate tags for the papers in their reference lists, and the random samples using the machine-based tagger. Since a typical paper cites many publications, this approach significantly expands the quantity of data available for training and testing. To create a measure based on these ideas one should note that citation links strongly depend on the time the candidate for citation was published. Even a very relevant paper may not be cited if it is too old or too new. In the first case the citing authors may prefer a newer paper on the same topic. In the second case they may overlook the most recent publications. Therefore we recast our assumptions in the following way:A consistent tagging system should be able to predict citation links from a given paper to a set of simultaneously published papers. The rest of the paper is organized as follows. In Section 2 we discuss the algorithm to calculate the consistency of the given tagging system. Experiments with this measure are discussed in Section 3. In Section 4 we present the . The algorithm to calculate the consistency measure is shown as Algorithm 1. We select n seed papers. For each seed paper we take up to k random papers from its reference list, and label them with the label y = 1. These are our cited papers, or positive samples. For each cited paper we randomly choose k papers from the corpus with the same publication date as the cited paper. These are our negative samples FIG0 ). The notion of "publication date" depends on the granularity g of publication dates. We label the chosen negative samples with the label y = 0. Now we have a vector of labels y i of size k + km, with k ones and km zeros. We calculate tags for the seed paper, positive and negative samples, and the number of overlapping tags between the seed paper and each of positive and negative samples. This gives us k + km overlap numbers t i of each seed paper. 
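A minimal sketch of the sampling and overlap-counting steps just described, for a single seed paper (the AUC step that completes the measure is covered next). The accessors refs_of, same_date_pool and tags_of are hypothetical names standing in for whatever corpus interface is available.

```python
import random

def sample_for_seed(seed, refs_of, same_date_pool, tags_of, k=10, m=2):
    """One seed paper: up to k cited papers (label 1) and m same-date random
    papers per cited paper (label 0), with the tag-overlap score for each."""
    cited = random.sample(refs_of(seed), min(k, len(refs_of(seed))))
    labels, overlaps = [], []
    for c in cited:
        negatives = random.sample(same_date_pool(c), m)
        for paper, y in [(c, 1)] + [(p, 0) for p in negatives]:
            labels.append(y)
            overlaps.append(len(tags_of(seed) & tags_of(paper)))
    return labels, overlaps
```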
We can solve the one dimension classification problem y ∼ t and calculate its ROC curve BID2. The area under curve (AUC) for this problem is the measure of the consistency of the tagging. The average AUC and its variation are the numbers we are interested in. Algorithm 1 can be used for calculation of consistency or cross-consistency of taggers. In the latter case we just choose different sources of tags for seed papers and samples. For experiments we used the papers extracted from the PubMed database (https://www. ncbi.nlm.nih.gov/pubmed/). These papers have MeSH tags attached manually by the human taggers. We added to them additional tags by processing papers' titles and abstracts using several packages. Gene names were identified using GNAT BID4, diseases were identified using DNORM BID6, and additional UMLS concepts Algorithm 1 Calculation of consistency measure for the given tagging system Input: Parameters: n seed papers, k cited papers per seed paper, m negative samples per cited paper, granularity g of the publication date 1: for all seed papers s do Select min(k, bibliography length) random papers from the reference list of s (cited papers or positive samples). Label them as y i = 1 for all cited papers c do Select m random papers with same publication date as c, using date granularity g (negative samples). Label them as y i = 0. Tag the seed paper, positive and negative samples. for all positive and negative samples p do Calculate the number t i of overlapping tags between the seed paper s and the sample p 9: end for Calculate AUC for the classification problem y ∼ t 11: end for 12: return The set of AUCs. The average AUC is the consistency measure, while the variation provides the error estimate were mapped using NEJI BID1. The coverage of papers by different tagging sources is shown in Table 1.The number of seed papers n in Algorithm 1 was chosen to be n = 100. Based on the preliminary experiments we chose the following hyperparameters which produced a good convergence of the measure: the number of cited papers per see paper k = 10, the number of negative samples per cited paper m = 2. The date granularity chosen was year-month: two papers were considered published at the same time if their years and months of publication in PubMed coincided. The baseline to compare the against was constructed by randomizing the tag sets. We linked each paper (seeds, references, and negative samples) to the tags from all the sources, and then randomly shuffled the papers, so each paper got a set of tags from some other paper in our sample. We expect the average AUC for this random tag set to be 0.5, reflecting the lack of discrimination between the cited papers and negative samples. We calculate the AUC for each of our seed papers, and report the distribution of the in Tukey "box and whiskers" plots. On Figure 2 we show the consistency measure for the tagging sources: DNORM, GNAT, MANUAL, NEJI as well as the combined automatic taggers (NEJI + DNORM + GNAT).Adding machine-generated tags to the manual ones is explored on Figure 3. Here we take manually created tags and add machine-generated ones from different sources, again using random tags as a baseline. On Figure 4 we show cross-consistency between the manual tags and NEJI-generated ones (since GNAT and DNORM are used to predict only specific concepts like genes and diseases, they are omitted from the experiment). We used one source for tagging seed papers, and another source for tagging samples (cited papers and negative samples). 
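Completing the sketch, the measure averages the per-seed AUC of the one-dimensional classifier y ~ t, and the random baseline simply shuffles tag sets across papers; for cross-consistency one would pass different taggers for the seed paper and the samples. This assumes the sample_for_seed helper from the previous sketch and scikit-learn's roc_auc_score.

```python
import random
import numpy as np
from sklearn.metrics import roc_auc_score

def consistency(seeds, refs_of, same_date_pool, tags_of, k=10, m=2):
    """Average per-seed AUC of the one-dimensional classifier y ~ t (Algorithm 1).
    Uses sample_for_seed from the previous sketch."""
    aucs = [roc_auc_score(*sample_for_seed(s, refs_of, same_date_pool, tags_of, k, m))
            for s in seeds]
    return float(np.mean(aucs)), float(np.std(aucs))

def shuffled_tags(papers, tags_of):
    """Random baseline: each paper receives the tag set of some other paper,
    so the measure should concentrate near AUC = 0.5."""
    pool = [tags_of(p) for p in papers]
    random.shuffle(pool)
    lookup = dict(zip(papers, pool))
    return lambda p: lookup[p]
```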
First, there is clear difference between the consistency of the randomly generated tags and the real ones (Figure 2). As expected, the consistency of the random tags is concentrated at AUC = 0.5, with some outliers both above and below this value. In contrast, the consistency of the real tags is almost always above AUC = 0.5. An exception is tagging sources of low coverage like GNAT (see Table 1), where consistency is close to 0.5. Obviously when the coverage is low, most positive and negative samples have zero overlap with their seed papers, which lowers AUC. Unexpectedly, the consistency of high coverage machine generated sources like NEJI is on par with the human tags. Tags augmentation is explored on Figure 3. As expected, adding random tags to the manually generated ones does not noticeably change the consistency of the . However, adding "real" machine generated tags is improving our measure, which is another evidence that the measure itself is reasonable. The cross-consistency between manual tags and machine-generated ones is shown on Figure 4. Here we used different sources for seed papers and for samples. While crossconsistency is lower than the internal consistency of each tagger, is still is significantly higher than for random tags. In , a simple measure of consistency of tagging: whether it is predictive for citation links in a knowledge graph,-seems to be informative about the tagging process and can be used, along with other measures, to assess and evaluate it. Cross-consistency between different taggers can be used to estimate their similarity, especially when some taggers (e.g. manual tagging) are too expensive to run on a large set of papers. Cross consistency between manual tags and NEJI generated ones. X axis shows the source for the seed papers, Y axes shows the source for samples
A good tagger gives similar tags to a given paper and the papers it cites
1,785
scitldr
Recent research has intensively revealed the vulnerability of deep neural networks, especially for convolutional neural networks (CNNs) on the task of image recognition, through creating adversarial samples which `"slightly" differ from legitimate samples. This vulnerability indicates that these powerful models are sensitive to specific perturbations and cannot filter out these adversarial perturbations. In this work, we propose a quantization-based method which enables a CNN to filter out adversarial perturbations effectively. Notably, different from prior work on input quantization, we apply the quantization in the intermediate layers of a CNN. Our approach is naturally aligned with the clustering of the coarse-grained semantic information learned by a CNN. Furthermore, to compensate for the loss of information which is inevitably caused by the quantization, we propose the multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space. We enclose our design in a quantization layer named as the Q-Layer. The obtained on MNIST and Fashion-MNSIT datasets demonstrate that only adding one Q-Layer into a CNN could significantly improve its robustness against both white-box and black-box attacks. In recent years, along with the massive success of deep neural networks (DNNs) witnessed in many research fields, we have also observed their impressive failures when confronted with adversarial examples, especially for image recognition tasks. Prior work has demonstrated that an adversarial image can be easily synthesized by adding to a legitimate image a specifically crafted perturbation, which is typically imperceptible for human visual inspection. The generated adversarial image, however, is strikingly effective for causing convolutional neural network (CNN) classifiers to make extreme confident misclassification . This vulnerability of DNNs has stimulated the unceasing arms race between research on both attacking (; ; ; ; ;) and defending (; Samangouei et al. (2018b);; ) these powerful models. Among much existing work and a large variety of defense methods, several prior studies (; ;) have spent concerted efforts on defending adversarial attacks through input quantization. The principle idea of these methods is to use quantization to filter out small-scale adversarial perturbations. Recall that in prior work (; ;), it has been shown that the shallow layers of a CNN mostly capture fine-grained features including lines and curves. In the meantime, deeper layers learn coarse-grained yet semantically more critical features, which essentially discriminate different samples. Especially for classification tasks, it is natural to expect samples with the same classification label to share similar semantic information. As such, the semantic similarity between samples may be better revealed if we attend to their latent features learned by the intermediate layers of a CNN. Here we hypothesize that data points with similar semantic information should be distributed densely in the latent feature space. Thus, in order to more effectively filter out adversarial perturbations, we propose an alternative approach which quantizes the data representations embedded in the feature space produced by the intermediate layers of CNN classifiers. Interestingly, there have been other studies that develop similar approaches but for different purposes. For example, Wang et al. 
(2017; have applied k-means clustering on the intermediate feature maps of CNN models to discover explainable visual concepts. Recent methods, including VQ-VAE (van den) and SOM-VAE , were proposed to construct generative models for images and time-series data with discrete latent representations, which offer better explainability. However, to the best of our knowledge, the approach of applying intermediate layer quantization for CNN models has not been explored in the context of defending adversarial examples. In this work, we propose a quantization method that is realized by an extra intermediate layer, i.e., the quantization layer (Q-Layer). Our Q-Layer can be easily integrated into any existing architecture of CNN models. Specifically, the Q-Layer splits the mainstream of information that flows forward in a regular CNN model into two separate flows. Both flows share the same information passed by layers before the Q-Layer, but differ in the subsequent networks after the Q-Layer. These two flows produce two outputs, one is the quantized output, and the other is the Non-quantized output. Specifically, the non-quantized path is introduced to facilitate the gradient-based training, and to regularize the quantization operation. In the quantized path, we introduce non-differentiability to defend gradient-based attacks. It is important to note that, while gradient-based attacks cannot be directly applied to the quantized network, they can still be conducted by following the nonquantized path. Also, similar to most input transformation methods proposed for defending adversarial examples, our quantization will inevitably lose some feature information, which might be useful for classification. In order to compensate for this loss of information, we further propose multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space. In particular, we perform the projection by re-weighting the input-channels of CNN with trainable parameters. This projection process can be interpreted as performing feature extraction from different points of view, hence help retain the overall effectiveness of our method without causing much performance degradation for the model to be protected. Last but not least, our proposed method can be readily combined with other existing defenses, e.g., adversarial training , to jointly improve the adversarial robustness of a protected CNN classifier. In summary, we make the following contribution: • We propose a quantization-based defense method for the adversarial example problem by designing a quantization Layer (Q-Layer) which can be integrated into existing architectures of CNN models. Our implementation is online available 1. • We propose multi-head quantization to compensate for the possible information loss caused by the quantization process, and bring significant improvement to the adversarial robustness of an armed model under large perturbation. • We evaluate our method under several representative attacks on MNIST and Fashion-MNIST datasets. Our experiment demonstrate that the adoption of the Q-Layer can significantly enhance the robustness of a CNN against both black-box and white-box attack, and the robustness can be further improved by combining our method with adversarial training. 
2 RELATED WORK 2.1 ADVERSARIAL ATTACK Given a neural network classifier N with parameters denoted by w, N can be regarded as a function that takes an input x ∈ R dx and produces an classification label y, i.e., N (x; w) = y or N (x) = y for notation simplicity. In principle, the goal of the adversarial attack is to create a perturbation δ ∈ R dx to be added to a legitimate sample x for creating an adversarial example, i.e., x + δ, which causes the target model N to produce a wrong classification . Depending on different threat models, adversarial attacks are categorized as black-box attacks or white-box attacks . Specifically, it is commonly assumed in the white-box attack scenario, that an attacker knows every detail of the target model. This dramatically eases the generation of impactful adversarial examples, and has stimulated researchers to propose various white-box attack methods, including the fast gradient sign method (FGSM) , the basic iterative method (BIM) , the Carlini-Wagner (CW) attack , and DeepFool . On the contrary, in the black-box attack scenario, an attacker is typically assumed to be restricted for accessing detailed information, e.g., the architecture, values of parameters, training datasets, of the target model. There have been many black-box attack methods proposed in prior work (; ;). Representative black-box attacks typically exploit the transferability of the adversarial examples, hence is also referred to as transfer black-box attacks. Explicitly, in transfer black-box attacks, an attacker can train and maintain a substitute model, then conduct white-box attacks on the substitute model to generate adversarial samples which retain a certain level of attack power to the target model. Since both black-box and white-box attacks rely on the white-box assumption, in the following, we mainly introduce several representative white-box attacks, namely the FGSM, BIM and CW attacks, which are also employed in our experiments due to their wide adoption as the benchmark attack methods (Samangouei et al. (2018a; b) ). Fast gradient sign method (FGSM) proposed FGSM, in which δ is calculated by scaling the l ∞ norm of the gradient of the loss function L with respect to a legitimate input x as follows: where represents the maximally allowed scale of perturbation. This method represents a one-step approximation for the direction in the input space that affects the loss function most significantly. Basic iterative method (BIM) proposed the BIM attack, which iteratively performs the FGSM hence generates more impactful adversarial examples at the expense of computational efficiency. Carlini-Wagner (CW) attack aimed to find the smallest perturbation to fool the target model, by solving the following optimization problem: where c > 0 is a tunable positive constant and p represents different norms. In our experiment, we consider l ∞ norm. L is designed to satisfy that L(x, δ) < 0 if and only if N (x + δ) = N (x). There have been many different defense methods (; ; ; Samangouei et al. (2018a; b);; ) developed to battle with a large body of attacking methods. Here we briefly introduce three representative ones. Adversarial training In the same work that introduced also proposed to train a target model with an augmented training set, which contains both original samples and adversarial samples generated by the target model itself. Considering the low computational cost needed by FGSM, it is usually used for generating target-specific adversarial examples in adversarial training. 
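The FGSM perturbation formula is dropped in the extracted text; it is delta = epsilon * sign(grad_x L(x, y)), and BIM iterates small FGSM steps while projecting back into the l-infinity ball of radius epsilon. The PyTorch sketch below is a generic implementation of these two attacks (the 0.01 step size for BIM matches the paper's appendix, while the iteration count and the [0, 1] pixel range are assumptions), not the toolbox code used in the experiments.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """FGSM: x_adv = x + eps * sign(grad_x L(x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def bim(model, x, y, eps, step=0.01, iters=20):
    """BIM: iterated FGSM with step size `step`, projected back into the
    l_inf ball of radius eps around the clean input after every step."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv = fgsm(model, x_adv, y, step)
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```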
introduced a more general framework for generating adversarial examples and proposed to use iterative attacks, such as BIM, to produce adversarial samples for augmenting the training set. The ing defense is regarded as among the most effective defense methods . Input quantization proposed feature squeezing, which transforms the input by reducing the range of pixel values then employs a filter to smooth the input image. propose to encode the input with one-hot encoding or thermometer coding. add Gaussian noise to the input and cluster input pixels to perform quantization. Input purification Samangouei et al. (2018b) propose PixelDefend, where they used a PixelCNN (van den) to model the distribution of input pixels, and differentiate adversarial samples from legitimate samples. Their show that the pixel distribution of adversarial samples is indeed "unnatural" in comparison with legitimate samples. Then they proposed to purify input pixels to obtain a natural distribution and pass the purified input to the target model for further processing. Given a CNN which has the Q-Layer injected into the intermediate part (e.g., after the convolutional block and before the fully connected block or between two convolutional blocks) of the network, its input goes through the quantization path (Q-path), which consists of several steps of processing, including projection, quantization, and concatenation, then produce the quantized output for the subsequent network, as depicted in Figure 1. Note, that the quantization step forces pre-quantization data representation (outputs of the projection step), i.e., z e,j (j = 1, 2, 3), to be replaced by the post-quantization representation, i.e., z q,i, (i = 1, 2, 3). Since the quantization step introduces nondifferentiability in the Q-path, the backpropagation operation cannot be directly applied for this path. In order to make the enhanced model still trainable, we propose to concatenate the multihead projection and pass the concatenation to a separate subsequent network along the nonquantization path (E-path). It is natural to consider about having both paths connected to subsequent networks that are identical by sharing their weights, whereas, in our experiments, we observed inferior performance in comparison with having both paths connected to networks with different weights. Interested readers could refer to Appendix G for an ablation study which demonstrates this effect. Correspondingly, in Figure 1, we illustrate four backward paths that jointly contribute to updating the model. In particular, path 1, 3, and 4 constitute the backward path for the quantization operation, and path 2 represents the backward path for the non-quantization step. Given a neural network N, we can split it into two sub-network, denoted by N F and N B respectively, from somewhere in the middle of the network. Specifically, in this work, we split a regular CNN into an image encoder network which contains convolutional blocks and a fully-connected classifier network. Then given an input x, the output y of such a neural network is calculated by: We further assume the size of the intermediate output, i.e., z e = N F (x), as n×w ×h×d, where n is the batch size, w, h are the width and height of the intermediate output, d is the number of channels. We consider z e to be composed by n * w * h panels, where an arbitrary panel j, i.e., z j e, is a vector of length d. 
As shown in Figure 1, the Q-Layer takes z e as input panels, and outputs quantized panels After identifying the closest concept Q i * for z j e, we model the quantization operation as multiplying the entire concept matrix Q by an n c -dimensional one-hot encoded identification vector I i *. As previously mentioned, we pass z q and z e to two different subsequent networks, denoted by N Q and N E respectively. Accordingly, we refer y q and y e as the outputs produced by the Q-path and the E-path, respectively. Then given an input x, the final output is as follows: 3.3 MULTI-HEAD QUANTIZATION Our quantization step introduced above essentially describes a built-in online clustering process. However, clustering in high dimensional space is challenging, mainly due to the existence of many irrelevant dimensions . Furthermore, in order to find rich concepts that reflect the similarity and difference between samples, our quantization step needs to locate clusters that may be embedded in different sub-spaces. As such, we introduce sub-space clustering into our design and propose multi-head quantization to address these challenges as well as alleviate the possible information loss caused by the single-head quantization step. More specifically, we first project an input representation to K sub-spaces, where K is a pre-defined hyper-parameter. Then we perform single-head quantization within each sub-space. At last, we concatenate the quantization produced in all sub-spaces to obtain the final output. In the projection step, we apply a re-weight mechanism to d channels of an input representation. As mentioned before, we can consider a input batch as a (n×w×h) panels, where a panel is represented as a d-dimensional vector containing features generated by d channels. Given the j-th panel z j e ∈ R 1×d, let z j e,i be the i-th sub-space projection and W i ∈ R d×d, b i ∈ R 1×d be the projection parameters of the i-th sub-space, then we have: where denotes the Hadamard product. Denote the projection operation conducted in the i-th subspace by P i and the corresponding concept matrix in this sub-space by Q i. Then the operation of concatenating the outputs from all sub-spaces is represented as follows: Given an input x, the formulations for y q, y e in the case of the multi-head quantization is as follows: By following the design of VQ-VAE (van den) and SOM-VAE , we decompose the quantization loss into two separate loss terms and finally specify the following four terms (as shown in Figure 1) in the final training loss, i.e., where c 1, c 2, α, β are tunable hyper-parameters. Specifically, the first loss L Q represents the standard cross-entropy loss caused by the quantized output. This loss is specified to optimize the weight parameters of N Q and the Q-Layer to fit the classification task. As previously mentioned, the Qpath does not allow direct backpropagation, as such, in order to optimize N F, we set the second cross-entropy loss, i.e., L E, to train N F and N E by following the E-path. The last two loss terms are specified for optimizing the quantization performance by directly evaluating the distance between concept vectors, which can be regarded as clustering centroids, and learned data projections. In particular, the third loss L Q→E measures the l 2 distance between z q and a fixed z e. In other words, we apply the "stop-gradient" operation (denoted as sg) to z e to ensure that this loss only moves z q to z e instead of the opposite. 
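A minimal PyTorch sketch of the forward computation of the Q-Layer described above (the training losses, whose discussion continues in the next paragraph, are sketched separately further below): each head re-weights the d input channels, each panel is replaced by its nearest concept for the Q-path, and the un-quantized projections are concatenated for the E-path. Layer and parameter names are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class QLayer(nn.Module):
    """Sketch of the multi-head quantization layer (forward pass only).

    Input:  z_e of shape (n, w, h, d) -- one d-dimensional panel per location.
    Output: z_q for the Q-path and the concatenated projections for the E-path,
            each of shape (n, w, h, K*d).
    """

    def __init__(self, d, num_heads=4, num_concepts=64):
        super().__init__()
        self.K, self.d = num_heads, d
        self.W = nn.Parameter(torch.ones(num_heads, d))        # channel re-weights
        self.b = nn.Parameter(torch.zeros(num_heads, d))
        self.Q = nn.Parameter(torch.randn(num_heads, num_concepts, d))

    def forward(self, z_e):
        q_heads, p_heads = [], []
        for i in range(self.K):
            z_i = z_e * self.W[i] + self.b[i]                  # projection P_i (Hadamard re-weight)
            dists = torch.cdist(z_i.reshape(-1, self.d), self.Q[i])
            idx = dists.argmin(dim=1)                          # nearest concept per panel
            q_i = self.Q[i][idx].reshape(z_i.shape)            # quantized panels
            p_heads.append(z_i)
            q_heads.append(q_i)
        return torch.cat(q_heads, dim=-1), torch.cat(p_heads, dim=-1)
```

Note that the quantized output is non-differentiable with respect to the preceding layers (the argmin blocks gradients), which is why the concatenated projections feed a separate subsequent network along the E-path.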
This is a direct analogy to clustering as by minimizing L Q→E, all row vectors of the parameter matrix Q keep moving towards different clustering centers. Similarly, the fourth loss, i.e., L E→Q, measures the "commitment" of data projections to their corresponding clustering centroids. Here we apply the stop-gradient operation to z q while minimizing this loss. By optimizing L E→Q, we try to shape the learned projections of samples to be more similar to their corresponding concepts, hence eventually forming densely distributed clusters. Although we have followed prior work (van den) to decompose the quantization loss as described above and allowed more flexible control over L Q→E and L E→Q by tuning α and β, however, in our experiments, after tuning α and β with cross-validation, we observed the ratio of these two hyper-parameters, if within certain range (for example, 0.1 to 10), had insignificant influence on the final performance. A similar effect has also been reported by van den. As such, in Section 4, we mainly present the obtained by setting α = β. If the concept matrix is not properly initialized or optimized, a concept which is far away from all data projections may remain constant during training. We refer to concepts that behave as described above as "inactive" concepts and refer to other concepts that can be effectively updated as "active" concepts. This issue may severely impact the final performance when there are relatively many inactive concepts. To update inactive concepts, we design two updating strategies. One is to force an inactive concept to move its closest panel directly. The other is to re-initialize an inactive concept as its closest panel. In our experiments, we mainly use the first strategy due to its better effectiveness. Specifically, the first strategy is implemented by adding a special loss term, which calculates the distance between each inactive concept to its closest panel. We then optimize this loss after each training epoch. In our experiments, we considered two types of attacks, namely the black-box and the white-box attacks. Under each type of attacks, we evaluated the effectiveness of our proposed Q-Layer by comparing both the accuracy and robustness of two standard CNNs built with and without the Q-Layer obtained on a clean testing set and an adversarial testing set constructed from this clean testing set. The accuracy achieved by a CNN on the clean testing set reflects its standard generalization performance, while its robustness is represented by the accuracy obtained by this CNN on the adversarial testing set. In particular, under white-box attacks, for a CNN equipped with the Q-layer, the direct gradient-based attack can only be applied to the E-path, as mentioned previously. We refer to this type of attack as the E-path white-box attack. Furthermore, we assumed that an attacker might attack the non-differentiable Q-path by ignoring the quantization operation and directly concatenate z e to N Q to build a substitute attack path following equation 9, which we refer as the Q-path white-box attack. Moreover, under both types of attacks, we also evaluated CNNs built with and without the Q-Layer by considering the case where both models were enhanced by adversarial training. The models considered in this case are referred to as adversarial-trained models. Correspondingly, the models trained without adversarial training are referred to as raw-trained models. 
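Under the same caveats, the four-term training loss and the first reactivation strategy for inactive concepts can be sketched as follows, using .detach() as the stop-gradient operator. The weights c1 = c2 = 1 and alpha = beta = 0.001 are the values reported in the appendix; the bookkeeping of which concepts were selected during an epoch (used_counts) is an assumed detail.

```python
import torch
import torch.nn.functional as F

def q_layer_loss(y_q, y_e, y_true, z_q, z_p,
                 c1=1.0, c2=1.0, alpha=1e-3, beta=1e-3):
    """L = c1*L_Q + c2*L_E + alpha*||sg(z_e) - z_q||^2 + beta*||z_e - sg(z_q)||^2,
    with the stop-gradient sg(.) realized by .detach()."""
    L_Q = F.cross_entropy(y_q, y_true)            # quantized-path classification loss
    L_E = F.cross_entropy(y_e, y_true)            # non-quantized-path classification loss
    L_QE = F.mse_loss(z_q, z_p.detach())          # moves concepts toward the panels
    L_EQ = F.mse_loss(z_p, z_q.detach())          # commitment of panels to concepts
    return c1 * L_Q + c2 * L_E + alpha * L_QE + beta * L_EQ

def reactivation_loss(Q, panels, used_counts):
    """Strategy 1 for inactive concepts: after an epoch, pull every concept that
    was never selected toward its closest observed panel."""
    loss = torch.zeros(())
    for j in (used_counts == 0).nonzero(as_tuple=True)[0].tolist():
        loss = loss + torch.cdist(Q[j:j + 1], panels).min()
    return loss
```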
Note, in the following experiments, we only focus on the classification accuracy obtained by the Q-path by CNNs armed by the Q-Layer. We report the accuracy obtained by the E-path in Appendix D. The datasets adopted in our experiments include the MNIST and the Fashion-MNIST datasets 2. For each dataset, we had five subsets of samples, including a training set, an adversarial training set, a validation set, a clean testing set, and an adversarial testing set. A clean validation set was first constructed by randomly selecting 10,000 samples from the original training set, hence leaving the final training set contained the rest 50,000 training samples. Then we created adversarial examples from this clean validation set by using a pre-trained CNN A 3 and the FGSM method. This pre-trained CNN A only acted as a source model for creating adversarial validation samples and was not involved in either the training or testing process. The clean validation set was then mixed with its adversarial counterpart to build the final validation set. This mixed validation set was used for selecting models that achieved the best performance. Through this, we attempted to avoid the case of selecting models that can only perform well with clean samples. Instead, we tried to evaluate the performance of models that are naturally robust against adversarial examples. A comparison between the obtained following the above scheme and the obtained by models selected by using the clean validation set is provided in Appendix C. During the testing phase, depending on different attack scenarios, we constructed different adversarial testing sets. Specifically, under black-box attacks, we independently trained two source models for generating two sets of adversarial testing samples. One of the source models had the same structure as the CNN A but was trained with a different random initialization. The other source model, denoted by CNN B, had a structure that was different from that of CNN A. Both source models were used to simulate the scenario where an attacker generated adversarial examples from some unknown models. Under the white-box attacks, for each target model, its associated adversarial testing set was generated by directly attacking this model. Recall that we can apply our method in tandem with adversarial training. Therefore, on the MNIST, we created adversarial examples for adversarial training by setting = 0.2. For the mixed validation and adversarial testing set, we created adversarial examples by setting = 0.2/0.3. This setup is due to the observation that FGSM-based adversarial training is less effective against adversarial samples created with a larger scale of perturbation. As a , we attempted to simulate a more critical scenario in this setup. On the Fashion-MNIST, we noticed that by setting = 0.2/0.3, the ing adversarial images were severely distorted. As such, we created adversarial examples by setting = 0.1/0.2. In our experiments, we compared the robustness of three target models: CNN A, CNN with a basic Q-Layer (K = 1, n c = 64, denoted as Q-base), and CNN with a large Q-Layer (K = 4, n c = 64, denoted as Q-large). Q-base and Q-large share the same architecture with CNN A, but with an additional Q-Layer after the convolution blocks and before the fully-connect layer. We have also evaluated the case where the Q-Layer was inserted between two convolution blocks of CNN A. The are showed in Section 4.4. 
Besides, we provided the architectures of the target and substitute models, the hyper-parameters, and other implementation details in Appendix A/B. Table 1: Comparison of classification accuracy for different target models on the MNIST under black-box attack with source model CNN A. We use "N +adv" to refer adversarial-trained models. As previously mentioned in Section 4.1.1, we use two substitute models, i.e., CNN A and CNN B, to implement black-box attacks for a target model. Notably, in our experiments, we observed that the attacks conducted by using CNN A were generally more impactful than the attacks conducted by using CNN B. As such, due to space limitations, here we only demonstrate the obtained under attacks performed by CNN A and provide the obtained for CNN B in Appendix E. Also, we have adopted three representative attacks, namely the FGSM attack, the BIM attack, and the CW attack (introduced in Section 2). In Table 1a and Table 1b, we only present the obtained under the FGSM and BIM attacks, as we have observed that, despite being a powerful attack under the white-box scenario, the CW attack produced less effective black-box attacks. MNIST From Table 1a, we can observe that, by inserting the Q-Layer into the CNN, its robustness is improved significantly. Take the for raw-trained Q-large as an example, in comparison with the unarmed CNN, the accuracy under the FGSM attack of = 0.3 rises from 40.03 to 65.19, and the accuracy under BIM attack of = 0.3 rises from 16.02 to 67.01, which is even higher than the accuracy obtained by the adversarial-trained CNN. With adversarial training, the robustness of all models increases. In particular, while these models have similar robustness under small-scale perturbations, as the scale of perturbation increases, the robustness of the target CNN decreases more significantly than Q-large and Q-base, especially for the BIM attack. Table 1b 4, we show the obtained under black-box attacks for Fashion-MNIST. Similar to MNIST, we can observe improved robustness for raw-trained Q-base and Qlarge, in comparison with CNN. Also, after adversarial training, the robustness of the three models are close under small perturbation attack, while Q-base and Q-large outperform CNN under large perturbation. In addition to the shown above, we have also performed several sets of ablation study for exploring the influence of the number of sub-spaces and the number of concepts on black-box robustness and presented the in Appendix F. Visualizing the distributions of pixels and concepts Inspired by PixelDefend (Samangouei et al. (2018b) ), which proposed to use a PixelCNN van den to model and compare the distribution of pixels in clean and adversarial samples, here we use a PixelCNN to model and compare the distribution of concepts learned from clean and adversarial samples. As shown in Figure 2a /2b, for the MNIST dataset and under the FGSM attack, the distribution of pixels exhibits much more significant distortion than the distribution of learned concepts. This clearly indicates that the quantization effectively blocks the adversarial perturbations, hence ing in improved robustness. Table 2: Comparison of classification accuracy for different target models on the MNIST under white-box attack. As mentioned in Section 4.1, in order to conduct attacks even when the Q-path does not suffer from gradient-based attacks, we assume that an attacker might use a shortcut from z e to N Q to perform attacks. 
However, we observed in our experiments (shown in Appendix H) that this type of attack was generally weaker than the E-path attack. Therefore, here we only present obtained by the E-path attack in Table 2a and Table 2b. As shown in Table 2a, raw-trained Q-large and Q-base significantly outperform CNN. For attacks with small perturbations, adversarial trained CNN performs slightly better than Q-large and Q-base. However, as the scale of perturbation increases, Q-large and Q-base restore their superior robustness. Similar can also be observed for Fashion-MNIST, as shown in Table 2b. In Section 4.4, we also show that inserting the Q-Layer between two convolution blocks brings even stronger robustness against white-box attack. In the following experiments, we show the flexibility of varying the position where the Q-Layer is inserted. In particular, we refer to a CNN which has a Q-Layer inserted between its two convolution blocks as Q-inner. The Q-inner model has the same parameter setting as the Q-large (K = 4, n c = 64) model used in the previous experiments. Table 3: Comparison of classification accuracy for Q-large and Q-inner on the MNIST, under blackbox attack and white-box attacks. In Table 3a and Table 3b, we demonstrate the comparison under both black-box and whitebox attacks on the MNIST dataset. Under black-box attacks, we observe that Q-inner and Q-large achieve comparable robustness. However, under the white-box attacks, it can be seen that Q-inner performs much better than Q-large. These not only demonstrate the flexibility of the Q-Layer, but also indicate that applying the quantization at an early stage of the information propagation can be more effective for filtering out an adversarial perturbation, of which the scale has not been sufficiently amplified as the perturbation propagates to the deeper parts of the network. In this paper, we have designed and implemented a quantization layer (Q-Layer) to protection CNN classifiers from the adversarial attacks, and presented the experiment which show that, by simply inserting one Q-Layer into a regular CNN, its adversarial robustness under both white-box and black-box attacks obtains significant improvement. Moreover, we have combined our method in tandem with adversarial training. The empirical show that the Q-layer can make a CNN benefit more from adversarial training and even perform well under attacks with larger perturbations. One limitation of this work is due to the uncertainty introduced by the random initialization of concept matrix. This issue also exists in many other clustering algorithms. In this work, we alleviate the impact of this issue by reactivating inactivate concepts. Future work would pursue other approaches on constructing the concept matrix, e.g., regularizing the concept matrix with specific semantic constrains, and using the E-path as a learned index to retrieve information stored in the concept matrix, which acts as an external memory. Model A,B are two neural network architectures used in Samangouei et al. (2018a) for the MNIST and Fashion-MNIST datasets. In this paper, A is used for both target CNN and substitute model. B is only used for substitute model. When train CNN with a Q-Layer, we use RAdamOptimizer and set learning rate as 0.001. As for losses, we set c 1 = 1, c 2 = 1, α = 0.001, β = 0.001. The scale of α and β are chosen to make α · L Q→E and β · L E→Q have smaller but close magnitude of L Q and L E. 
The ratio of c 1 and c 2 do not have significant influence on the , for they are optimizing different part of the network. In our experiments, We didn't find significant change of when we tuned the ratio of α and β. As for inactive concepts updating strategy one, i.e., optimizing the distance between inactive concepts to its closest data point, we use AdamOptimizer and set learning rate as 0.001. Specifically, when training Q-inner on MNIST, we set α = 0.0001, β = 0.0001 and use the second inactive concept updating strategy (otherwise it could not converge), keeping other hyperparameters unchanged. All the hyper-parameter tuning are based on train set and validation set. We train each model three times with different random initialization and select the model with highest validation accuracy as the final model to test. We use Adversarial Robustness 360 Toolbox to generate adversarial samples. For each attack, we adjust, set batch size to be 128, leaving other hyper-paramters as default settings in Adversarial Robustness 360 Toolbox. Additionally, when implementing BIM attack, we set step to be 0.01. In this section, we compare two CNN models on MNIST, all share the same structure with CNN A, independently trained, but are selected on the clean validation set and mixed validation set, respectively. As shown in Table 5, by selecting models on the mixed validation set, we get a much more robust model, with slight sacrificing on clean accuracy. In this section, take MNIST as an example, we compare the Q-path classification accuracy and Epath classification accuracy under black-box attack and while-box attack. In the following, we've reported two classification accuracy scores for each of the Q-base and Q-large models. One score represents the accuracy obtained by following the Q-path, and the other represents the accuracy obtained by following the E-path. As shown in Table 6a and Table 6b, for both raw-trained Q-base and Q-large, the accuracy obtained by the Q-path is significantly higher than that obtained by the E-path, especially when is large, which prove the efficiency of quantization. We also note that in certain cases, after adversarial training, for Q-large and Q-base under small perturbations, the accuracy obtained by the Q-path becomes slightly lower than that obtained by the E-path. Recall that the E-path represents the passage in a regular CNN. Therefore, this difference between the Q-path and E-path may be due to the reason that the quantization inevitably causes loss of information, hence leading to a lower accuracy score for the Q-path. In this appendix, we compare the classification of different target models on MNIST and Fashion-MNIST, under black-box attack with source model CNN B, whose architecture is different from the target CNN. The on MNIST is showed in Table 7a. The on Fashion-MNIST is showed in Table 7b. Same as the observation in Table 1a and Table 1b, inserting a Q-Layer improves the robustness of the network. To further study the effect of K (the number of sub-spaces) and n c (the number of concepts), we insert Q-layer with K = 1, 2, 4 and n c = 16, 64, 128 to CNN and compare its robustness on MNIST under FGSM black-box attack with = 0.3. As shown in Table 8a, before adversarial training, more sub-spaces bring stronger robustness. After adversarial training, when n c = 16, models with larger K demonstrate higher accuracy; however, when n c = 64/128, the relationship of K and accuracy is not clear. 
We speculate that when n c = 16, quantizing features causes too much loss in information, therefore additional information introduced by adding more sub-spaces significantly improves accuracy. However, when n c is large enough to represent different concepts, the importance of K decreases after adversarial training. In the shared subsequent network setting, we let N Q and N E share parameters, while in the separated subsequent network setting, N Q and N E do not. We compare the robustness of two independent Qlarge models with separated subsequent networks and with shared subsequent network. We report the black-box attack on MNIST in Table 9 and white-box attack in Table 10. The proves that, the separated subsequent networks do help robustness under both black-box attack and white-box attack. Nonetheless, separated subsequent networks are harder to optimize, thus we recommend users to use separated subsequent networks if they are not very deep, and use shared subsequent networks otherwise. Table 10: Classification accuracy comparison of Q-large with separated subsequent networks and shared subsequent networks on MNIST, under white-box attack. H WHITE-BOX ATTACK ON Q-PATH As mentioned before, the original Q-path for Q-layer could not be attacked with gradient-based method. A clever attacker might build a shortcut path from z e to N Q to attack, which we refer as Q-path white-box attack. However, due to the differences between N Q and N E, attack through short-cut path is usually weak. In this section, we show the Q-path white-box attack and E-path white-box attack in Table 11. We could observe that, at most of the time, attacking Q-path reduces less accuracy than attack E-path. Table 11: Classification accuracy comparison of different target models on MNIST, under Q-path and E-path white-box attack. We underline the when Q-path attack is stronger than E-path attack.
We propose a quantization-based method which regularizes a CNN's learned representations to be automatically aligned with a trainable concept matrix, hence effectively filtering out adversarial perturbations.
1,786
scitldr
Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant \emph{linear} layers. Although this question is answered for the first three examples (for popular transformations, at-least), a full characterization of invariant and equivariant linear layers for graphs is not known. In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in case of edge-value graph data, is $2$ and $15$, respectively. More generally, for graph data defined on $k$-tuples of nodes, the dimension is the $k$-th and $2k$-th Bell numbers. Orthogonal bases for the layers are computed, including generalization to multi-graph data. The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs. From the theoretical point of view, our generalize and unify recent advancement in equivariant deep learning. In particular, we show that our model is capable of approximating any message passing neural network. Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable to state-of-the-art and to have better expressivity than previous invariant and equivariant bases. We consider the problem of graph learning, namely finding a functional relation between input graphs (more generally, hyper-graphs) G and corresponding targets T, e.g., labels. As graphs are common data representations, this task received quite a bit of recent attention in the machine learning community BID2; BID13;; BID38.More specifically, a (hyper-)graph data point G = (V, A) consists of a set of n nodes V, and values A attached to its hyper-edges 1. These values are encoded in a tensor A. The order of the tensor A, or equivalently, the number of indices used to represent its elements, indicates the type of data it represents, as follows: First order tensor represents node-values where A i is the value of the i-th node; Second order tensor represents edge-values, where A ij is the value attached to the (i, j) edge; in general, k-th order tensor encodes hyper-edge-values, where A i1,...,i k represents the value of the hyper-edge represented by (i 1, . . ., i k). For example, it is customary to represent a graph using a binary adjacency matrix A, where A ij equals one if vertex i is connected to vertex j and zero otherwise. We denote the set of order-k tensors by R n k.The task at hand is constructing a functional relation f (A) ≈ T, where f is a neural network. If T = t is a single output response then it is natural to ask that f is order invariant, namely it should produce the same output regardless of the node numbering used to encode A. For example, if we represent a graph using an adjacency matrix A = A ∈ R n×n, then for an arbitrary permutation matrix P and an arbitrary adjacency matrix A, the function f is order invariant if it satisfies f (P T AP) = f (A). If the targets T specify output response in a form of a tensor, T = T, then it is natural to ask that f is order equivariant, that is, f commutes with the renumbering of nodes operator acting on tensors. Using the above adjacency matrix example, for every adjacency matrix A and Figure 1: The full basis for equivariant linear layers for edge-value data A ∈ R n×n, for n = 5. 
The purely linear 15 basis elements, B µ, are represented by matrices n 2 × n 2, and the 2 bias basis elements (right), C λ, by matrices n × n, see equation 9.every permutation matrix P, the function f is equivariant if it satisfies f (P T AP) = P T f (A)P. To define invariance and equivariance for functions acting on general tensors A ∈ R n k we use the reordering operator: P A is defined to be the tensor that from renumbering the nodes V according to the permutation defined by P. Invariance now reads as f (P A) = f (A); while equivariance means f (P A) = P f (A). Note that the latter equivariance definition also holds for functions between different order tensors, f: R n k → R n l.Following the standard paradigm of neural-networks where a network f is defined by alternating compositions of linear layers and non-linear activations, we set as a goal to characterize all linear invariant and equivariant layers. The case of node-value input A = a ∈ R n was treated in the pioneering works of BID39; BID26. These works characterize all linear permutation invariant and equivariant operators acting on node-value (i.e., first order) tensors, R n. In particular it it shown that the linear space of invariant linear operators L: R n → R is of dimension one, containing essentially only the sum operator, L(a) = α1T a. The space of equivariant linear operators L: DISPLAYFORM0 The general equivariant tensor case was partially treated in where the authors make the observation that the set of standard tensor operators: product, element-wise product, summation, and contraction are all equivariant, and due to linearity the same applies to their linear combinations. However, these do not exhaust nor provide a full and complete basis for all possible tensor equivariant linear layers. In this paper we provide a full characterization of permutation invariant and equivariant linear layers for general tensor input and output data. We show that the space of invariant linear layers L: R n k → R is of dimension b(k), where b(k) is the k-th Bell number. The k-th Bell number is the number of possible partitions of a set of size k; see inset for the case k = 3. Furthermore, the space of equivariant linear layers DISPLAYFORM1 Remarkably, this dimension is independent of the size n of the node set V. This allows applying the same network on graphs of different sizes. For both types of layers we provide a general formula for an orthogonal basis that can be readily used to build linear invariant or equivariant layers with maximal expressive power. Going back to the example of a graph represented by an adjacency matrix A ∈ R n×n we have k = 2 and the linear invariant layers L: Figure 1 shows visualization of the basis to the linear equivariant layers acting on edge-value data such as adjacency matrices. DISPLAYFORM2 In BID12 the authors provide an impressive generalization of the case of node-value data to several node sets, V 1, V 2,..., V m of sizes n 1, n 2,..., n m. Their goal is to learn interactions across sets. That is, an input data point is a tensor A ∈ R n1×n2×···×nm that assigns a value to each element in the cartesian product V 1 × V 2 × · · · × V m. Renumbering the nodes in each node set using permutation matrices P 1,..., P m (resp.) in a new tensor we denote by P 1:m A. Order invariance means f (P 1:m A) = f (A) and order equivariance is f (P 1:m A) = P 1:m f (A). BID12 introduce bases for linear invariant and equivariant layers. 
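The counting above can be made concrete: enumerating the set partitions of the 2k index positions gives exactly b(2k) patterns (15 for k = 2), and one way to realize an orthogonal basis is to let each partition gamma define an indicator tensor B_gamma whose entries are 1 exactly where the equality pattern of the multi-index matches gamma. The sketch below follows this construction; it is an illustration of the counting argument rather than a reproduction of the paper's code.

```python
import itertools
import numpy as np

def set_partitions(elements):
    """Yield all partitions of a list; their number is the Bell number b(len(elements))."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):                  # put `first` into an existing block
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller                      # or open a new block

def basis_element(partition, k, n):
    """Indicator tensor B_gamma in R^{n^{2k}}: entry 1 iff the equality pattern
    of the multi-index (i_1, ..., i_{2k}) matches the partition exactly."""
    gamma = {frozenset(block) for block in partition}
    B = np.zeros((n,) * (2 * k))
    for idx in itertools.product(range(n), repeat=2 * k):
        blocks = {}
        for pos, val in enumerate(idx):
            blocks.setdefault(val, set()).add(pos)
        if {frozenset(b) for b in blocks.values()} == gamma:
            B[idx] = 1.0
    return B

partitions = list(set_partitions(list(range(4))))      # 2k = 4 index positions
assert len(partitions) == 15                           # Bell number b(4) = 15
```

Reshaping each B_gamma to an n^2 x n^2 matrix yields the 15 operators of Figure 1; the 2 bias elements correspond to the b(2) = 2 partitions of the output indices alone.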
Although the layers in BID12 satisfy the order invariance and equivariance, they do not exhaust all possible such layers in case some node sets coincide. For example, if V 1 = V 2 they have 4 independent learnable parameters where our model has the maximal number of 15 parameters. Our analysis allows generalizing the multi-node set case to arbitrary tensor data over V 1 × V 2 × · · · × V m. Namely, for data points in the form of a tensor A ∈ R n k 1 1 ×n k 2 2 ×···×n km m. The tensor A attaches a value to every element of the Cartesian product DISPLAYFORM3 2, that is, k 1 -tuple from V 1, k 2 -tuple from V 2 and so forth. We show that the linear space of invariant linear layers DISPLAYFORM4, while the equivariant linear layers L: DISPLAYFORM5 We also provide orthogonal bases for these spaces. Note that, for clarity, the discussion above disregards biases and features; we detail these in the paper. In appendix C we show that our model is capable of approximating any message-passing neural network as defined in BID9 which encapsulate several popular graph learning models. One immediate corollary is that the universal approximation power of our model is not lower than message passing neural nets. In the experimental part of the paper we concentrated on possibly the most popular instantiation of graph learning, namely that of a single node set and edge-value data, e.g., with adjacency matrices. We created simple networks by composing our invariant or equivariant linear layers in standard ways and tested the networks in learning invariant and equivariant graph functions: (i) We compared identical networks with our basis and the basis of BID12 and showed we can learn graph functions like trace, diagonal, and maximal singular vector. The basis in BID12, tailored to the multi-set setting, cannot learn these functions demonstrating it is not maximal in the graph-learning (i.e., multi-set with repetitions) scenario. We also demonstrate our representation allows extrapolation: learning on one size graphs and testing on another size; (ii) We also tested our networks on a collection of graph learning datasets, achieving that are comparable to the state-of-the-art in 3 social network datasets. Our work builds on two main sub-fields of deep learning: group invariant or equivariant networks, and deep learning on graphs. Here we briefly review the relevant works. Invariance and equivariance in deep learning. In many learning tasks the functions that we want to learn are invariant or equivariant to certain symmetries of the input object description. Maybe the first example is the celebrated translation invariance of Convolutional Neural Networks (CNNs) BID20 BID19; in this case, the image label is invariant to a translation of the input image. In recent years this idea was generalized to other types of symmetries such as rotational symmetries BID3 b; BID35. BID3 introduced Group Equivariant Neural Networks that use a generalization of the convolution operator to groups of rotations and reflections; BID35; also considered rotational symmetries but in the case of 3D shapes and spherical functions. showed that any equivariant layer is equivalent to a certain parameter sharing scheme. If we adopt this point of view, our work reveals the structure of the parameter sharing in the case of graphs and hyper-graphs. In another work, show that a neural network layer is equivariant to the action of some compact group iff it implements a generalized form of the convolution operator. 
BID37 suggested certain group invariant/equivariant models and proved their universality. To the best of our knowledge these models were not implemented. Learning of graphs. Learning of graphs is of huge interest in machine learning and we restrict our attention to recent advancements in deep learning on graphs. BID10; BID28 introduced Graph Neural Networks (GNN): GNNs hold a state (a real valued vector) for each node in the graph, and propagate these states according to the graph structure and learned parametric functions. This idea was further developed in BID22 that use gated recurrent units. Following the success of CNNs, numerous works suggested ways to define convolution operator on graphs. One promising approach is to define convolution by imitating its spectral properties using the Laplacian operator to define generalized Fourier basis on graphs BID2. Multiple follow-up works BID13 BID6 BID16 BID21 suggest more efficient and spatially localized filters. The main drawback of spectral approaches is that the generalized Fourier basis is graph-dependent and applying the same network to different graphs can be challenging. Another popular way to generalize the convolution operator to graphs is learning stationary functions that operate on neighbors of each node and update its current state BID1 BID7 BID11 BID25 BID32 BID31. This idea generalizes the locality and weight sharing properties of the standard convolution operators on regular grids. As shown in the important work of BID9, most of the the above mentioned methods (including the spectral methods) can be seen as instances of the general class of Message Passing Neural Networks. In this section we characterize the collection of linear invariant and equivariant layers. We start with the case of a single node set V of size n and edge-value data, that is order 2 tensors A = A ∈ R n×n. As a typical example imagine, as above, an adjacency matrix of a graph. We set a bit of notation. Given a matrix X ∈ R a×b we denote vec(X) ∈ R ab×1 its column stack, and by brackets the inverse action of reshaping to a square matrix, namely [vec(X)] = X. Let p denote an arbitrary permutation and P its corresponding permutation matrix. Let L ∈ R 1×n 2 denote the matrix representing a general linear operator L: R n×n → R in the standard basis, then L is order invariant iff Lvec(P T AP) = Lvec(A). Using the property of the Kronecker product that vec(XAY) = Y T ⊗ Xvec(A), we get the equivalent equality DISPLAYFORM0. Since the latter equality should hold for every A we get (after transposing both sides of the equation) that order invariant L is equivalent to the equation DISPLAYFORM1 For equivariant layers we consider a general linear operator L: R n×n → R n×n and its cor- DISPLAYFORM2 Using the above property of the Kronecker product again we get DISPLAYFORM3 Noting that P T ⊗ P T is an n 2 × n 2 permutation matrix and its inverse is P ⊗ P we get to the equivalent equality P ⊗ P LP T ⊗ P T vec(A) = Lvec(A). As before, since this holds for every A and using the properties of the Kronecker product we get that L is order equivariant iff for all permutation matrices P DISPLAYFORM4 From equations 1 and 2 we see that finding invariant and equivariant linear layers for the order-2 tensor data over one node set requires finding fixed points of the permutation matrix group represented by Kronecker powers P ⊗ P ⊗ · · · ⊗ P of permutation matrices P. As we show next, this is also the general case for order-k tensor data A ∈ R n k over one node set, V. 
That is, DISPLAYFORM5 for every permutation matrix P, where DISPLAYFORM6 k is the matrix of an invariant operator; and in equation 4, L ∈ R n k ×n k is the matrix of an equivariant operator. We call equations 3,4 the fixed-point equations. To see this, let us add a bit of notation first. Let p denote the permutation corresponding to the permutation matrix P. We let P A denote the tensor that from expressing the tensor A after renumbering the nodes in V according to permutation P. Explicitly, the (p(i 1), p(i 2),..., p(i k))-th entry of P A equals the (i 1, i 2, . . ., i k)-th entry of A. The matrix that corresponds to the operator P in the standard tensor basis e (i1) ⊗ · · · ⊗ e (i k) is the Kronecker power P T ⊗k = (P T) ⊗k. Note that vec(A) is exactly the coordinate vector of the tensor A in this standard basis and therefore we have vec(P A) = P T ⊗k vec(A). We now show: DISPLAYFORM7 Proof. Similarly to the argument from the order-2 case, let L ∈ R 1×n k denote the matrix corresponding to a general linear operator L: R n k → R. Order invariance means DISPLAYFORM0 Using the matrix P T ⊗k we have equivalently LP T ⊗k vec(A) = Lvec(A) which is in turn equivalent to P ⊗k vec(L) = vec(L) for all permutation matrices P. For order equivariance, let L ∈ R n k ×n k denote the matrix of a general linear operator L: DISPLAYFORM1 Similarly to above this is equivalent to LP T ⊗k vec(A) = P T ⊗k Lvec(A) which in turn leads to P ⊗k LP T ⊗k = L, and using the Kronecker product properties we get P ⊗2k vec(L) = vec(L). We have reduced the problem of finding all invariant and equivariant linear operators L to finding all solutions L of equations 3 and 4. Although the fixed point equations consist of an exponential number of equations with only a polynomial number of unknowns they actually possess a solution space of constant dimension (i.e., independent of n).To find the solution of P ⊗ vec(X) = vec(X), where X ∈ R n, note that P ⊗ vec(X) = vec(Q X), where Q = P T. As above, the tensor Q X is the tensor ed from renumbering the nodes in V using permutation Q. Equivalently, the fixed-point equations we need to solve can be formulated as Q X = X, ∀Q permutation matricesThe permutation group is acting on tensors X ∈ R n with the action X → Q X. We are looking for fixed points under this action. To that end, let us define an equivalence relation in the index space of tensors R n, namely in [n], where with a slight abuse of notation (we use light brackets) we set DISPLAYFORM0 The equality pattern equivalence relation partitions the index set [n] into equivalence classes, the collection of which is denoted [n] / ∼. Each equivalence class can be represented by a unique partition of the set [] where each set in the partition indicates maximal set of identical values. Let us exemplify. For = 2 we have two equivalence classes γ 1 = {{1}, {2}} and γ 2 = {{1, 2}}; γ 1 represents all multi-indices (i, j) where i = j, while γ 2 represents all multi-indices (i, j) where i = j. For = 4, there are 15 equivalence classes DISPLAYFORM1 For each equivalence class γ ∈ [n] / ∼ we define an order-tensor B γ ∈ R n by setting Proof. Let us first show that: X is a solution to equation 7 iff X is constant on equivalence classes of the equality pattern relation, ∼. Since permutation q: [n] → [n] is a bijection the equality patterns of a = (i 1, i 2, . . ., i) ∈ [n] and q(a) = (q(i 1), q(i 2),..., q(i)) ∈ [n] are identical, i.e., a ∼ q(a). Taking the a ∈ [n] entry of both sides of equation 7 gives X q(a) = X a. 
Now, if X is constant on equivalence classes then in particular it will have the same value at a and q(a) for all a ∈ [n] and permutations q. Therefore X is a solution to equation 7. For the only if part, consider a tensor X for which there exist multi-indices a ∼ b (with identical equality patterns) and X a = X b then X is not a solution to equation 7. Indeed, since a ∼ b one can find a permutation q so that b = q(a) and using the equation above, X b = X q(a) = X a which leads to a contradiction. To finish the proof note that any tensor X, constant on equivalence classes, can be written as a linear combination of B γ, which are merely indicators of the equivalence class. Furthermore, the collection B γ have pairwise disjoint supports and therefore are an orthogonal basis. Combining propositions 1 and 2 we get the characterization of invariant and equivariant linear layers acting on general k-order tensor data over a single node set V: DISPLAYFORM2 DISPLAYFORM3 Biases Theorem 1 deals with purely linear layers, that is without bias, i.e., without constant part. Nevertheless extending the previous analysis to constant layers is straight-forward. First, any constant layer R n k → R is also invariant so all constant invariant layers are represented by constants c ∈ R. For equivariant layers L: R n k → R n k we note that equivariance means C = L(P A) = P L(A) = P C. Representing this equation in matrix form we get P T ⊗k vec(C) = vec(C). This shows that constant equivariant layers on one node set acting on general k-order tensors are also characterized by the fixed-point equations, and in fact have the same form and dimensionality as invariant layers on k-order tensors, see equation 3. Specifically, their basis is B λ, λ ∈ [n] k / ∼. For example, for k = 2, the biases are shown on the right in figure 1.Features. It is pretty common that input tensors have vector values (i.e., features) attached to each hyper-edge (k-tuple of nodes) in V, that is A ∈ R n k ×d. Now linear invariant R n k ×d → R 1×d or equivariant R n k ×d → R n k ×d layers can be formulated using a slight generalization of the previous analysis. The operator P A is defined to act only on the nodal indices, i.e., i 1,..., i k (the first k indices). Explicitly, the (p(i 1), p(i 2),..., p(i k), i k+1 )-th entry of P A equals the (i 1, i 2, . . ., i k, i k+1)-th entry of A.Invariance is now formulated exactly as before, equation 5, namely Lvec(P A) = Lvec(A). The matrix that corresponds to P acting on R n k ×d in the standard basis is P T ⊗k ⊗ I d and therefore DISPLAYFORM4 Since this is true for all A we have (P DISPLAYFORM5, using the properties of the Kronecker product. Equivariance is written as in equation 6, [Lvec(P A)] = P [Lvec(A)]. In matrix form, the equivariance equation becomes DISPLAYFORM6, since this is true for all A and using the properties of the Kronecker product again we get to DISPLAYFORM7 The basis (with biases) to the solution space of these fixed-point equations is defined as follows. We use a, Note that these basis elements are similar to the ones in equation 8 with the difference that we have different basis tensor for each pair of input j and output j feature channels. 
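The characterization above can be verified numerically. The sketch below (a NumPy illustration of the definitions above, not the paper's TensorFlow code) enumerates the equality-pattern equivalence classes of [n]^k, whose counts reproduce the Bell numbers b(2) = 2 and b(4) = 15, builds the indicator tensors B_γ for k = 4 (the 15 elements of Figure 1 once reshaped to n² × n² matrices), and checks that each B_γ is a fixed point of node renumbering and that the collection is orthogonal.

```python
import numpy as np
from itertools import product

def pattern(idx):
    """Canonical label of the equality pattern of a multi-index,
    e.g. (3, 7, 3) and (1, 5, 1) both map to (0, 1, 0)."""
    seen = {}
    return tuple(seen.setdefault(i, len(seen)) for i in idx)

def equality_classes(n, k):
    return sorted({pattern(idx) for idx in product(range(n), repeat=k)})

n = 5
print([len(equality_classes(n, k)) for k in (1, 2, 3, 4)])   # [1, 2, 5, 15]: the Bell numbers

# Indicator tensors B_gamma for k = 4, i.e. the basis of equivariant layers on order-2 data.
k = 4
basis = []
for gamma in equality_classes(n, k):
    B = np.zeros((n,) * k)
    for idx in product(range(n), repeat=k):
        B[idx] = float(pattern(idx) == gamma)
    basis.append(B)

# Fixed point of the renumbering operator: relabelling the nodes leaves each B_gamma unchanged.
perm = np.random.default_rng(1).permutation(n)
for B in basis:
    assert np.array_equal(B[np.ix_(perm, perm, perm, perm)], B)

# Pairwise disjoint supports, hence an orthogonal basis.
for i, Bi in enumerate(basis):
    for Bj in basis[i + 1:]:
        assert np.sum(Bi * Bj) == 0.0
```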
DISPLAYFORM8 An invariant (equation 10a)/ equivariant (equation 10b) linear layer L including the biases can be written as follows for input A ∈ R n k ×d: DISPLAYFORM9 where the learnable parameters are w ∈ R b(k)×d×d and b ∈ R d for a single linear invariant layer R n k ×d → R d; and it is w ∈ R b(2k)×d×d and b ∈ R b(k)×d for a single linear equivariant layer DISPLAYFORM10 The natural generalization of theorem 1 to include bias and features is therefore: DISPLAYFORM11 with basis elements defined in equation 9; equation 10a (10b) show the general form of such layers. Since, by similar arguments to proposition 2, the purely linear parts B and biases C in equation 9 are independent solutions to the relevant fixed-point equations, theorem 2 will be proved if their number equals the dimension of the solution space of these fixed-point equations, namely dd b(k) for purely linear part and d for bias in the invariant case, and dd b(2k) for purely linear and d b(k) for bias in the equivariant case. This can be shown by repeating the arguments of the proof of proposition 2 slightly adapted to this case, or by a combinatorial identity we show in Appendix B.For example, figure 1 depicts the 15 basis elements for linear equivariant layers R n×n → R n×n taking as input edge-value (order-2) tensor data A ∈ R n×n and outputting the same dimension tensor. The basis for the purely linear part are shown as n 2 × n 2 matrices while the bias part as n × n matrices (far right); the size of the node set is |V| = n = 5.Mixed order equivariant layers. Another useful generalization of order equivariant linear layers is to linear layers between different order tensor layers, that is, L: R n k → R n l, where l = k. For example, one can think of a layer mapping an adjacency matrix to per-node features. For simplicity we will discuss the purely linear scalar-valued case, however generalization to include bias and/or general feature vectors can be done as discussed above. Consider the matrix L ∈ R n l ×n k representing the linear layer L, using the renumbering operator, P, order equivariance is equivalent to [Lvec(P A)] = P [Lvec(A)]. Note that while this equation looks identical to equation 6 it is nevertheless different in the sense that the P operator in the l.h.s. of this equation acts on k-order tensors while the one on the r.h.s. acts on l-order tensor. Still, we can transform this equation to a matrix equation as before by remembering that P T ⊗k is the matrix representation of the renumbering operator P acting on k-tensors in the standard basis. Therefore, repeating the arguments in proof of proposition 1, equivariance is equivalent to P ⊗(k+l) vec(L) = vec(L), for all permutation matrices P. This equation is solved as in section 3.1. The corresponding bases to such equivariant layers are computed as in equation 9b, with the only difference that now DISPLAYFORM12 Implementation details. We implemented our method in Tensorflow BID0. The equivariant linear basis was implemented efficiently using basic row/column/diagonal summation operators, see appendix A for details. The networks we used are composition of 1 − 4 equivariant linear layers with ReLU activation between them for the equivariant function setting. For invariant function setting we further added a max over the invariant basis and 1 − 3 fully-connected layers with ReLU activations. Synthetic datasets. 
We tested our method on several synthetic equivariant and invariant graph functions that highlight the differences in expressivity between our linear basis and the basis of BID12. Given an input matrix data A ∈ R n×n we considered: (i) projection onto the symmetric matrices 1 2 (A+A T); (ii) diagonal extraction diag(diag(A)) (keeps only the diagonal and plugs zeros elsewhere); (iii) computing the maximal right singular vector arg max v 2 =1 Av 2; and (iv) computing the trace tr(A). Tasks (i)-(iii) are equivariant while task (iv) is invariant. We created accordingly 4 datasets with 10K train and 1K test examples of 40×40 matrices; for tasks (i), (ii), (iv) we used i.i.d. random matrices with uniform distribution in; we used mean-squared error (MSE) as loss; for task (iii) we random matrices with uniform distribution of singular values in [0, 0.5] and spectral gap ≥ 0.5; due to sign ambiguity in this task we used cosine loss of the form l(x, y) = 1 − x/ x, y/ y 2. We trained networks with 1, 2, and 3 hidden layers with 8 feature channels each and a single fullyconnected layer. Both our models as well as BID12 use the same architecture but with different bases for the linear layers. TAB1 logs the best mean-square error of each method over a set of hyper-parameters. We add the MSE for the trivial mean predictor. This experiment emphasizes simple cases in which the additional parameters in our model, with respect to BID12, are needed. We note that BID12 target a different scenario where the permutations acting on the rows and columns of the input matrix are not necessarily the same. The assumption taken in this paper, namely, that the same permutation acts on both rows and columns, gives rise to additional parameters that are associated with the diagonal and with the transpose of the matrix (for a complete list of layers for the k = 2 case see appendix A). In case of an input matrix that represents graphs, these parameters can be understood as parameters that control self-edges or node features, and incoming/outgoing edges in a different way. TAB2 shows the of applying the learned equivariant networks from the above experiment to graphs (matrices) of unseen sizes of n = 30 and n = 50. Note, that although the network was trained on a fixed size, the network provides plausible generalization to different size graphs. We note that the generalization of the invariant task of computing the trace did not generalize well to unseen sizes and probably requires training on different sizes as was done in the datasets below. Graph classification. We tested our method on standard benchmarks of graph classification. We use 8 different real world datasets from the benchmark of BID36: five of these datasets originate from bioinformatics while the other three come from social networks. In all datasets the adjacency matrix of each graph is used as input and a categorial label is assigned as output. In the bioinformatics datasets node labels are also provided as inputs. These node labels can be used in our framework by placing their 1-hot representations on the diagonal of the input. TAB3 specifies the for our method compared to state-of-the-art deep and non-deep graph learning methods. We follow the evaluation protocol including the 10-fold splits of BID40. For each dataset we selected learning and decay rates on one random fold. In all experiments we used a fixed simple architecture of 3 layers with features accordingly. 
The last equivariant layer is followed by an invariant max layer according to the invariant basis. We then add two fully-connected hidden layers with features. We compared our to seven deep learning methods: DGCNN BID40, PSCN BID25, DCNN BID1, ECC BID31, DGK BID36, DiffPool BID38 and CCN. We also compare our to four popular graph kernel methods: Graphlet Kernel (GK) BID29,Random Walk Kernel (RW) BID34, Propagation Kernel (PK) BID24, and Weisfeiler-lehman kernels (WL) BID30 and two recent feature-based methods: Family of Graph Spectral Distance (FGSD) BID33 and Anonymous Walk Embeddings (AWE) BID15. Our method achieved comparable to the state-of-the-art on the three social networks datasets, and slightly worse than state-of-the-art on the biological datasets. Lastly, we provide a generalization of our framework to data that is given on tuples of nodes from a collection of node sets V 1, V 2,..., V m of sizes n 1, n 2,..., n m (resp.), namely A ∈, where for simplicity we do not discuss features that can be readily added as discussed in section 3. Note that the case of k i = l i = 1 for all i = 1,..., m is treated in BID12. The reordering operator now is built out of permutation matrices P i ∈ R ni×ni (p i denotes the permutation), i = 1,..., m, denoted P 1:m, and defined as follows: the (p 1 (a 1), p 2 (a 2),..., p m (a m))-th entry of the tensor P 1:m A, where DISPLAYFORM0 ki is defined to be the (a 1, a 2, . . ., a m)-th entry of the tensor A. Rewriting the invariant and equivariant equations, i.e., equation 5, 6, in matrix format, similarly to before, we get the fixed-point equa- where DISPLAYFORM1 The number of these tensors is m i=1 b(i) for invariant layers and m i=1 b(k i + l i) for equivariant layers. Since these are all linear independent (pairwise disjoint support of non-zero entries) we need to show that their number equal the dimension of the solution of the relevant fixed-point equations above. This can be done again by similar arguments to the proof of proposition 2 or as shown in appendix B. To summarize: DISPLAYFORM2 Orthogonal bases for these layers are listed in equation 11. This research was supported in part by the European Research Council (ERC Consolidator Grant, "LiftMatch" 771136) and the Israel Science Foundation (Grant No. 1830/17).We normalize each operation to have unit max operator norm. We note that in case the input matrix is symmetric, our basis reduces to 11 elements in the first layer. If we further assume the matrix has zero diagonal we get a 6 element basis in the first layer. In both cases our model is more expressive than the 4 element basis of BID12 and as the output of the first layer (or other inner states) need not be symmetric nor have zero diagonal the deeper layers can potentially make good use of the full 15 element basis. We prove a useful combinatorial fact as a corollary of proposition 2. This fact will be used later to easily compute the dimensions of more general spaces of invariant and equivariant linear layers. We use the fact that if V is a representation of a finite group G then DISPLAYFORM0 is a projection onto V G = {v ∈ V | gv = v, ∀g ∈ G}, the subspace of fixed points in V under the action of G, and consequently that tr(φ) = dim(V G) (see BID8 for simple proofs). Proposition 3. The following formula holds: DISPLAYFORM1 where Π n is the matrix permutation group of dimensions n × n. Proof. In our case, the vector space is the space of order-k tensors and the group acting on it is the matrix group G = P ⊗k | P ∈ Π m. 
dim(V G) = tr(φ) = 1 |G| g∈G tr(g) = 1 n! P ∈Πn tr(P ⊗k) = 1 n! P ∈Πn tr(P) k,where we used the multiplicative law of the trace with respect to Kronecker product. Now we use proposition 2 noting that in this case V G is the solution space of the fixed-point equations. Therefore, dim(V G) = b(k) and the proof is finished. Recall that for a permutation matrix P, tr(P) = | {i ∈ [n] s.t. P fixes e i } |. Using this, we can interpret the equation in proposition 3 as the k-th moment of a random variable counting the number of fixed points of a permutation, with uniform distribution over the permutation group. Proposition 3 proves that the k-th moment of this random variable is the k-th Bell number. We can now use proposition 3 to calculate the dimensions of two linear layer spaces: (i) Equivariant layers acting on order-k tensors with features (as in 3); and (ii) multi-node sets (as in section 5).Theorem 2. The space of invariant (equivariant) linear layers R n k,d → R d (R n k ×d → R n k ×d) is of dimension dd b(k) + d (for equivariant: dd b(2k) + d b(k)) with basis elements defined in equation 9; equations 10a (10b) show the general form of such layers. Proof. We prove the dimension formulas for the invariant case. The equivariant case is proved similarly. The solution space for the fixed point equations is the set V G for the matrix group G = P ⊗k ⊗ I d ⊗ I d | P ∈ Π n. Using the projection formula 12 we get that the dimension of the solution subspace, which is the space of invariant linear layers, can be computed as follows: DISPLAYFORM2 5. The last step is to apply an MLP to the last d + d feature channels of the diagonal of Z 4.After this last step we have Z The errors i depend on the approximation error of the MLP to the relevant function, the previous errors i−1 (for i > 1), and uniform bounds as-well as uniform continuity of the approximated functions. Corollary 1. Our model can represent any message passing network to an arbitrary precision on compact sets. In other words, in terms of universality our model is at-least as powerful as any message passing neural network (MPNN) that falls into the framework of BID9.
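Proposition 3 above can also be checked by brute force: for n ≥ k, the k-th moment of the number of fixed points of a uniformly random permutation equals the k-th Bell number. A small self-contained sketch of this check (assuming only the statement of the proposition):

```python
import numpy as np
from itertools import permutations
from math import factorial

def bell(k):
    """k-th Bell number via the Bell triangle: 1, 1, 2, 5, 15, 52, ..."""
    row = [1]
    for _ in range(k):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

def fixed_point_moment(n, k):
    """E[(number of fixed points of a uniform random permutation of n elements)^k], exactly."""
    total = sum(sum(i == p_i for i, p_i in enumerate(p)) ** k for p in permutations(range(n)))
    return total / factorial(n)

n = 6
for k in (1, 2, 3, 4):
    assert np.isclose(fixed_point_moment(n, k), bell(k))
print("fixed-point moments match the Bell numbers for n >= k")
```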
The paper provides a full characterization of permutation invariant and equivariant linear layers for graph data.
In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations. The ability to predict future outcomes of hypothetical decisions is a key aspect of intelligence. One approach to capture this ability is via model-based reinforcement learning (MBRL) (; ; ;). In this framework, an agent builds an internal representation s t by sensing an environment through observational data y t (such as rewards, visual inputs, proprioceptive information) and interacts with the environment by taking actions a t according to a policy π(a t |s t). The sensory data collected is used to build a model that typically predicts future observations y >t from past actions a ≤t and past observations y ≤t. The ing model may be used in various ways, e.g. for planning (; a), generation of synthetic training data , better credit assignment , learning useful internal representations and belief states , or exploration via quantification of uncertainty or information gain . Within MBRL, commonly explored methods include action-conditional, next-step models (; ; ; ; ; ; ; ; ; ; ; ;). However, it is often not tractable to accurately model all the available information. This is both due to the fact that conditioning on high-dimensional data such as images would require modeling and generating images in order to plan over several timesteps , and to the fact that modeling images is challenging and may unnecessarily focus on visual details which are not relevant for acting. These challenges have motivated researchers to consider simpler models, henceforth referred to as partial models, i.e. models which are neither conditioned on, nor generate the full set of observed data (; ;). In this paper, we demonstrate that partial models will often fail to make correct predictions under a new policy, and link this failure to a problem in causal reasoning. Prior to this work, there has been a growing interest in combining causal inference with RL research in the directions of non-model based bandit algorithms (; ; ; ;) and causal discovery with RL . Contrary to previous works, in this paper we focus on model-based approaches and propose a novel framework for learning better partial models. A key insight of our methodology is the fact that any piece of information about the state of the environment that is used by the policy to make a decision, but is not available to the model, acts as a confounding variable for that model. As a , the learned model is causally incorrect. Using such a model to reason may lead to the wrong about the optimal course of action as we demonstrate in this paper. We address these issues of partial models by combining general principles of causal reasoning, probabilistic modeling and deep learning. Our contributions are as follows. 
• We identify and clarify a fundamental problem of partial models from a causal-reasoning perspective and illustrate it using simple, intuitive Markov Decision Processes (MDPs) (Section 2). • In order to tackle these shortcomings we examine the following question: What is the minimal information that we have to condition a partial model on such that it will be causally correct with respect to changes in the policy? (Section 4) • We answer this question by proposing a family of viable solutions and empirically investigate their effects on models learned in illustrative environments (simple MDPs and 3D environments). Our method is described in Section 4 and the experiments are in Section 5. We illustrate the issues with partial models using a simple example. Consider the FuzzyBear MDP shown in Figure 1 (a): an agent at initial state s 0 transitions into an encounter with either a teddy bear or a grizzly bear with 50% random chance, and can then take an action to either hug the bear or run away. In order to plan, the agent may learn a partial model q θ (r 2 |s 0, a 0, a 1) that predicts the reward r 2 after performing actions {a 0, a 1} starting from state s 0. This model is partial because it conditions on a sequence of actions without conditioning on the intermediate state s 1. The model is suitable for deterministic environments, but it will have problems on stochastic environments, as we shall see. Such a reward model is usually trained on the agent's experience which consists of sequences of past actions and associated rewards. Now, suppose the agent wishes to evaluate the sequence of actions {a 0 = visit forest, a 1 = hug} using the average reward under the model q θ (r 2 |s 0, a 0, a 1). From Figure 1 (a), we see that the correct average reward is 0.5 × 1 + 0.5 × (−0.5) = 0.25. However, if the model has been trained on past experience in which the agent has mostly hugged the teddy bear and ran away from the grizzly bear, it will learn that the sequence {visit forest, hug} is associated with a reward close to 1, and that the sequence {visit forest, run} is associated with a reward close to 0. Mathematically, the model will learn the following conditional probability: where s 1 is the state corresponding to either teddy bear or grizzly bear. In the above expression, p(s 1 |s 0, a 0) and p(r 2 |s 1, a 1) are the transition and reward dynamics of the MDP, and π(a 1 |s 1) is the agent's behavior policy that generated its past experience. As we can see, the behavior policy affects what the model learns. The fact that the reward model q θ (r 2 |s 0, a 0, a 1) is not robust to changes in the behavior policy has serious implications for planning. For example, suppose that instead of visiting the forest, the agent could have chosen to stay at home as shown in Figure 1 (b). In this situation, the optimal action is to stay home as it gives a reward of 0.6, whereas visiting the forest gives at most a reward of 0.5×1+0.5×0 = 0.5. However, an agent that uses the above reward model to plan will overestimate the reward of going into the forest as being close to 1 and choose the suboptimal action. One way to avoid this bias is to use a behavior policy that doesn't depend on the state s 1, i.e. π(a 1 |s 1) = π(a 1). Unfortunately, this approach does not scale well to complex environments as it requires an enormous amount of training data for the behavior policy to explore interesting states. A better approach is to make the model robust to changes in the behavior policy. 
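To put numbers on this example: the converged partial model learns the observational conditional, which, in terms of the quantities named above, is proportional to the sum over s_1 of p(s_1|s_0, a_0) π(a_1|s_1) p(r_2|s_1, a_1), whereas acting independently of s_1 gives the correct open-loop average, the sum over s_1 of p(s_1|s_0, a_0) p(r_2|s_1, a_1). The sketch below evaluates both for FuzzyBear under an assumed behaviour policy that hugs the teddy bear with probability 0.9 and the grizzly bear with probability 0.1; the exact numbers depend on this assumed policy.

```python
import numpy as np

# FuzzyBear after a0 = "visit forest": s1 is teddy or grizzly with probability 0.5 each.
p_s1 = np.array([0.5, 0.5])                                            # [teddy, grizzly]
reward = {"hug": np.array([1.0, -0.5]), "run": np.array([0.0, 0.0])}   # r2 given (s1, a1)

# Assumed behaviour policy pi(a1 | s1): mostly hugs the teddy bear, mostly runs from the grizzly.
pi = {"hug": np.array([0.9, 0.1]), "run": np.array([0.1, 0.9])}

for a1 in ("hug", "run"):
    # Observational conditional: conditioning on a1 tilts the belief over s1 via pi.
    posterior = p_s1 * pi[a1] / np.sum(p_s1 * pi[a1])
    observational = float(np.sum(posterior * reward[a1]))
    # Interventional average: choosing a1 independently of s1 leaves p(s1 | s0, a0) untouched.
    interventional = float(np.sum(p_s1 * reward[a1]))
    print(f"{a1}: observational {observational:+.3f}, interventional {interventional:+.3f}")
# hug: observational +0.850 (overly optimistic), interventional +0.250 (the correct open-loop value)
```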
Fundamentally, the problem is due to causally incorrect reasoning: the model learns the observational conditional p(r 2 |s 0, a 0, a 1) instead of the interventional conditional given by: where the do-operator do(·) means that the actions are performed independently of the unspecified context (i.e. independently of s 1). The interventional conditional is robust to changes in the policy and is a more appropriate quantity for planning. In contrast, the observational conditional quantifies the statistical association between the actions a 0, a 1 and the reward r 2 regardless of whether the actions caused the reward or the reward caused the actions. In Section 3, we review relevant concepts from causal reasoning, and based on them we propose solutions that address the problem. Finally, although using p(r 2 |s 0, do(a 0), do(a 1)) leads to causally correct planning, it is not optimal either: it predicts a reward of 0.25 for the sequence {visit forest, hug} and 0 for the sequence {visit forest, run}, whereas the optimal policy obtains a reward of 0.5. The optimal policy makes the decision after observing s 1 (teddy bear vs grizzly bear); it is closed-loop as opposed to open-loop. The solution is to make the intervention at the policy level instead of the action level, as we discuss in the following sections. Many applications of machine learning involve predicting a variable y (target) from a variable x (covariate). A standard way to make such a prediction is by fitting a model q θ (y|x) to a dataset of (x, y)-pairs. Then, if we are given a new x and the data-generation process hasn't changed, we can expect that a well trained q θ (y|x) will make an accurate prediction of y. (a) (b) (c) (d) (e) (f) In many situations however, we would like to use the data to make different kinds of predictions. For example, what prediction of y should we make, if something in the environment has changed, or if we set x ourselves? In the latter case x didn't come from the original data-generation process. This may cause problems in our prediction, because there may be unobserved variables u, known as confounders, that affected both x and y during the data-generation process. That is, the actual process was of the form p(u)p(x|u)p(y|x, u) where we only observed x and y as shown in Figure 2 (b). Under this assumption, a model q θ (y|x) fitted on (x, y)-pairs will converge to the target p(y|x) ∝ p(u)p(x|u)p(y|x, u)du. However, if at prediction time we set x ourselves, the actual distribution of y will be p(y|do(x)) = p(u)p(y|x, u)du. This is because setting x ourselves changes the original graph from Figure 2 (b) to the one in Figure 2 (c). Interventions: The operation of setting x to a fixed value x 0 independently of its parents, known as the do-operator , changes the data-generation process to p(u)δ(x − x 0)p(y|x, u), where δ(x − x 0) is the delta-function. As explained above, this in a different target distribution p(u)p(y|x 0, u)du, which we refer to as p(y|do(x = x 0)), or simply p(y|do(x)) when x 0 is implied. Let par j be the parents of x j. The do-operator is a particular case of the more general concept of an intervention: given a generative process p(x) = j p j (x j |par j), an intervention is defined as a change that replaces one or more factors by new factors. For example, the intervention The do-operator is a "hard" intervention whereby we replace a node by a delta function; that is, p(, where x /k denotes the collection of all variables except x k . 
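As an illustration of the do-operator, the following sketch samples from a toy factored process p(u) p(x|u) p(y|x, u) and applies the hard intervention do(x = 1) by replacing the factor p(x|u) with a delta function; the observational conditional p(y|x = 1) and the interventional p(y|do(x = 1)) come out clearly different because u confounds x and y. All numerical choices here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, do_x=None):
    """Toy structural model over binary variables: u -> x, and (x, u) -> y."""
    u = rng.binomial(1, 0.5, n)
    if do_x is None:
        x = rng.binomial(1, np.where(u == 1, 0.9, 0.1))    # observational factor p(x | u)
    else:
        x = np.full(n, do_x)                               # do(x): the factor becomes a delta
    y = rng.binomial(1, 0.1 + 0.4 * x + 0.4 * u)           # p(y | x, u)
    return u, x, y

n = 500_000
_, x, y = sample(n)
print("p(y=1 | x=1)     ~", round(y[x == 1].mean(), 3))             # ~0.86, confounded by u
print("p(y=1 | do(x=1)) ~", round(sample(n, do_x=1)[2].mean(), 3))  # ~0.70
```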
In general, for graphs of the form of Figure 2 (b), p(y|x) does not equal p(y|do(x)). As a consequence, it is not generally possible to recover p(y|do(x)) using observational data, i.e. (x, y)-pairs sampled from p(x, y), regardless of the amount of data available or the expressivity of the model. However, recovering p(y|do(x)) from observational data alone becomes possible if we assume additional structure in the data-generation process. Suppose there exists another observed variable z that blocks all paths from the confounder u to the covariate x as shown in Figure 2 (d). This variable is a particular case of the concept of a backdoor (, Chapter 3.3) and is said to be a backdoor for the pair x − y. In this case, we can express p(y|do(x)) entirely in terms of distributions that can be obtained from the observational data as: This formula holds as long as p(x|z) > 0 and is referred to as backdoor adjustment. The same formula applies when z blocks the effect of the confounder u on y as in Figure 2 (f). More generally, we can use p(z) and p(y|z, x) from Equation to compute the marginal distribution p(y) under an arbitrary intervention of the form p(x|z) → ψ(x|z) on the graph in Figure 2 (b). We refer to the new marginal as p do(ψ) (y) and obtain it by: A similar formula can be derived when there is a variable z blocking the effect of x on y, which is known as a frontdoor, shown in Figure 2 (e). Derivations for the backdoor and frontdoor adjustment formulas are provided in Appendix A. Causally correct models: Given data generated by an underlying generative process p(x), we say that a learned model q θ (x) is causally correct with respect to a set of interventions I if the model remains accurate after any intervention in I. That is, if q θ (x) ≈ p(x) and q θ (x) is causally correct with respect to I, then Backdoor-adjustment and importance sampling: Given a dataset of N tuples (z n, x n, y n) generated from the joint distribution p(u)p(z|u)p(x|z)p(y|x, u), we could alternatively approximate the marginal distribution p do(ψ) (y) after an intervention p(x|z) → ψ(x|z) by fitting a distribution q θ (y) to maximize the re-weighted likelihood: where w(x, z) = ψ(x|z)/p(x|z) are the importance weights. While this solution is a mathematically sound way of obtaining p do(ψ) (y), it requires re-fitting of the model for any new ψ(x|z). Moreover, if ψ(x|z) is very different from p(x|z) the importance weights w(x, z) will have high variance. By fitting the conditional distribution p(y|z, x) and using Equation we can avoid these limitations. Connection to MBRL: As we will see in much greater detail in the next section, there is a direct connection between partial models in MBRL and the causal concepts discussed above. In MBRL we are interested in making predictions about some aspect of the future (observed frames, rewards, etc.); these would be the dependent variables y. Such predictions are conditioned on actions which play the role of the covariates x. When using partial models, the models will not have access to the full state of the policy and so the policy's state will be a confounding variable u. Any variable in the computational graph of the policy that mediates the effect of the state in the actions will be a backdoor with respect to the action-prediction pair. We consider environments with a hidden state e t and dynamics specified by an unknown transition probability of the form p(e t |e t−1, a t−1). 
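The backdoor adjustment can likewise be checked numerically. In the sketch below (illustrative distributions again), z blocks the path from the confounder u to the covariate x; the naive conditional p(y|x) is biased, while the adjusted estimate, summing p(z) p(y|z, x) over z and computed purely from observational samples of (z, x, y), recovers the interventional p(y|do(x)) known by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Data-generation process p(u) p(z|u) p(x|z) p(y|x,u): z is a backdoor for the pair x - y.
u = rng.binomial(1, 0.5, n)
z = np.where(rng.random(n) < 0.9, u, 1 - u)            # z blocks the influence of u on x
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))
y = rng.binomial(1, 0.2 + 0.5 * u + 0.2 * x)

for x0 in (0, 1):
    naive = y[x == x0].mean()                            # p(y=1 | x=x0): confounded by u
    adjusted = sum((z == z0).mean() * y[(z == z0) & (x == x0)].mean() for z0 in (0, 1))
    truth = 0.2 + 0.5 * 0.5 + 0.2 * x0                   # p(y=1 | do(x=x0)), by construction
    print(f"x={x0}: naive {naive:.3f}, backdoor-adjusted {adjusted:.3f}, true do() {truth:.3f}")
```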
At each step t, the environment receives an action a t−1, updates its state to e t and produces observable data y t ∼ p(y t |e t) which includes a reward r t and potentially other forms of data such as images. An agent with internal state s t interacts with the environment via actions a t produced by a policy π(a t |s t) and updates its state using the observations y t+1 by s t+1 = f s (s t, a t, y t+1), where f s can for instance be implemented with an RNN. The agent will neither observe nor model the environment state e t; it is a confounder on the data generation process. Figure 3 (a) illustrates the interaction between the agent and the environment. Consider an agent at an arbitrary point in time and whose current state 2 is s 0, and assume we are interested in generative models that can predict the outcome 3 y T of a sequence of actions {a 0, . . ., a T −1} on the environment, for an arbitrary time T. A first approach, shown in Figure 3 (c), would be to use an action-conditional autoregressive model of observations; initializing the model state h 1 to a function of (s 0, a 0), sample y 1 from p(.|h 1), update the state h 2 = f s (h 1, a 1, y 1), sample y 2 from p(.|h 2), and so on until y T is sampled. In other words, the prediction of observation y T is conditioned on all available observations (s 0, y <T) and actions a <T. This approach is for instance found in . In contrast, another approach is to predict observation y T given actions but using no observation data beyond s 0. This family of models, sometimes called models with overshoot, can for instance be found in (b; ; ; ; ; ;) and is illustrated in Figure 3 (d). The model deterministically updates its state h t+1 = f h (h t, a t), and generates y T from p(.|h T). An advantage of those models is that they can generate y T directly without generating intermediate observations. More generally, we define a partial view v t as any function of past observations y ≤t and actions a ≤t. We define a partial model as a generative model whose predictions are only conditioned on s 0, the partial views v <t and the actions a <t: to generate y T, the agent generates v 1 from p(.|h 1), updates the state to h 2 = f h (h 1, v 1, a 1), and so on, until it has computed h T and sampled y T from p(.|h T). Both previous examples can be seen as special cases of a partial model, with v t = y t and v t = ∅ respectively. NCPM architecture (overshoot) CPM architecture A subtle consequence of conditioning the model only on a partial view v t is that the variables y <T become confounders for predicting y T, in addition to the state of the environment which is always a confounder. In Section 3 we showed that the presence of confounders makes it impossible to correctly predict the target distribution after changes in the covariate distribution. In the context of partial models, the covariates are the actions a <T executed by the agent and the agent's initial state s 0, whereas the targets are the predictions y T we want to make at time T. A corollary of this is that the learned partial model may not be robust against changes in the policy and thus cannot be used to make predictions under different policies π, and therefore should not be used for planning. 
In Section 3 we saw that if there was a variable blocking the influence of the confounders on the covariates (a backdoor) or a variable blocking the influence of the covariates on the targets (a frontdoor), it may be possible to make predictions under a broad range of interventions if we learn the correct components from data, e.g. using the backdoor-adgustment formula in Equation. In general it may not be straightforward to apply the backdoor-adjustment formula because we may not have enough access to the graph details to know which variable is a backdoor. In reinforcement learning however, we can fully control the agent's graph. This means that we can choose any node in the agent's computational graph that is between its internal state s t and the produced action a t as a backdoor variable for the actions. Given the backdoor z t, the action a t is conditionally independent of the agent state s t. To make partial models causally correct, we propose to choose the partial view v t to be equal to the backdoor z t. This allows us to learn all components we need to make predictions under an arbitrary new policy. In the rest of this paper we will refer to such models as Causal Partial Models (CPM), and all other partial models will be henceforth referred to as Non-Causal Partial Models (NCPM). We assume the backdoor z t is sampled from a distribution m(z t |s t) and the policy is a distribution conditioned on z t, π(a t |z t). This is illustrated in Figure 3 (b) and described in more details in Table 1 (right). We can perform a simulation under a new policy ψ(a t |h t, z t) by directly applying the backdoor-adjustment formula, Equation, to the RL graph as follows: where the components p(z t |h t) and p(y t+1 |h t+1) with h t+1 = f h (h t, z t, a t) can be learned from observational data produced by the agent. Modern deep-learning agents (e.g. as in ; ;) have complex graphs, which means that there are many possible choices for the backdoor z t. So an important question is: what are the simplest choices of z t? Below we list a few of the simplest choices we can use and discuss their advantages and trade-offs; more choices for z t are listed in Appendix C. Agent state: Identifying z t with the agent's state s t can be very informative about the future, but this comes at a cost. As part of the generative model, we have to learn the component p(z t |h t). This may be difficult in practice when z t = s t due to the high-dimensionality of s t, hence and performing simulations would be computationally expensive. Policy probabilities: The z t can be the vector of probabilities produced by a policy when we have discrete actions. The vector of probabilities is informative about the underlying state, if different states produce different probabilities. Intended action: The z t can be the intended action before using some form of exploration, e.g. ε-greedy exploration. This is an interesting choice when the actions are discrete, as it is simple to model and, when doing planning, in a low branching factor which is independent of the complexity of the environment (e.g. in 3D, visually rich environments). The causal correction methods presented in this section can be applied to any partial model. In our experiments, we will focus on environment models of the form proposed by. These models consist of a deterministic "backbone" RNN that integrates actions and other contextual information. 
The states of this RNN are then used to condition a generative model of the observed data y t, but the observations are not fed back to the model autoregressively, as shown in Table 1 (left). This corresponds to learning a model of the form p(y t |s 0, a 0, . . ., a t−1). We will compare this against our proposed model, which allows us to simulate the outcome of any policy using Equation. In this setup, a policy network produces z t before an action a t. For example, if the z t is the intended action before ε-exploration, z t will be sampled from a policy m(z t |s t) and the executed action a t will then be sampled from an ε-exploration policy π(a t |z t) = (1 − ε)δ zt,at + ε 1 na, where n a is the number of actions and ε is in. Acting with the sampled actions is diagrammed in Figure 3 (b) and the mathematical description is provided in Table 1. The model components p(z t |h t) and p(y t |h t) are trained via maximum likelihood on observational data collected by the agent. The partial model does not need to model all parts of the y t observation. For example, a model to be used for planning can model just the reward and the expected return. In any case, it is imperative that we use some form of exploration to ensure that π(a t |z t) > 0 for all a t and z t as this is a necessary to allow the model to learn the effects of the actions. The model usage is summarized in Algorithms 1 and 2 in Appendix D and we discuss the model properties in Appendix E. We analyse the effect of the proposed corrections on a variety of models and environments. When the enviroment is an MDP, such as the FuzzyBear MDP from Section 2, we can compute exactly both the non-causal and the causal model directly from the MDP transition matrix and the behavior policy. In Section 5.1, we compare the optimal policies computed from the non-causal and the causal model via value iteration. For this analysis, we used the intended-action backdoor, since it's compatible with a tabular representation. In Section 5.2, we repeat the analysis using a learned model instead. For these experiments, we used the policy-probabilities backdoor. The optimal policies corresponding to a given model were computed using a variant of the Dyna algorithm or expectimax . Finally in Section 5.3, we provide an analysis of the model rollouts in a visually rich 3D environment. Given an MDP and a behavior policy π, the optimal values V * M (π) of planning based on a NCPM and CPM are derived in Appendix I. The theoretical analysis of the MDP does not use empirically trained models from the policy data, but rather assumes that the transition probabilities of the MDP and the policy from which training data are collected are accurately learned by the model. This allows us to isolate the quality of planning using the model from how accurate the model is. Optimal behavior policy: The optimal policy of the FuzzyBear MDP (Figure 1(a) ) is to always hug the teddy bear and run away from the grizzly bear. Using training data from this behavior policy, we show in Figure 7 (Appendix I) the difference in the optimal planning based on the NCPM (Figure 3(d) ) and CPM with the backdoor z t being the intended action (Figure 3(e) ). Learning from optimal policies with ε-exploration, the converged causal model is independent of the exploration parameter ε. We see effects of varying ε on learned models in Figure 8 (Appendix I). 
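The following sketch mirrors the data-collection and simulation procedures summarized in Algorithms 1 and 2 (Appendix D) for the intended-action backdoor: during acting, z_t is the intended action and the executed action is drawn from the ε-exploration distribution π(a_t|z_t) = (1 − ε) δ(z_t, a_t) + ε/n_a; during simulation, z_t is sampled from the learned p(z_t|h_t) and the new policy ψ may condition on it. The components m, p(z|h), f_h and p(y|h) below are placeholder callables standing in for the learned networks, so this is a structural sketch rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, eps = 3, 0.1

def exploration_policy(z):
    """pi(a | z) = (1 - eps) * delta(z, a) + eps / n_a, with z the intended action."""
    p = np.full(n_actions, eps / n_actions)
    p[z] += 1.0 - eps
    return p

def collect_step(s, m):
    """One acting step: sample the intended action z from m(z|s), then the executed action a."""
    z = rng.choice(n_actions, p=m(s))
    a = rng.choice(n_actions, p=exploration_policy(z))
    return z, a

def simulate(h, p_z, f_h, p_y, psi, steps):
    """Model-based rollout under a new policy psi(a | h, z) using the learned backdoor model."""
    ys = []
    for _ in range(steps):
        z = rng.choice(n_actions, p=p_z(h))       # sample the backdoor from the model
        a = rng.choice(n_actions, p=psi(h, z))    # the new policy may depend on h and z
        h = f_h(h, z, a)
        ys.append(p_y(h))
    return ys

# Toy stand-ins, only so that the sketch executes end to end.
uniform = lambda _: np.full(n_actions, 1.0 / n_actions)
print(collect_step(s=None, m=uniform))
print(simulate(h=0, p_z=uniform, f_h=lambda h, z, a: h + 1, p_y=float,
               psi=lambda h, z: exploration_policy(z), steps=4))
```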
Sub-optimal behavior policies: We empirically show the difference between the causal and noncausal models when learning from randomly generated policies. For each policy, we derive the corresponding converged model M (π) using training data generated by the policy. We then compute the optimal value of V * M (π) using this model. On FuzzyBear (Figure 4(a) ), we see that the causal model always produces a value greater than or equal to the value of the behavior policy. The value estimated by the causal model can always be achieved in the real environment. If the behavior policy was already good, the simulation policy used inside the model can reproduce the behavior policy by respecting the intended action. If the behavior policy is random, the intended action is uninformative about the underlying state, so the simulation policy has to choose the most rewarding action, independently of the state. And if the behavior policy is bad, the simulation policy can choose the opposite of the intended action. This allows to find a very good simulation policy, when the behavior policy is very bad. To further improve the policy, the search for better policies should be done also in state s 1. And the model can then be retrained on data from the improved policies. If we look at the non-causal model, we see that it displays the unfortunate property of becoming more unrealistically optimistic as the behavior policy becomes better. Similarly, the worse the policy is, i.e. the lower V π env is, the non-causal model becomes less able to improve the policy. On AvoidFuzzyBear (Figure 4(b) ), the optimal policy is to stay at home. Learning from data generated by random policies, the causal model indeed always prefers to stay home with any sampled intentions, ing in a constant evaluation for all policies. On the other hand, the non-causal model gives varied, overly-optimistic evaluations, while choosing the wrong action (visit forest). We previously analyzed the case where the transition probabilities and theoretically optimal policy are known. We will now describe experiments with learned models trained by gradient descent, using the same training setup as described in Section 4. In this experiment we demonstrate that we can learn the optimal policy purely from off-policy experience using a general n-step-return algorithm derived from a causal model. The algorithm is described in detail in Appendix F. In short, we simulate experiences from the partial model, and use policy gradient to learn the optimal policy on these experiences as if they were real experiences (this is possible since the policy gradient only needs action probabilities, values, predicted rewards and ends of episodes). We compare a non-causal model and a causal model where the backdoor z t is the intended action. For the environment we use AvoidFuzzyBear (Figure 1(b) ). We collect experiences that are sub-optimal: half the time the agent visits the forest and half the time it stays home, but once in the forest it acts optimally with probability 0.9. This is meant to simulate situations either where the agent has not yet learned the optimal policy but is acting reasonably, or where it is acting with a different objective (such as exploration or intrinsic reward), but would like to derive the optimal policy. We expect the non-causal model to choose the sub-optimal policy of visiting the forest, since the sequence of actions of visiting the forest and hugging typically yields high reward. This is what we indeed find, as shown in Figure 5(a). 
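The qualitative behaviour of Figure 4(a) can be reproduced in closed form for FuzzyBear. The sketch below is our own tabular illustration of the analysis above, with the behaviour policy parameterized by its hug probabilities in the two bear states: it computes the open-loop value believed by the non-causal model, the true value of executing that open-loop plan, and the value obtained by the causal model with the intended-action backdoor when the simulation policy picks the best action for each sampled intention; the latter value is always achievable in the real MDP.

```python
import numpy as np

# FuzzyBear after visiting the forest: s1 is teddy or grizzly with probability 0.5 each.
p_s1 = np.array([0.5, 0.5])                      # [teddy, grizzly]
r = np.array([[1.0, 0.0],                        # r(s1, a1): rows teddy/grizzly,
              [-0.5, 0.0]])                      # columns hug/run

def planning_values(q_hug_teddy, q_hug_grizzly):
    """Values implied by converged models trained on pi(hug|teddy), pi(hug|grizzly)."""
    pi = np.array([[q_hug_teddy, 1 - q_hug_teddy],
                   [q_hug_grizzly, 1 - q_hug_grizzly]])      # pi(a1 | s1)
    p_a1 = p_s1 @ pi                                         # marginal over executed actions

    # Non-causal model: observational E[r | a1], tilted towards states where a1 was taken.
    ncpm = (p_s1[:, None] * pi * r).sum(axis=0) / p_a1
    best = int(np.argmax(ncpm))
    believed, real = float(ncpm[best]), float(p_s1 @ r[:, best])   # belief vs true open-loop value

    # Causal model with the intended action as backdoor z: simulate with the best
    # action for each sampled intention; this value is achievable in the real MDP.
    p_s1_given_z = (p_s1[:, None] * pi) / p_a1               # columns indexed by z
    cpm = float((p_a1 * (p_s1_given_z.T @ r).max(axis=1)).sum())
    return believed, real, cpm

for q in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:               # good, random, bad behaviour policy
    b, v, c = planning_values(*q)
    print(q, f"NCPM believes {b:+.3f}, actually gets {v:+.3f}; CPM achieves {c:+.3f}")
```

For the good behaviour policy this prints a believed non-causal value of +0.85 against a real open-loop value of +0.25, while the causal value of +0.425 is realized by simply following the sampled intention, matching the discussion above.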
We see that the non-causal model indeed achieves a sub-optimal reward (less than 0.6), but believes that it will achieve a high reward (more than 0.6). On the other hand, the causal model achieves the optimal reward and correctly predicts that it will achieve the corresponding value. AvoidFuzzyBear with Expectimax: In this experiment, we used the classical expectimax search . On the simple AvoidFuzzyBear MDP, it is enough to use a search depth of 3: a decision node, a chance node and a decision node. The behavior policy was progressively improving as the model was trained. In Figure 5 (b), we see the for the different models. Only the non-causal model was not able to solve the task. Planning with the non-causal model consistently preferred the stochastic path with the fuzzy bear, as predicted by our theoretical analysis with value iteration. The models with clustered probabilities and clustered observations approximate modeling of the probabilities or observations. These models are described in Appendix H. The setup for these experiments is similar to , where the agent is trained using the IMPALA algorithm , and the model is trained alongside the agent via ELBO optimization on the data collected by the agent. The architecture of the agent and model is based on and follows the description in Table 1 (right). For these experiments, the backdoor z t was chosen to be the policy probabilities, and p(z t |h t) was parametrized as a mixture of Dirichlet distributions. See Appendix J for more details. We demonstrate the effect of the causal correction on the 3D T-Maze environment where an agent walks around in a 3D world with the goal of collecting the reward blocks (food). The layout of this environment is shown in Figure 6 (a). From our previous , we expect NCPMs to be unrealistically optimistic. This is indeed what we see in Figure 6 (b). Compared to NCPM, CPM with generated z generates food at the end of a rollout with around 50% chance, as expected given that the environment randomly places the food on either side. In Figure 6 (c) and Figure 6 In all rollouts depicted, the top row shows the real frames observed by an agent following a fixed policy (Ground Truth, GT). Bottom 5 rows indicate model rollouts, conditioned on 3 previous frames without revealing the location of the food. CPM and NCPM differ in their state-update formula and action generation (see Table 1), but frame generation yt ∼ p(yt|ht) is the same for both, as introduced in. For CPM, we compare rollouts with forced actions and generated z to rollouts with forced actions and forced z from the ground truth. We can observe that rollouts with the generated z (left) respect the randomness in the food placement (with and without food), while the rollouts with forced z (right) consistently generate food blocks, if following actions consistent with the backdoor z from the well-trained ground truth policy. We have characterized and explained some of the issues of partial models in terms of causal reasoning. We proposed a simple, yet effective, modification to partial models so that they can still make correct predictions under changes in the behavior policy, which we validated theoretically and experimentally. The proposed modifications address the correctness of the model against policy changes, but don't address the correctness/robustness against other types of intervention in the environment. We will explore these aspects in future work. 
Starting from a data-generation process of the form illustrated in Figure 2 (b), p(x, y, u) = p(u)p(x|u)p(y|x, u), we can use the do-operator to compute p(y|do(x)) = p(u)p(y|x, u)du. Without assuming any extra structure in p(x|u) or in p(y|x, u) it is not possible to compute p(y|do(x)) from the knowledge of the joint p(x, y) alone. If there was a variable z blocking all the effects of u on x, as illustrated in Figure 2(d), then p(y|do(x)) can be derived as follows: Conditioning the new joint where we used the formula If instead of just fixing the value of x, we perform a more general intervention p(x|z) → ψ(x|z), then p do(ψ(x|z)) (y) can be derived as follows: New marginal Applying the same reasoning to the graph shown in Figure 2 (e), we obtain the formula where p(z|x), p(x) and p(y|x, z) can be directly measured from the available (x, y, z) data. This formula holds as long as p(z|x) > 0, ∀x, z and it is a simple instance of frontdoor adjustment . Here, we will show in more detail that the models (c) and (e) in Figure 3 are causally correct, whereas model (d) is causally incorrect. Specifically, we will show that given an initial state s 0 and after setting the actions a 0 to a T to specific values, models (c) and (e) make the same prediction about the future observation y T +1 as performing the intervention in the real world, whereas model (d) does not. Using the do-operator, a hard intervention in the model is given by: where h t is a deterministic function of s 0, a 0:t−1 and y 1:t−1. The same hard intervention in the real world is given by: p(y t |s 0, a 0:t−1, y 1:t−1) dy 1:T. If the model is trained perfectly, the factors q θ (y t |h t) will become equal to the conditionals p(y t |s 0, a 0:t−1, y 1:t−1). Hence, an intervention in a perfectly trained model makes the same prediction as in the real world, which means that the model is causally correct. The interventional conditional in the model is simply: where h T +1 is a deterministic function of s 0 and a 0:T. In a perfectly trained model, we have that q θ (y T +1 |h T +1) = p(y T +1 |s 0, a 0:T). However, the observational conditional p(y T +1 |s 0, a 0:T) is not generally equal to the inverventional conditional p(y T +1 |s 0, do(a 0:T)), which means that the model is causally incorrect. Model (e) Finally, the interventional conditional in this model is: where h t is a deterministic function of s 0, a 0:t−1 and z 1:t−1. The same intervention in the real world can be written as follows: In a perfectly trained model, we have that q θ (y T +1 |h T +1) = p(y T +1 |s 0, a 0:T, z 1:T) and q θ (z t |h t) = p(z t |s 0, a 0:t−1, z 1:t−1). That means that the intervention in a perfectly trained model makes the same prediction as the same intervention in the real world, hence the model is causally correct. The first alternative backdoor we consider is the empty backdoor: Empty backdoor z t = ∅: This backdoor is in general not appropriate; it is however appropriate when the behavior policy does in fact depend on no information, i.e. is not a function of the state s t. For example, the policy can be uniformly random (or any non-state dependent distribution over actions). This severely limits the behavior policy. Because the backdoor contains no information about the observations, the simulations are open-loop, i.e. we can only consider plans which consist of a sequence of fixed actions, not policies. In principle, the z t can be any layer from the policy. 
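As a quick numerical sanity check of the adjustment formulas from the first part of this appendix, the following toy example compares p(y|do(x)) obtained by the generic backdoor adjustment p(y|do(x)) = Σ_z p(z) p(y|x, z) with the ground-truth interventional distribution on a small discrete graph. All probability tables are arbitrary, and the graph is a simplified stand-in in which z directly confounds x and y, rather than the exact graphs of Figure 2.

```python
import numpy as np

# Toy discrete check of the backdoor adjustment on the graph z -> x, (z, x) -> y.
rng = np.random.default_rng(0)
n_z, n_x, n_y = 3, 2, 2
p_z = rng.dirichlet(np.ones(n_z))                            # p(z)
p_x_given_z = rng.dirichlet(np.ones(n_x), size=n_z)          # p(x|z), shape (n_z, n_x)
p_y_given_xz = rng.dirichlet(np.ones(n_y), size=(n_z, n_x))  # p(y|x,z), shape (n_z, n_x, n_y)

# Observational joint p(z, x, y) implied by the generative mechanism.
p_zxy = p_z[:, None, None] * p_x_given_z[:, :, None] * p_y_given_xz

# Backdoor adjustment estimated from the joint alone: p(y|do(x)) = sum_z p(z) p(y|x,z).
p_z_hat = p_zxy.sum(axis=(1, 2))
p_y_given_xz_hat = p_zxy / p_zxy.sum(axis=2, keepdims=True)
p_y_do_x = np.einsum('z,zxy->xy', p_z_hat, p_y_given_xz_hat)

# Ground truth under do(x): cut the z -> x edge, keep z ~ p(z) and y ~ p(y|x,z).
p_y_do_x_true = np.einsum('z,zxy->xy', p_z, p_y_given_xz)
assert np.allclose(p_y_do_x, p_y_do_x_true)

# The naive observational conditional p(y|x) generally differs, because of confounding by z.
p_xy = p_zxy.sum(axis=0)
p_y_given_x = p_xy / p_xy.sum(axis=1, keepdims=True)
print(np.abs(p_y_given_x - p_y_do_x).max())
```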
To model the layer with a p(z t |h t) distribution, we would need to know the needed numerical precision of the considered layer. For example, a quantized layer can be modeled by a discrete distribution. Alternatively, if the layer is produced by a variational encoder or variational information bottleneck, we can train p(z t |h t) to minimize the KL(p encoder (z t |s t) p(z t |h t)). Finally, if a backdoor is appropriate, we can combine it with additional information: Combinations: It is possible to combine a layer with information from other layers. For example, the intended action can be combined with extra bits from the input layer. Such z t can be more informative. For example, the extra bits can hold a downsampled and quantized version of the input layer. Algorithms 1 and 2 describe how the model is trained and used to simulate trajectories. The algorithm for training assumes a distributed actor-learner setup . Data collection on an actor: For each step: z t ∼ m(z t |s t)... sample the backdoor (e.g., the partial view with the intended action) a t ∼ π(a t |z t)... sample the executed action (e.g., add ε-exploration) Table 2 provides an overview of properties of autoregressive models, deterministic non-causal models and the causal partial models. The causal partial models have to generate only a partial view. The partial view can be small and easy to model. For example, a partial view with the discrete intended action can be flexibly modeled by a categorical distribution. The causal partial models are fast and causally correct in stochastic environments. The causal partial models have a low simulation variance, because they do not need to model and generate unimportant distractors. If the environment has deterministic regions, the model can quickly learn to ignore the small partial view and collect information only from the executed action. It is interesting that the causal partial models are invariant of the π(a t |z t) distribution. For example, if the partial view z t is the intended action, the optimally learned model would be invariant of the used ε-exploration: π(a t |z t). Analogously, the autoregressive models are invariant of the whole policy π(a t |s t). This allows the autoregressive models to evaluate any other policy inside of the model. The causal partial model can run inside the simulation only policies conditioned on the starting state s 0, the actions a <t and the partial views z ≤t. If we want to evaluate a policy conditioned on different features, we can collect trajectories from the policy and retrain the model. The model can always evaluate the policy used to produce the training data. We can also improve the policy, because the model allows to estimate the return for an initial (s 0, a 0) pair, so the model can be used as a critic for a policy improvement. In this section we derive an algorithm for learning an optimal policy given a (non-optimal) experience that utilizes n-step returns from partial models presented in this paper. In general, a model of the environment can be used in a number of ways for reinforcement learning. In Dyna , we sample experiences from the model, and apply a model-free algorithm (Q-learning in the original implementation, but more generally we could consider SARSA or policy gradient) as if these were real experiences. In Dyna-2 , the same process is applied but in the context the agent is currently in-starting the simulations from the current state-and adapting the policy locally (for example through separate fast weights). 
In MCTS, the model is used to build a tree of possibilities. Can we apply our model directly in these scenarios? While we don't have a full model of the environment, we can produce a causally correct simulation of rewards and values; one that should generalize to policies different from those the agent was trained on. Policy probabilities, values, rewards and ends of episodes are the only variables that the above RL algorithms need. Here we propose a specific implementation of Dyna-style policy-gradient algorithm based on the models discussed in the paper. This is meant as a proof of principle, and more exploration is left for future work. As the agent sees an observation y t+1, it forms an internal agent state s t from this observation and the previous agent state: s t+1 = RNN s (s t, a t, y t+1). The agent state in our implementation is the state of the recurrent network, typically LSTM . Next, let us assume that at some point in time with state s 0 the agent would like to learn to do a simulation from the model. Let h t be the state of the simulation at time t. The agent first sets h 1 = g(s 0, a 0) and proceeds with n-steps of the simulation recurrent network update h t+1 = RNN(h t, z t, a t). The agent learns the model p(z t |h t) which it can use to simulate forward. We assume that the model was trained on some (non-optimal) policy/experience. We would like to derive an optimal policy and value function. Since these need to be used during acting (if the agent were to then act optimally in the real environment), they are functions of the agent state s t: π(a t |s t), V (s t). Now in general, h t = s t but we would like to use the simulation to train an optimal policy and value function. Thus we define a second pair of functions π h (a t |h t, z t), V h (h t, z t). Here the extra z t's are needed, since the h t has seen z's only up to point z t−1. Next we are going to train these functions using policy gradients on simulated experiences. We start with some state s t and produce a simulation h t+1,..., h T by sampling z t from the model at each step and action a t ∼ π h (a t |h t, z t). However at the initial point t, we sample from π, not π h, and compute the value V, not V h. Sequence of actions, values and policy parameters are the quantities needed to compute a policy gradient update. We use this update to train all these quantities. There is one last element that the algorithm needs. The values and policy parameters are trained at the start state and along the simulation by n-step returns, computed from simulated rewards and the bootstrap value at the end of the simulation. However this last value is not trained in any way because it depends on the simulated state V h (h T) not the agent state s T. We would like this value to equal to what the agent state would produce: V (s T). Thus, during training of the model, we also train V h (h T) to be close to V (s T) by imposing an L 2 penalty. In our implementation, we actually impose a penalty at every point t during simulation but we haven't experimented with which choice is better. Variance reduction. To reduce the variance of a simulation, it is possible to sample the z t from a proposal distribution q(z t |h t). The correct expectation can be still recovered by using an importance weight: w = p(zt|ht) q(zt|ht). Data efficient training. Usually, we know the distribution of the used partial view: z t ∼ m(z t |s t). When training the p(z t |h t) model, we can then minimize the exact KL(m(Z t |s t) p(Z t |h t)). 
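Putting the pieces of this section together, a compact sketch of one simulated rollout and the corresponding policy-gradient update might look as follows. All modules (the model, π, V, π_h, V_h) are placeholders for learned networks, and policy_gradient_update stands in for any standard n-step policy-gradient learner; the sketch only illustrates where each quantity is sampled from. The additional L2 penalty that regresses V_h at the end of the simulation towards V(s_T) is applied during model training and is not shown here.

```python
# Sketch of one Dyna-style simulated rollout used for the policy-gradient update
# described above. All callables are hypothetical placeholders for learned networks.

def simulate_and_update(s0, model, pi, V, pi_h, V_h, n_steps, policy_gradient_update):
    logps, values, rewards = [], [], []

    a, logp = pi.sample(s0)                 # initial action and log-prob from the agent policy
    logps.append(logp)
    values.append(V(s0))                    # initial value uses the agent state, not h

    h = model.initial_state(s0, a)          # h_1 = g(s_0, a_0)
    for _ in range(n_steps):
        z = model.sample_backdoor(h)        # z_t ~ p(z_t | h_t)
        rewards.append(model.reward(h, z))  # simulated reward for this step
        a, logp = pi_h.sample(h, z)         # a_t ~ pi_h(a_t | h_t, z_t)
        logps.append(logp)
        values.append(V_h(h, z))
        h = model.step(h, z, a)             # h_{t+1} = RNN(h_t, z_t, a_t)

    bootstrap = V_h(h, model.sample_backdoor(h))   # bootstrap value at the end of the rollout
    # Standard n-step policy-gradient / value update on the simulated quantities.
    policy_gradient_update(logps, values, rewards, bootstrap)
```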
When using a tree-search, we want to have a small branching factor at the chance nodes. A good z t variable would be discrete with a small number of categories. This is satisfied, if the z t is the intended action and the number of the possible actions is small. We do not have such compact discrete z t, if using as z t the observation, the policy probabilities or some other modeled layer. Here, we will present a model that approximates such causal partial models. The idea is to cluster the modeled layers and use just the cluster index as z t. The cluster index is discrete and we can control the branching factor by choosing the the number of clusters. Concretely, let's call the modeled layer x t. We will model the layer with a mixture of components. The mixture gives us a discrete latent variable z t to represent the component index. To train the mixture, we use a clustering loss to train only the best component to model the x t, given h t and z t: where p(z t |h t) is a model of the categorical component index and β clustering ∈ is a hyperparameter to encourage moving the information bits to the latent z t. During training, we use the index of the best component as the inferred z t. In theory, a better inference can be obtained by smoothing. In contrast to training by maximum likelihood, the clustering loss uses just the needed number of the mixture components. This helps to reduce the branching factor in a search. In general, the cluster index is not guaranteed to be sufficient as a backdoor, if the reconstruction loss − log p(x t |h t, z t) is not zero. For example, if x t is the next observation, the number of mixture components may need to be unrealistically large, if the observation can contains many distractors. We derive the following two model-based evaluation metrics for the MDP environments. • V * NCPM(π) (s 0): optimal value computed with the non-causal model, which is trained with training data from policy π, starting from state s 0. • V * CPM(π) (s 0): optimal value computed with the causal model, which is trained with training data from policy π, starting from state s 0. The theoretical analysis of the MDP does not use empirically trained models from the policy data but rather assumes that the transition probabilities p(s i+1 | s i, a i) of the MDP, and the policy, π(a i | s i) or π(z i | s i), from which training data are collected are accurately learned by the model. Notice that the probability of s i is affected by a i here, because the network gets a i as an input, when predicting the r i+1. This will introduce the non-causal bias. The network implements the expectation implicitly by learning the mean of the reward seen in the training data. We can compute the expectation exactly, if we know the MDP. The p(s i | s 0, a 0, . . ., a i) can be computed recursively in two-steps as: Here, we see the dependency of the learned model on the policy π. The remaining terms can be expressed as: Denoting p(s i | s 0, a 0, . . ., a j) by S i,j, we have the two-step recursion with where Denoting p(s i | s 0, a 0, z 1 . . ., z i−1, a i−1) by Z i, we have where we used the fact that s i−1 is independent of a i−1, given z i−1. Furthermore, Therefore we can compute Z i recursively, with In Figure 7 (a), the non-causal agent always chooses hug at step t = 1, since it has learned from the optimal policy that a reward of +1 always follows after taking a 1 = hug. Thus from the noncausal agent's point of view, the expected reward is always 1 after hugging. 
This is wrong since only hugging a teddy bear gives reward 1. Moreover it exceeds the maximum expected reward 0.5 of the FuzzyBear MDP. In Figure 7 (b), the causal agent first samples the intention z 1 from the optimal policy, giving equal probability of landing in either of the two chance nodes. Then it chooses hug if z 1 = 0, indicating a teddy bear since the optimal policy intends to hug only if it observes a teddy bear. Likewise, it chooses run if z 1 = 1, indicating a grizzly bear. While the non-causal model expects unrealistically high reward, the causal model never over-estimates the expected reward. We analyze learning from optimal policy with varying amounts of ε-exploration for models on FuzzyBear (Figure 8(a) ) and AvoidFuzzyBear (Figure 8(b) ). As the parameter ε-exploration varies in range, the causal model has a constant evaluation since the intended action is not affected by the randomness in exploration. The non-causal model, on the other hand, evaluates based on the deterministic optimal policy data (i.e. at ε = 0) at an unrealistically high value of 1.0 when the maximum expected reward is 0.5. As ε → 1, the training data becomes more random and its optimal evaluation expectantly goes down to match the causal evaluation based on a uniformly random policy. The causal evaluation based on the optimal policy V gives an unrealistically high value 1.0 learned from the deterministic optimal policy (ε = 0). Expectantly, it decreases to the level of CPM optimal value V * CPM(π rand) learned from the uniformly random policy as ε → 1. The CPM optimal values V * CPM(π *) are constant for any value of ε based on the theoretical analysis in Section I.1. (b) shows the same plots as (a) for the AvoidFuzzyBear environment. Learning from any policy π, the CPM optimal value always equals the maximum expected reward 0.6, by correctly choosing to stay home. When the backdoor variable z t was chosen to be the action probabilities, the distribution p(z t |h t) was chosen as a mixture-network with N c Dirichlet components. The concentration parameters α k (h t) of each component were parametrized as α k (h t) = α softmax(f k (h t)), where f k is the output of a relu-MLP with layer sizes [256, 64, N c × N a], α is a total concentration parameter and N a is the number of actions. The hyper-parameter value ranges used in our 3D experiments are similar to and are shown in Table 3. To speed up training, we interleaved training on the T-maze level with a simple "Food" level, in which the agent simply had to walk around and eat food blocks (described by). For each episode, 5 rollouts are generated after having observed the first 3 frames from the environment. For the 5 rollouts, we processed the first 25 frames to classify the presence of food blocks by performing color matching of RGB values, using K-means and assuming 7 clusters. Rollouts were generated shortly after the policy had achieved ceiling performance (15-20 million frames seen), but before the entropy of the policy reduces to the point that there is no longer sufficient exploration. See Figure 9 for these same for later training. action + forced z action + gen. z Figure 9: While earlier in training, CPM generates a diverse range of outcomes (food or no food), as the policy becomes more deterministic (as seen in the right plot of the policy entropy over training), CPM starts to generate more food and becomes overoptimistic, similar to NCPM. This can be avoided by training the model with non-zero ε-exploration.
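For completeness, a small sketch of the mixture-of-Dirichlet parametrization of p(z_t | h_t) described in this appendix is given below. The MLP and the mixture-weight head are placeholders, the total concentration α and the way mixture weights are produced are illustrative assumptions, and only the shape bookkeeping (N_c components over N_a action probabilities, with α_k(h_t) = α softmax(f_k(h_t))) follows the text.

```python
import numpy as np

# Sketch of the mixture-of-Dirichlet head for p(z_t | h_t), where z_t is the
# vector of policy probabilities. `mlp` and `mixture_logits_layer` are placeholders.

def dirichlet_mixture_params(h, mlp, mixture_logits_layer, num_components, num_actions,
                             total_concentration=10.0):
    """Map a model state h to mixture weights and per-component Dirichlet concentrations."""
    f = mlp(h)                                           # relu-MLP, e.g. sizes [256, 64, N_c * N_a]
    f = f.reshape(num_components, num_actions)
    # alpha_k(h) = alpha * softmax(f_k(h)); each component's concentrations sum to alpha.
    exp_f = np.exp(f - f.max(axis=1, keepdims=True))
    alphas = total_concentration * exp_f / exp_f.sum(axis=1, keepdims=True)
    # Mixture weights over components (how these are produced is an assumption here).
    mix_logits = mixture_logits_layer(h)
    mix = np.exp(mix_logits - mix_logits.max())
    mix = mix / mix.sum()
    return mix, alphas        # p(z|h) = sum_k mix[k] * Dirichlet(z; alphas[k])
```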
Causally correct partial models do not have to generate the whole observation to remain causally correct in stochastic environments.
1,788
scitldr
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost. Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM , dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC and other regularization-based methods. Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency Intelligent systems, whether they are natural or artificial, must be able to quickly adapt to changes in the environment and to quickly learn new skills by leveraging past experiences. While current learning algorithms can achieve excellent performance on a variety of tasks, they strongly rely on copious amounts of supervision in the form of labeled data. The lifelong learning (LLL) setting attempts at addressing this shortcoming, bringing machine learning closer to a more realistic human learning by acquiring new skills quickly with a small amount of training data, given the experience accumulated in the past. In this setting, the learner is presented with a stream of tasks whose relatedness is not known a priori. The learner has then the potential to learn more quickly a new task, if it can remember how to combine and re-use knowledge acquired while learning related tasks of the past. Of course, for this learning setting to be useful, the model needs to be constrained in terms of amount of compute and memory required. Usually this means that the learner should not be allowed to merely store all examples seen in the past (in which case this reduces the lifelong learning problem to a multitask problem) nor should the learner be engaged in computations that would not be feasible in real-time, as the goal is to quickly learn from a stream of data. Unfortunately, the established training and evaluation protocol as well as current algorithms for lifelong learning do not satisfy all the above desiderata, namely learning from a stream of data using limited number of samples, limited memory and limited compute. In the most popular training paradigm, the learner does several passes over the data BID1 BID22, while ideally the model should need only a handful of samples and these should be provided one-by-one in a single pass BID15. Moreover, when the learner has several hyper-parameters to tune, the current practice is to go over the sequence of tasks several times, each time with a different hyper-parameter value, again ignoring the requirement of learning from a stream of data and, strictly speaking, violating the assumption of the LLL scenario. 
While some algorithms may work well in a single-pass setting, they unfortunately require a lot of computation BID15 or their memory scales with the number of tasks, which greatly impedes their actual deployment in practical applications. In this work, we propose an evaluation methodology and an algorithm that better match our desiderata, namely learning efficiently -in terms of training samples, time and memory -from a stream of tasks. First, we propose a new learning paradigm, whereby the learner performs cross validation on a set of tasks which is disjoint from the set of tasks actually used for evaluation (Sec. 2). In this setting, the learner will have to learn and will be tested on an entirely new sequence of tasks and it will perform just a single pass over this data stream. Second, we build upon GEM BID15, an algorithm which leverages a small episodic memory to perform well in a single pass setting, and propose a small change to the loss function which makes GEM orders of magnitude faster at training time while maintaining similar performance; we dub this variant of GEM, A-GEM (Sec. 4). Third, we explore the use of compositional task descriptors in order to improve the fewshot learning performance within LLL showing that with this additional information the learner can pick up new skills more quickly (Sec. 5). Fourth, we introduce a new metric to measure the speed of learning, which is useful to quantify the ability of a learning algorithm to learn a new task (Sec. 3). And finally, using our new learning paradigm and metric, we demonstrate A-GEM on a variety of benchmarks and against several representative baselines (Sec. 6). Our experiments show that A-GEM has a better trade-off between average accuracy and computational/memory cost. Moreover, all algorithms improve their ability to quickly learn a new task when provided with compositional task descriptors, and they do so better and better as they progress through the learning experience. Currently, most works on lifelong learning BID23 BID16 adopt a learning protocol which is directly borrowed from supervised learning. There are T tasks, and each task consists of a training, validation and test sets. During training the learner does as many passes over the data of each task as desired. Moreover, hyperparameters are tuned on the validation sets by sweeping over the whole sequence of tasks as many times as required by the cross-validation grid search. Finally, metrics of interest are reported on the test set of each task using the model selected by the previous cross-validation procedure. Since the current protocol violates our stricter definition of LLL for which the learner can only make a single pass over the data, as we want to emphasize the importance of learning quickly from data, we now introduce a new learning protocol. We consider two streams of tasks, described by the following ordered sequences of datasets DISPLAYFORM0 } is the dataset of the k-th task, T CV < T (in all our experiments T CV = 3 while T = 20), and we assume that all datasets are drawn from the same distribution over tasks. To avoid cluttering of the notation, we let the context specify whether D k refers to the training or test set of the k-th dataset. CV is the stream of datasets which will be used during cross-validation; D CV allows the learner to replay all samples multiple times for the purposes of model hyper-parameter selection. 
Instead, D EV is the actual dataset used for final training and evaluation on the test set; the learner will observe training examples from D EV once and only once, and all metrics will be reported on the test sets of D EV. Since the regularization-based approaches for lifelong learning BID29 are rather sensitive to the choice of the regularization hyper-parameter, we introduced the set D CV, as it seems reasonable in practical applications to have similar tasks that can be used for tuning the system. However, the actual training and testing are then performed on D EV using a single pass over the data. See Algorithm 1 for a summary of the training and evaluation protocol. Each example in any of these dataset consists of a triplet defined by an input (x k ∈ X), task descriptor (t k ∈ T, see Sec. 5 for examples) and a target vector (y k ∈ y k), where y k is the set of labels specific to task k and y k ⊂ Y. While observing the data, the goal is to learn a predictor f θ: X × T → Y, parameterized by θ ∈ R P (a neural network in our case), that can map any test pair (x, t) to a target y. Below we describe the metrics used to evaluate the LLL methods studied in this work. In addition to Average Accuracy (A) and Forgetting Measure (F) BID4, we define a new measure, the Learning Curve Area (LCA), that captures how quickly a model learns. The training dataset of each task, D k, consists of a total B k mini-batches. After each presentation of a mini-batch of task k, we evaluate the performance of the learner on all the tasks using the corresponding test sets. Let a k,i,j ∈ be the accuracy evaluated on the test set of task j, after the model has been trained with the i-th mini-batch of task k. Assuming the first learning task in the continuum is indexed by 1 (it will be T CV + 1 for D EV) and the last one by T (it will be T CV for D CV), we define the following metrics:Average Accuracy (A ∈) Average accuracy after the model has been trained continually with all the mini-batches up till task k is defined as: DISPLAYFORM0 In particular, A T is the average accuracy on all the tasks after the last task has been learned; this is the most commonly used metric used in LLL.Forgetting Measure (F ∈ [−1, 1]) BID4 Average forgetting after the model has been trained continually with all the mini-batches up till task k is defined as: DISPLAYFORM1 where f k j is the forgetting on task'j' after the model is trained with all the mini-batches up till task k and computed as: DISPLAYFORM2 Measuring forgetting after all tasks have been learned is important for a two-fold reason. It quantifies the accuracy drop on past tasks, and it gives an indirect notion of how quickly a model may learn a new task, since a forgetful model will have little knowledge left to transfer, particularly so if the new task relates more closely to one of the very first tasks encountered during the learning experience. Learning Curve Area (LCA ∈) Let us first define an average b-shot performance (where b is the mini-batch number) after the model has been trained for all the T tasks as: DISPLAYFORM3 LCA at β is the area of the convergence curve Z b as a function of b ∈ [0, β]: DISPLAYFORM4 LCA has an intuitive interpretation. LCA 0 is the average 0-shot performance, the same as forward transfer in BID15. LCA β is the area under the Z b curve, which is high if the 0-shot performance is good and if the learner learns quickly. 
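To make the three metrics concrete, a sketch that computes them from the accuracy tensor a_{k,i,j} is given below. The indexing conventions (0-indexed tasks and mini-batches, with b = 0 taken to mean that no mini-batch of the current task has been seen yet) are illustrative assumptions, and the formulas follow our reading of the definitions above.

```python
import numpy as np

# acc[k, i, j] = test accuracy on task j after training on the i-th mini-batch of task k.

def average_accuracy(acc, k):
    """A_k: mean accuracy over tasks 0..k after all mini-batches of task k have been seen."""
    return acc[k, -1, :k + 1].mean()

def forgetting(acc, k):
    """F_k: mean over past tasks of (best accuracy ever observed) - (accuracy after task k).

    Assumes k >= 1, since forgetting is only defined with respect to earlier tasks.
    """
    drops = [acc[:k, -1, j].max() - acc[k, -1, j] for j in range(k)]
    return float(np.mean(drops))

def learning_curve_area(acc, beta):
    """LCA_beta: area under the average b-shot accuracy curve Z_b for b = 0..beta."""
    num_tasks = acc.shape[0]
    z = [np.mean([acc[k, b, k] for k in range(num_tasks)]) for b in range(beta + 1)]
    return float(np.mean(z))
```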
In particular, there could be two models with the same Z β or A T, but very different LCA β because one learns much faster than the other while they both eventually obtain the same final accuracy. This metric aims at discriminating between these two cases, and it makes sense for relatively small values of β since we are interested in models that learn from few examples. So far we discussed a better training and evaluation protocol for LLL and a new metric to measure the speed of learning. Next, we review GEM BID15, which is an algorithm that has been shown to work well in the single epoch setting. Unfortunately, GEM is very intensive in terms of computational and memory cost, which motivates our efficient variant, dubbed A-GEM. In Sec. 5, we will describe how compositional task descriptors can be leveraged to further speed up learning in the few shot regime. GEM avoids catastrophic forgetting by storing an episodic memory M k for each task k. While minimizing the loss on the current task t, GEM treats the losses on the episodic memories of tasks DISPLAYFORM0, as inequality constraints, avoiding their increase but allowing their decrease. This effectively permits GEM to do positive backward transfer which other LLL methods do not support. Formally, at task t, GEM solves for the following objective: DISPLAYFORM1 Where f t−1 θ is the network trained till task t − 1. To inspect the increase in loss, GEM computes the angle between the loss gradient vectors of previous tasks g k, and the proposed gradient update on the current task g. Whenever the angle is greater than 90°with any of the g k's, it projects the proposed gradient to the closest in L2 norm gradientg that keeps the angle within the bounds. Formally, the optimization problem GEM solves is given by: DISPLAYFORM2 Eq.7 is a quadratic program (QP) in P -variables (the number of parameters in the network), which for neural networks could be in millions. In order to solve this efficiently, GEM works in the dual space which in a much smaller QP with only t − 1 variables: DISPLAYFORM3 where DISPLAYFORM4 ×P is computed at each gradient step of training. Once the solution v * to Eq. 8 is found, the projected gradient update can be computed asg = G v * + g. While GEM has proven very effective in a single epoch setting BID15, the performance gains come at a big computational burden at training time. At each training step, GEM computes the matrix G using all samples from the episodic memory, and it also needs to solve the QP of Eq. 8. Unfortunately, this inner loop optimization becomes prohibitive when the size of M and the number of tasks is large, see Tab. 7 in Appendix for an empirical analysis. To alleviate the computational burden of GEM, next we propose a much more efficient version of GEM, called Averaged GEM (A-GEM).Whereas GEM ensures that at every training step the loss of each individual previous tasks, approximated by the samples in episodic memory, does not increase, A-GEM tries to ensure that at every training step the average episodic memory loss over the previous tasks does not increase. Formally, while learning task t, the objective of A-GEM is: DISPLAYFORM5 The corresponding optimization problem reduces to: DISPLAYFORM6 where g ref is a gradient computed using a batch randomly sampled from the episodic memory, DISPLAYFORM7, of all the past tasks. 
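The gradient correction that solves this constrained problem is simple enough to state as a short sketch, assuming the gradients have been flattened into single vectors; the small eps term is a numerical safeguard added here for illustration.

```python
import numpy as np

# A-GEM gradient correction: if the proposed gradient g would increase the average
# episodic-memory loss (g . g_ref < 0), project it so the constraint is satisfied.

def a_gem_project(g, g_ref, eps=1e-12):
    dot = np.dot(g, g_ref)
    if dot >= 0.0:                    # constraint already satisfied: keep the proposed gradient
        return g
    return g - (dot / (np.dot(g_ref, g_ref) + eps)) * g_ref
```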
In other words, A-GEM replaces the t − 1 constraints of GEM with a single constraint, where g ref is the average of the gradients from the previous tasks computed from a random subset of the episodic memory. The constrained optimization problem of Eq. 10 can now be solved very quickly; when the gradient g violates the constraint, it is projected via: DISPLAYFORM8 The formal proof of the update rule of A-GEM (Eq. 11) is given in Appendix C. This makes A-GEM not only memory efficient, as it does not need to store the matrix G, but also orders of magnitude faster than GEM because 1) it is not required to compute the matrix G but just the gradient of a random subset of memory examples, 2) it does not need to solve any QP but just an inner product, and 3) it will incur in less violations particularly when the number of tasks is large (see Tab. 7 and Fig. 6 in Appendix for empirical evidence). All together these factors make A-GEM faster while not hampering its good performance in the single pass setting. Intuitively, the difference between GEM and A-GEM loss functions is that GEM has better guarantess in terms of worst-case forgetting of each individual task since (at least on the memory examples) it prohibits an increase of any task-specific loss, while A-GEM has better guaratees in terms of average accuracy since GEM may prevent a gradient step because of a task constraint violation although the overall average loss may actually decrease, see Appendix Sec. D.1 and D.2 for further analysis and empirical evidence. The pseudo-code of A-GEM is given in Appendix Alg. 2. In this section, we discuss how we can improve forward transfer for all the LLL methods including A-GEM. In order to speed up learning of a new task, we consider the use of compositional task descriptors where components are shared across tasks and thus allow transfer. Examples of compositional task descriptors are, for instance, a natural language description of the task under consideration or a matrix specifying the attribute values of the objects to be recognized in the task. In our experiments, we use the latter since it is provided with popular benchmark datasets BID26 BID12 ). For instance, if the model has already learned and remembers about two independent properties (e.g., color of feathers and shape of beak), it can quickly recognize a new class provided a descriptor specifying the values of its attributes (yellow feathers and red beak), although this is an entirely unseen combination. Borrowing ideas from literature in few-shot learning BID13 BID30 BID5 BID27, we learn a joint embedding space between image features and the attribute embeddings. Formally, let x k ∈ X be the input (e.g., an image), t k be the task descriptor in the form of a matrix of size C k × A, where C k is the number of classes in the kth task and A is the total number of attributes for each class in the dataset. The joint embedding model consists of a feature extraction module, φ θ: DISPLAYFORM0, and a task embedding module, ψ ω: DISPLAYFORM1 In this work, φ θ is implemented as a standard multi-layer feed-forward network (see Sec. 6 for the exact parameterization), whereas ψ ω is implemented as a parameter matrix of dimensions A × D. 
This matrix can be interpreted as an attribute look-up table as each attribute is associated with a D dimensional vector, from which a class embedding vector is constructed via a linear combination of the attributes present in the class; the task descriptor embedding is then the concatenation of the embedding vectors of the classes present in the task (see FIG9 for the pictorial description of the joint embedding model). During training, the parameters θ and ω are learned by minimizing the cross-entropy loss: DISPLAYFORM2 where (DISPLAYFORM3 where [a] i denotes the i-th element of the vector a. Note that the architecture and loss functions are general, and apply not only to A-GEM but also to any other LLL model (e.g., regularization based approaches). See Sec. 6 for the actual choice of parameterization of these functions. We consider four dataset streams, see Tab.1 in Appendix Sec. A for a summary of the statistics. Permuted MNIST ) is a variant of MNIST dataset of handwritten digits where each task has a certain random permutation of the input pixels which is applied to all the images of that task. Split CIFAR BID29 consists of splitting the original CIFAR-100 dataset BID11 ) into 20 disjoint subsets, where each subset is constructed by randomly sampling 5 classes without replacement from a total of 100 classes. Similarly to Split CIFAR, Split CUB is an incremental version of the fine-grained image classification dataset CUB BID26 of 200 bird categories split into 20 disjoint subsets of classes. Split AWA, on the other hand, is the incremental version of the AWA dataset of 50 animal categories, where each task is constructed by sampling 5 classes with replacement from the total 50 classes, constructing 20 tasks. In this setting, classes may overlap among multiple tasks, but within each task they compete against different set of classes. Note that to make sure each training example is only seen once, the training data of a each class is split into disjoint sets depending on the frequency of its occurrence in different tasks. For Split AWA, the classifier weights of each class are randomly initialized within each head without any transfer from the previous occurrence of the class in past tasks. Finally, while on Permuted MNIST and Split CIFAR we provide integer task descriptors, on Split CUB and Split AWA we stack together the attributes of the classes (specifying for instance the type of beak, the color of feathers, etc.) belonging to the current task to form a descriptor. In terms of architectures, we use a fully-connected network with two hidden layers of 256 ReLU units each for Permuted MNIST, a reduced ResNet18 for Split CIFAR like in BID15, and a standard ResNet18 BID8 for Split CUB and Split AWA. For a given dataset stream, all models use the same architecture, and all models are optimized via stochastic gradient descent with mini-batch size equal to 10. We refer to the joint-embedding model version of these models by appending the suffix'-JE' to the method name. As described in Sec. 2 and outlined in Alg. 1, in order to cross validate we use the first 3 tasks, and then report metrics on the remaining 17 tasks after doing a single training pass over each task in sequence. Lastly, we compared A-GEM against several baselines and state-of-the-art LLL approaches which we describe next. VAN is a single supervised learning model, trained continually without any regularization, with the parameters of a new task initialized from the parameters of the previous task. 
ICARL BID17 ) is a class-incremental learner that uses nearest-exemplar-based classifier and avoids catastrophic forgetting by regularizing over the feature representation of previous tasks using a knowledge distillation loss. EWC, PI BID29, RWALK BID4 and MAS BID1 A T, F T and LCA 10 values and confidence intervals are computed over 10 runs. Other baselines are given in Tab. 5, 6 and 7 in the Appendix, which are used to generate the plots.whereby a new "column" with lateral connections to previous hidden layers is added once a new task arrives. GEM described in Sec. 4 is another natural baseline of comparison since A-GEM builds upon it. The amount of episodic memory per task used in ICARL, GEM and A-GEM is set to 250, 65, 50, and 100, and the batch size for the computation of g ref (when the episodic memory is sufficiently filled) in A-GEM is set to 256, 1300, 128 and 128 for MNIST, CIFAR, CUB and AWA, respectively. While populating episodic memory, the samples are chosen uniformly at random for each task. Whereas the network weights are randomly initialized for MNIST, CIFAR and AWA, on the other hand, for CUB, due to the small dataset size, a pre-trained ImageNet model is used. Finally, we consider a multi-task baseline, MULTI-TASK, trained on a single pass over shuffled data from all tasks, and thus violating the LLL assumption. It can be seen as an upper bound performance for average accuracy.6.1 Fig. 1 and 2 show the overall on all the datasets we considered (for brevity we show only representative methods, see detailed in Appendix Tab. 4, 5, 6 and 7). First, we observe that A-GEM achieves the best average accuracy on all datasets, except Permuted MNIST, where PROG-NN works better. The reason is because on this dataset each task has a large number of training examples, which enables PROG-NN to learn its task specific parameters and to leverage its lateral connections. However, notice how PROG-NN has the worst memory cost by the end of training -as its number of parameters grows super-linearly with the number of tasks. In particular, in large scale setups (Split CUB and AWA), PROG-NN runs out of memory during training due to its large size. Also, PROG-NN does not learn well on datasets where tasks have fewer training examples. Second, A-GEM and GEM perform comparably in terms of average accuracy, but A-GEM has much lower time (about 100 times faster) and memory cost (about 10 times lower), comparable to regularizationbased approaches like EWC. Third, EWC and similar methods perform only slightly better than VAN on this single pass LLL setting. The analysis in Appendix Sec. F demonstrates that EWC requires several epochs and over-parameterized architectures in order to work well. Fourth, PROG-NN has no forgetting by construction and A-GEM and GEM have the lowest forgetting among methods that use a fixed capacity architecture. Next, all methods perform similarly in terms of LCA, with PROG-NN being the worst because of its ever growing number of parameters and A-GEM slightly better than all the other approaches. And finally, the use of task descriptors improves average accuracy across the board as shown in Fig.2, with A-GEM a bit better than all the other methods we tried. All joint-embedding models using task descriptors have better LCA performance, although this is the same across all methods including A-GEM. Overall, we conclude that A-GEM offers the best trade-off between average accuracy performance and efficiency in terms of sample, memory and computational cost. 
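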
different methods are best in terms of LCA, although A-GEM is always top-performing. Fig. 4 shows in more detail the gain brought by task descriptors which greatly speed up learning in the few-shot regime. On these datasets, A-GEM performs the best or on par to the best. Finally, in Fig. 5, we report the 0-shot performance of LLL methods on Split CUB and Split AWA datasets over time, showing a clear advantage of using compositional task descriptors with joint embedding models, which is more significant for A-GEM. Interestingly, the zero-shot learning performance of joint embedding models improves over time, indicating that these models get better at forward transfer or, in other words, become more efficient over time. Continual or Lifelong Learning (LLL) BID25 have been the subject of extensive study over the past two decades. One approach to LLL uses modular compositional models BID6 BID0 BID19 BID3 BID28 BID7, which limit interference among tasks by using different subset of modules for each task. Unfortunately, these methods require searching over the space of architectures which is not sample efficient with current methods. Another approach is to regularize parameters important to solve past tasks BID29 BID4, which has been proven effective for over-parameterized models in the multiple epoch setting (see Appendix Sec. F), while we focus on learning from few examples using memory efficient models. Methods based on episodic memory BID17 BID15 ) require a little bit more memory at training time but can work much better in the single pass setting we considered BID15 ).The use of task descriptors for LLL has already been advocated by BID9 but using a sparse coding framework which is not obviously applicable to deep nets in a computationally efficient way, and also by although they did not explore the use of compositional descriptors. More generally, tasks descriptors have been used in Reinforcement Learning with similar motivations by several others BID24 BID21 BID2, and it is also a key ingredient in all the zero/few-shot learning algorithms BID13 BID27 BID5 BID26 BID12 ). We studied the problem of efficient Lifelong Learning (LLL) in the case where the learner can only do a single pass over the input data stream. We found that our approach, A-GEM, has the best tradeoff between average accuracy by the end of the learning experience and computational/memory cost. Compared to the original GEM algorithm, A-GEM is about 100 times faster and has 10 times less memory requirements; compared to regularization based approaches, it achieves significantly higher average accuracy. We also demonstrated that by using compositional task descriptors all methods can improve their few-shot performance, with A-GEM often being the best. Our detailed experiments reported in Appendix E also show that there is still a substantial performance gap between LLL methods, including A-GEM, trained in a sequential learning setting and the same network trained in a non-sequential multi-task setting, despite seeing the same data samples. Moreover, while task descriptors do help in the few-shot learning regime, the LCA performance gap between different methods is very small; suggesting a poor ability of current methods to transfer knowledge even when forgetting has been eliminated. Addressing these two fundamental issues will be the focus of our future research. In Sec. A we report the summary of datasets used for the experiments. Sec. B details our A-GEM algorithm and Sec. 
C provides the proof of update rule of A-GEM discussed in Sec. 4 of the main paper. In Sec. D, we analyze the differences between A-GEM and GEM, and describe another variation of GEM, dubbed Stochastic GEM (S-GEM). The detailed of the experiments which were used to generate Fig 1 and 2 in the main paper are given in Sec. E. In Sec. F, we provide empirical evidence to the conjecture that regularization-based approaches like EWC require over-parameterized architectures and multiple passes over data in order to perform well as discussed in the Sec. 6.1 of the main paper. In Sec. G, we provide the grid used for the cross-validation of different hyperparameters and report the optimal values for different models. Finally, in Sec. H, we pictorially describe the joint embedding model discussed in Sec. 5.A DATASET STATISTICS DISPLAYFORM0 end if 14: DISPLAYFORM1 At,: DISPLAYFORM2 end for 19:return f θ, A 20: end procedure DISPLAYFORM3 end for 10: return a 11: end procedure DISPLAYFORM4 end for 7:return M 8: end procedure C A-GEM UPDATE RULE Here we provide the proof of the update rule of A-GEM (Eq. 11),g = g − Proof. The optimization objective of A-GEM as described in the Eq. 10 of the main paper, is: DISPLAYFORM5 Replacingg with z and rewriting Eq. 14 yields: DISPLAYFORM6 Note that we discard the term g g from the objective and change the sign of the inequality constraint. The Lagrangian of the constrained optimization problem defined above can be written as: DISPLAYFORM7 Now, we pose the dual of Eq. 16 as: DISPLAYFORM8 Lets find the value z * that minimizes the L(z, α) by setting the derivatives of L(z, α) w.r.t. to z to zero: DISPLAYFORM9 The simplified dual after putting the value of z * in Eq. 17 can be written as: DISPLAYFORM10 The solution α * = max α;α>0 θ D (α) to the dual is given by: DISPLAYFORM11 By putting α * in Eq. 18, we recover the A-GEM update rule: DISPLAYFORM12 In this section, we empirically analyze the differences between A-GEM and GEM, and report experiments with another computationally efficient but worse performing version of GEM.D.1 FREQUENCY OF CONSTRAINT VIOLATIONS FIG4 shows the frequency of constraint violations (see Eq. 8 and 10) on Permuted MNIST and Split CIFAR datasets. Note that, the number of gradient updates (training steps) per task on MNIST and CIFAR are 5500 and 250, respectively. As the number of tasks increase, GEM violates the optimization constraints at almost each training step, whereas A-GEM plateaus to a much lower value. Therefore, the computational efficiency of A-GEM not only stems from the fact that it avoids solving a QP at each training step (which is much more expensive than a simple inner product) but also from the fewer number of constraint violations. From the figure, we can also infer that as the number of tasks grows the gap between GEM and A-GEM would grow further. Thus, the computational and memory overhead of GEM over A-GEM, see also Tab. 7, gets worse as the number of tasks increases. In Tab. 2, we empirically demonstrate the different properties induced by the objective functions of GEM and A-GEM. GEM enjoys lower worst-case task forgetting while A-GEM enjoys better overall average accuracy. This is particularly true on the training examples stored in memory, as on the test set the is confounded by the generalization error. In this section we report experiments with another variant of GEM, dubbed Stochastic GEM (S-GEM). 
The main idea in S-GEM is to randomly sample one constraint, at each training step, from the possible t − 1 constraints of GEM. If that constraint is violated, the gradient is projected only taking into account that constraint. Formally, the optimization objective of S-GEM is given by: DISPLAYFORM0 In other words, at each training step, S-GEM avoids the increase in loss of one of the previous tasks sampled randomly. In Tab. 3 we report the comparison of GEM, S-GEM and A-GEM on Permuted MNIST and Split CIFAR.Although, S-GEM is closer in spirit to GEM, as it requires randomly sampling one of the GEM constraints to satisfy, compared to A-GEM, which defines the constraint as the average gradient of the previous tasks, it perform slightly worse than GEM, as can be seen from Tab. 3. In Tab. 4, 5, 6 and 7 we report the detailed which were used to generate Fig.1 and 2. In this section we provide empirical evidence to the conjecture that regularization-based approaches like EWC need over-parameterized architectures and multiple passes over the samples of each task in order to perform well. The intuition as to why models need to be over-parameterized is because it is easier to avoid cross-task interference when the model has additional capacity. In the single-pass setting and when each task does not have very many training samples, regularization-based appraches also suffer because regularization parameters cannot be estimated well from a model that has not fully converged. Moreover, for tasks that do not have much data, rgularization-based approaches do not enable any kind of positive backward transfer BID15 which further hurts performance as the predictor cannot leverage knowledge acquired later to improve its prediction on past tasks. Finally, regularization-based approaches perform much better in the multi-epoch setting simply because in this setting the baseline un-regularized model performs much worse, as it overfits much more to the data of the current task, every time unlearning what it learned before. We consider Permuted MNIST and Split CIFAR datasets as described in Sec. 6 of the main paper. For MNIST, the two architecture variants that we experiment with are; 1) two-layer fully-connected network with 256 units in each layer (denoted by −S suffix), and 2) two-layer fully-connected network with 2000 units in each layer (denoted by −B suffix).For CIFAR, the two architecture variants are; 1) ResNet-18 with 3 times less feature maps in all the layers (denoted by −S suffix), and 2) Standard ResNet-18 (denoted by −B token).We run the experiments on VAN and EWC with increasing the number of epochs from 1 to 10 for Permuted MNIST and from 1 to 30 for CIFAR. For instance, when epoch is set to 10, it means that the training samples of task t are presented 10 times before showing examples from task t + 1. In Fig. 7 and 8 we plot the Average Accuracy (Eq. 1) and Forgetting (Eq. 2) on Permuted MNIST and Split CIFAR, respectively. We observe that the average accuracy significantly improves with the number of epochs only when EWC is applied to the big network. In particular, in the single epoch setting, EWC peforms similarly to the baseline VAN on Split CIFAR which has fewer number of training examples per task. Below we report the hyper-parameters grid considered for different experiments. Note, as described in the Sec. 6 of the main paper, to satisfy the requirement that a learner does not see the data of a task more than once, first T CV tasks are used to cross-validate the hyper-parameters. 
In all the datasets, the value of T CV is set to 3. The best setting for each experiment is reported in parentheses. Figure (joint embedding model): Pictorial description of the joint embedding model discussed in Sec. 5 of the main paper. The modules φ θ and ψ ω are implemented as feed-forward neural networks with P and A × D parameters, respectively. The descriptor of task k (t k) is a matrix of dimensions C k × A, shared among all the examples of the task, constructed by concatenating the A-dimensional class attribute vectors of the C k classes in the task.
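A minimal sketch of the joint-embedding scoring pictured in this figure is given below; phi is a placeholder feature extractor producing a D-dimensional feature, psi is the A × D attribute embedding matrix, and the loss follows our reading of the cross-entropy objective over per-class compatibility scores in Sec. 5.

```python
import numpy as np

# Sketch of joint-embedding scoring: class embeddings are linear combinations of attribute
# embeddings, and each class in the task is scored against the image feature.

def class_scores(x, phi, psi, task_descriptor):
    feature = phi(x)                        # D-dimensional image feature
    class_embed = task_descriptor @ psi     # C_k x D: per-class embeddings from attributes
    return class_embed @ feature            # one compatibility score per class in the task

def cross_entropy_loss(scores, label):
    scores = scores - scores.max()          # numerical stabilisation before the softmax
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[label]
```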
An efficient lifelong learning algorithm that provides a better trade-off between accuracy and time/ memory complexity compared to other algorithms.
1,789
scitldr
Reduced precision computation is one of the key areas addressing the widening’compute gap’, driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision,with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16)and a broader set of workloads (Resnet-18/34/50, GNMT, and Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation. We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise. As a of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline. The unprecedented success of Deep Learning models in a variety of tasks including computer vision, machine translation and speech recognition has led to the proliferation of deeper and more complex models. Algorithmic innovations such as large batch training and neural architecture search have enabled models to scale on large compute cluster to accelerate training. This enhanced performance has enabled the adoption of larger neural networks. As a consequence, the computational requirements for training Deep Learning models have been growing at an exponential rate (Amodei & Hernandez) over the past few years, outperforming Moore's Law and hardware capabilities by a wide margin. One of the promising areas of research to address this growing compute gap is to reduce the numeric precision requirements for deep learning. Reduced precision methods exploit the inherent noise resilient properties of deep neural networks to improve compute efficiency, while minimizing the loss of model accuracy. Recent studies have shown that, deep neural networks can be trained using 16-bits of precision without any noticeable impact on validation accuracy across a wide range of networks. Today, state-of-the-art training platforms support 16-bit precision in the form of high-performance systolic array or GEMM engine (General Matrix Multiply) implementations (; Köster et al., 2017a). There have been numerous attempts (; ; ; ;) to train deep neural networks at lower precision (below 16-bits) with varying degrees of success. With the abundance of 8-bit integer deep learning'ops' deployed to accelerate inference tasks, much of the research into training methods have also focused on integer based fixed-point numeric formats (; ;). Training with 8-bit integers has been significantly more challenging because the dynamic range of such formats is not sufficient to represent error gradients during back-propagation. More recently, have shown that 8-bit floating representation can be used to train convolutional neural networks, with the help of specialized chunk-based accumulation and stochastic rounding hardware. While this method has shown promising , it requires expensive stochastic rounding hardware built into the critical compute path making it unattractive for systolic array and GEMM accelerator implementations. 
Our paper extends the state of the art in 8-bit floating point (FP8) training with the following key contributions: • We propose a scalable training solution that eliminates the need for specialized hardware designs , thereby enabling efficient MAC designs with higher compute density. • We demonstrated state-of-the-art training using 8-bit floating point representation (for weight, activation, error and gradient tensors), across multiple data sets (Imagenet-1K, WMT16) and a broader set of workloads (Resnet, GNMT, Transformer) than previously reported . • We propose enhanced loss scaling method to compensate for the reduced subnormal range of 8-bit floating point representation for improved error propagation leading to better model accuracy. • We present a detailed study of the impact of quantization noise on model generalization and propose a stochastic rounding technique to address the gradient noise in the early epochs leading to better generalization. The study of reduced precision methods for deep learning training is an active area of research. In the pursuit of improving compute efficiency, researchers have experimented with various numeric formats and hardware implementations. demonstrated that deep neural networks can be trained with minimal loss in accuracy, using 16-bit fixed point representation. This was followed by studies employing other numeric formats such as, half-precision floating point and dynamic fixed point (Köster et al., 2017b;), demonstrating state of the art across residual, recurrent and generative networks. Today most of the neural network training in a production deployment has migrated to 16-bit hardware, ing in significant improvements in performance . There have been several attempts to further reduce the precision requirements of DNNs to boost training performance. have trained DoReFa-Net, a derivative of AlexNet using bit-convolutions with 1-bit and 2-bits to represent weights and activations respectively, while the gradients are quantized to 6-bits of precision. have trained AlexNet using 8-bit precision for activations, errors and weight gradients, while the weights are quantized to 2-bits of precision. However, both these methods have reported significant loss in validation accuracy. More recently, have successfully trained Resnet-50 ) using 8-bit floating point numeric format with the help of a specialized hardware to compute chunk-based dotproduct computation and stochastic rounding on a 16-bit accumulator. The authors of this study have focused on reducing the accumulator precision and based on studies on smaller networks (AlexNet Resnet-18), attributed training issues related to error propagation and generalization on the choice of accumulator size. However, our studies on larger networks (Resnet-34/50) using 32-bit accumulator indicate that, these issues are not related to the choice of accumulator size and should be addressed independently. We discuss these issues and our proposed solutions in greater detail in Sections3.1and 3.2. Guided by these , we decided to focus on studying the impact of using FP8 numeric format on training, while maintaining a high precision accumulator(FP32). We further believe that modern GEMM engine designs implementing progressive multiplier reduction techniques can effectively amortize the cost of a larger final accumulator, and do not benefit from building 16-bit accumulator with additional hardware overheads in the critical compute path. 
The choice of bit-level representation of floating point (sign, exponent, mantissa) has a significant impact on the effectiveness of the numerical format: the trade-off between dynamic range and precision is especially tricky at low bit-width representations. While it is important to maintain a higher dynamic range for effective propagation of error gradients, it leads to having values that are too few and scattered to maintain the fidelity required for gradient computations. After careful consideration of these facts and several failed experiments with other formats (e.g., with more exponent bits), we decided to use the s=1,e=5,m=2 numeric format for representing 8-bit floating point. We also use a 32-bit floating point accumulator to accommodate the dynamic range of the output resulting from the dot-product operation. Therefore, each GEMM/convolution operation takes two input tensors in 8-bit floating point format and produces a 32-bit single precision floating point output. The 32-bit output must be down-converted to an 8-bit floating point value before passing to the next operation. We believe rounding plays an extremely important role during the down-conversion operation to help recover the numeric accuracy. We present the results from the study of different rounding modes applied to this numeric format and their impact on training performance in Section 3.2. Figure 1a shows the precision settings of various compute operations used in our mixed precision training setup. The 'GEMM' operator shown in Figure 1a represents the key compute kernel used by deep neural networks during forward, backward, and gradient computation passes. Quantization nodes identified with the letter 'Q' perform down-conversion and rounding on the 32-bit output from GEMM operators and convert it to 8-bit floating point format before passing it on to the next layer. In our training experiments, we quantized the weight, activation, error and gradient tensors of all convolution and MatMul kernels to 8-bit floating point format in the forward, backward and weight update paths. More details about the workload-specific quantization flow are discussed in Section 4. Figure 1b shows the data flow during the optimization and weight update steps. In the optimization path the L2-regularization term is added to the cross entropy, and the resulting loss is multiplied by the loss scale parameter before initiating back propagation. The weight gradients are computed during back propagation and converted to 8-bit floating point format. During the weight update, the weight gradients are re-scaled using the loss scale parameter; this step is performed in full precision to prevent any potential underflow. The gradients are then passed to the momentum optimizer, and the final gradients are applied to the master copy of weights. In our experiments, the master copy of weights is stored in half-precision floating point format; these values are up-converted to 32-bit during the update step and the weight update is performed at full precision. After the update, the master weights are converted to half-precision format before they are stored back into memory. Since this is a bandwidth-bound operation, compute precision will not have any noticeable impact on performance. Previous studies on half-precision floating point have shown that the loss scaling technique can be used to push smaller error gradients into the representable range and train neural networks successfully.
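To make the down-conversion at the quantization nodes concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's implementation) of emulating the s=1,e=5,m=2 format on full-precision hardware by snapping FP32 values to the nearest representable FP8 value with round-to-nearest-even; NaN/Inf handling is omitted for brevity.

import numpy as np

def quantize_fp8_rne(x, exp_bits=5, man_bits=2):
    # Emulate a 1-5-2 floating point format by rounding FP32 values to the
    # nearest representable FP8 value; np.round implements round-half-to-even.
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1              # 15 for e=5
    max_exp = 2 ** exp_bits - 2 - bias          # largest normal exponent
    min_exp = 1 - bias                          # smallest normal exponent
    sign, mag = np.sign(x), np.abs(x)
    # clamp the exponent so that subnormals share the smallest normal exponent
    exp = np.clip(np.floor(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp)
    spacing = 2.0 ** (exp - man_bits)           # gap between representable values
    q = np.round(mag / spacing) * spacing
    max_val = (2 - 2.0 ** (-man_bits)) * 2.0 ** max_exp
    return sign * np.minimum(q, max_val)        # saturate instead of overflowing

In the mixed precision flow above, such a down-conversion would be applied to the 32-bit GEMM outputs at the nodes marked 'Q' before the tensors are passed to the next layer.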
The full range of numeric values represented by a floating point format includes the 'subnormal' values, the range of which is determined by the number of mantissa bits. Because of this property of floating point numbers, the proposed 8-bit floating point format will have a significantly smaller subnormal range compared to half-precision floating point, which has the same number of exponent bits. Table 1 shows the dynamic range comparison between full-precision (FP32), half-precision (FP16) and the proposed 8-bit floating point (FP8) formats. Half-precision training for convolution networks has been shown to converge using a constant loss scaling value (e.g., Resnet-50 uses 1000). Networks like GNMT and Transformer experience significant variations in gradient distributions through the training cycle and perform better with dynamic loss scaling methods. We adopt the same loss scaling methods used by FP16 training for FP8. For Resnet-50, we increase the scaling factor to compensate for the smaller subnormal range of FP8. Figure 2a shows results from our convergence studies on Resnet-50 using different loss scaling values. The model failed to converge with a scaling factor of 1000, and progressively performed better with increasing loss scale values, converging at 10 000. The Transformer (big) model uses the standard 'DynamicLossScale' method implemented in TensorFlow and it works out of the box without any additional changes. We use the OpenSeq2Seq implementation for the GNMT model, and it uses the 'back-off' method for automatic loss scaling. Both these loss scaling methods try to maintain a higher loss scale value while checking at regular intervals (=2000 iterations) for numeric overflows that result in a 'NaN' during gradient computation. Our experiments showed that loss scaling experiences frequent gradient overflows for GNMT because of a few outliers, while a significant chunk of the gradients underflow. This irregular gradient distribution is likely a result of not having normalization layers in GNMT. The loss scaling algorithm responds to these overflows by dropping the loss scale value significantly. We addressed this by ignoring a few overflows, which are likely a result of the outliers, and continuing to maintain a higher loss scale value. We accomplish this by setting a 'minimum threshold' for the loss scale value to prevent it from going below a certain threshold value even if an overflow occurs. Figure 2b shows the loss scaling schedule that worked for GNMT. The goal of this exercise was to maintain a higher loss scale value, while ignoring a few spurious overflows. This can also be achieved by enhancing the loss scaling algorithm with a few additional checks to ignore overflows unless they occur in succession a predefined number of times (nan_threshold). Overflows occurring at successive intervals are a more reliable indicator of an actual shift in the gradient distribution. We can also reduce the interval between loss scale updates (from 2000 iterations to 500 iterations) to improve the chances of recovering from any inadvertent drop in the loss scale value. We describe this algorithm in Algorithm 1. Reduced precision methods introduce a significant amount of noise that can adversely affect the convergence and accuracy of deep neural networks. Rounding techniques applied to quantization methods can be effective in regulating some of this noise.
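The enhanced loss scaling just described can be summarized by a small update rule. The sketch below is our own reading of the procedure (the variable names, default values and exact bookkeeping are assumptions on our part; the paper's Algorithm 1 should be treated as authoritative): isolated overflows are tolerated, the scale only backs off after repeated overflows, and it is never allowed to fall below a minimum threshold.

def update_loss_scale(state, has_nan, it, minval=2.0, maxval=2.0 ** 15,
                      interval=500, scale=2.0, nan_threshold=2):
    # state: dict with loss_scale, min_threshold, nan_count, last_iter, last_nan_iter
    if has_nan:
        # count overflows; only repeated overflows indicate a real shift in gradients
        if it - state["last_nan_iter"] < interval:
            state["nan_count"] += 1
        else:
            state["nan_count"] = 1
        state["last_nan_iter"] = it
        if state["nan_count"] >= nan_threshold:
            # back off, but never below the (gradually increasing) minimum threshold
            state["loss_scale"] = max(state["loss_scale"] / scale,
                                      state["min_threshold"], minval)
            state["nan_count"] = 0
            state["last_iter"] = it
    elif it - state["last_iter"] >= interval:
        # a full interval without overflow: try a larger loss scale
        state["loss_scale"] = min(state["loss_scale"] * scale, maxval)
        state["last_iter"] = it
    return state["loss_scale"]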
For extremely low precision representations with large rounding errors such as the one proposed here (ε = 0.125), the choice of rounding method can have a significant influence on the numeric accuracy and overall applicability of the numeric format. Previous studies have shown that stochastic rounding can be effective for training neural networks using low-precision fixed point formats.

[Figure 2: (left) Resnet-50 convergence with different loss scale values: the model failed to converge with loss scale=1000, performed better with a 2.3% accuracy loss at loss scale=4000, and showed full convergence at loss scale=10 000; (right) dynamic loss scaling with a gradually increasing minimum threshold for the scaling factor.]

[Algorithm 1, UPDATE_LOSS_SCALE(loss_scale, has_nan, iter): parameters minval = 2, maxval = 2^15, interval = 500, scale = 2, nan_threshold = 2; initial state nan_count = 0, last_iter = 0, last_nan_iter = 0; returns the updated loss_scale.]

The most widely supported rounding method in hardware today is RNE (round to nearest even), because it is easier to implement and requires a smaller silicon area. In this section, we explore the impact of both RNE and stochastic rounding methods on model convergence and generalization. Our early experiments showed that, for smaller networks such as Resnet-18, RNE proved quite effective when trained on the ImageNet-1K data set. However, when we trained ResNet-50 we observed some interesting results. Figure 3 shows the convergence plots for Resnet-50 using the RNE rounding method applied to quantized weights, activations and gradients. The model displayed significant over-fitting behaviour, as indicated by the increased validation error, while the training error mostly follows the baseline, as shown in Figures 3a and 3b. Multiple experiments indicated that this behaviour is caused by the noisy error gradients during early epochs, which lead to unconstrained growth in model parameters. This is indicated by a steep increase in the L2 regularization parameter, as shown in Figure 3c. Regularization loss is computed using the formula shown in Equation 1, where λ is the weight decay parameter and W is the total number of weights. Increased regularization loss leads to more noisy gradients, which further exacerbates this behaviour. An interesting observation about the L2 regularization loss is that for ResNet-18, the L2-loss is low at the beginning and increases gradually with iterations. On the other hand, for ResNet-50 the L2-loss is high at the beginning, due to the initialization of low fan-in 1x1 convolutions, and needs to dip a little before gradually rising again. We suspect that this property of the initialization leads to noisier behavior of ResNet-50 in the earlier iterations as compared to ResNet-18. Therefore, for the ResNet-50 model stochastic rounding is essential. In order to understand the issue of regularization independent of the choice of rounding method, we conducted additional experiments using RNE with other forms of regularization. Figure 4a compares the 'Dropout' method with the 'no regularization' method, which uses quantization noise as an implicit regularizer with no explicit regularization term. In both these cases, the models performed much better than using L2 regularization with RNE, leading us to the conclusion that RNE was ineffective in regulating quantization noise in gradients, causing unconstrained growth in model parameters. Unlike deterministic rounding techniques, stochastic rounding computes the probability of rounding using information from several discarded bits of the input, making it less prone to introducing large rounding errors.
We studied the error behaviour of Resnet-50 by applying stochastic rounding on activations and gradients to regulate quantization noise in the gradients, which in turn can improve the effectiveness of explicit regularization methods. Our stochastic rounding method is defined as follows: Round(x, k) = ⌊x⌋_k + ε with probability P = (x − ⌊x⌋_k)/ε, and ⌊x⌋_k with probability 1 − P, where k is the target precision, ⌊x⌋_k denotes x truncated to k-bit precision, ε is the machine epsilon (the spacing between representable values at that precision), and r is a random value generated by a pseudo-random number generator and used to realize the probability P. We built a TensorFlow based training platform that can accurately emulate the numeric properties of 8-bit floating point on existing floating point hardware. Training experiments were conducted using open source model implementations from TensorFlow and OpenSeq2Seq. Our training framework internally updates the training graph by inserting quantization nodes in the forward, backward and weight update paths for all convolution and GEMM kernels, as described in Section 3. Using the proposed training method, we have successfully trained two residual networks (Resnet-34/50) on the ImageNet-1K data set. For the 8-bit training experiments, we have quantized the Convolution, BatchNorm (stats are accumulated at full precision), ReLU and EltWise layers, while the data layer and SoftMax layers use full precision. We have maintained the first Convolution layer, the following BatchNorm layer, and the final Dense layer at half-precision to minimize the accuracy loss. The first convolution layer is expected to have a minimum of eight mantissa bits to accurately represent the normalized image data passed from the data layer. We also store the master copy of weights in half-precision; this can potentially speed up the weight update operation by up to 2×, because the operation is memory bound. Since the weight update computation is performed at full precision (as described in Section 3), to realize these bandwidth benefits the up/down conversion of the master weights between full and half precision formats must be strictly performed in the cache or near memory. We have used identical batch sizes and hyper parameters for both full precision and 8-bit training experiments. Both sets of experiments also employ 'warm up' for the first 5 epochs, with an initial learning rate of 0.128, and train for a total of 90 epochs to achieve the final accuracy. Table 2 summarizes the validation accuracy achieved by convolution networks on the ImageNet-1K dataset. In addition to convolution networks, we have also trained two state-of-the-art machine translation workloads (GNMT and Transformer) and demonstrated BLEU scores matching the single precision baselines. We trained an 8-layer GNMT encoder/decoder LSTM model with 1024 recurrent units and 1024 attention units. We trained this network using 8-bit floating point format for all MatMul operations, including the LSTM and attention modules; the embedding lookup is performed at 8-bit while the updates are computed at full precision. The activation functions such as tanh and sigmoid use the half-precision data type. Our 8-bit training experiments use identical batch size and hyper parameters to those of the full precision baseline, and the network is trained for 10 epochs. We used the loss scaling schedule described in Section 3.1. We also trained a 6-layer Transformer-big translation network with roughly 200M parameters. For the 8-bit experiments, we quantized all the MatMul operations in the encoder and decoder, including the attention layers. The embedding lookup operation is performed at 8-bit precision while the updates are computed at full precision.
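The stochastic rounding rule above can be emulated in software in the same way as the RNE sketch shown earlier. The version below is again an illustrative NumPy sketch with names of our own choosing: it truncates to the representable grid and rounds up with probability equal to the normalized remainder, so the quantized value is unbiased in expectation.

import numpy as np

def quantize_fp8_stochastic(x, exp_bits=5, man_bits=2, rng=None):
    # Emulated stochastic rounding to an s=1,e=5,m=2 format (sketch).
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = 2 ** exp_bits - 2 - bias
    min_exp = 1 - bias
    sign, mag = np.sign(x), np.abs(x)
    exp = np.clip(np.floor(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp)
    eps = 2.0 ** (exp - man_bits)         # machine epsilon at this exponent
    lower = np.floor(mag / eps) * eps     # truncate to the representable grid
    p = (mag - lower) / eps               # probability of rounding up
    r = rng.random(size=mag.shape)        # pseudo-random draw in [0, 1)
    q = lower + eps * (r < p)
    max_val = (2 - 2.0 ** (-man_bits)) * 2.0 ** max_exp
    return sign * np.minimum(q, max_val)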
We use the identical configuration for both baseline and 8-bit training experiments. Both GNMT and Transformer models were trained on the large-scale WMT2016 English→German dataset consisting of 4.5 million sentence pairs. The results are summarized in Table 4. We demonstrate state-of-the-art accuracy across multiple data sets (ImageNet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, Transformer) than previously reported. We propose an easy-to-implement and scalable solution for building FP8 compute primitives, eliminating the need for stochastic rounding hardware in the critical compute path, as proposed by , thereby reducing the cost and complexity of the MAC unit. We explore issues around gradient underflow and quantization noise that arise as a result of using the proposed 8-bit numeric format for large scale neural network training. We propose solutions to deal with these problems in the form of enhanced loss scaling and stochastic rounding.
We demonstrated state-of-the-art training results using 8-bit floating point representation across Resnet, GNMT and Transformer.
1,790
scitldr
Loss functions play a crucial role in deep metric learning; thus a variety of them have been proposed. Some supervise the learning process by pairwise or triplet-wise similarity constraints while others take advantage of structured similarity information among multiple data points. In this work, we approach deep metric learning from a novel perspective. We propose instance cross entropy (ICE), which measures the difference between an estimated instance-level matching distribution and its ground-truth one. ICE has three main appealing properties. Firstly, similar to categorical cross entropy (CCE), ICE has a clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision. Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size. Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated. It rescales samples' gradients to control the differentiation degree over training examples instead of truncating them by sample mining. In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE. Deep metric learning (DML) aims to learn a non-linear embedding function (a.k.a. distance metric) such that the semantic similarities over samples are well captured in the feature space. Due to its fundamental function of learning discriminative representations, DML has diverse applications, such as image retrieval, clustering, verification, few-shot learning and zero-shot learning. A key to DML is to design an effective and efficient loss function for supervising the learning process, thus significant efforts have been made (; ; ; ;). Some loss functions learn the embedding function from pairwise or triplet-wise relationship constraints (; ;). However, they are known to not only suffer from an increasing number of non-informative samples during training, but also to consider only a few instances per loss computation. Therefore, informative sample mining strategies have been proposed (; ; b). Recently, several methods consider semantic relations among multiple examples to exploit their similarity structure (; ;). Consequently, these structured losses achieve better performance than pairwise and triplet-wise approaches. In this paper, we tackle the DML problem from a novel perspective. Specifically, we propose a novel loss function inspired by CCE. CCE is well known in classification problems owing to the fact that it has an intuitive probabilistic interpretation and achieves great performance, e.g., in ImageNet classification. However, since CCE learns a decision function which predicts the class label of an input, it learns class-level centres for reference (; a). Therefore, CCE is not scalable to infinite classes and cannot generalise well when it is directly applied to DML. With scalability and structured information in mind, we introduce instance cross entropy (ICE) for DML. It learns an embedding function by minimising the cross entropy between a predicted instance-level matching distribution and its corresponding ground-truth. Given a query, CCE aims to maximise its matching probability with the class-level context vector (weight vector) of its ground-truth class, whereas ICE aims to maximise its matching probability with its similar instances.
As ICE does not learn class-level context vectors, it is scalable to infinite training classes, which is an intrinsic demand of DML. Similar to (; ; ; ;), ICE is a structured loss as it also considers all other instances in the mini-batch of a given query. We illustrate ICE with a comparison to other structured losses in Figure 1. A common challenge of instance-based losses is that many training examples become trivial as the model improves. Therefore, we integrate seamless sample reweighting into ICE, which functions similarly to various sample mining schemes (; ; ;). Existing mining methods require either a separate time-consuming process, e.g., class mining, or distance thresholds for data pruning (; ;). Instead, our reweighting scheme works without explicit data truncation and mining. It is motivated by the relative weight analysis between two examples. The current common practice of DML is to learn an angular embedding space by projecting all features to a unit hypersphere surface (; ;). We identify the challenge that, without sample mining, informative training examples cannot be differentiated and emphasised properly because the relative weight between two samples is strictly bounded. We address it by sample reweighting, which rescales samples' gradients to control the differentiation degree among them. Finally, for intraclass compactness and interclass separability, most methods (; ; ;) use distance thresholds to decrease intraclass variances and increase interclass distances. In contrast, we achieve the target from the perspective of instance-level matching probability. Without any distance margin constraint, ICE makes no assumptions about the boundaries between different classes. Therefore, ICE is easier to apply in applications where we have no prior knowledge about intraclass variances. Our contributions are summarised: We approach DML from a novel perspective by taking in the key idea of matching probability in CCE. We introduce ICE, which is scalable to an infinite number of training classes and exploits structured information for learning supervision. A seamless sample reweighting scheme is derived for ICE to address the challenge of learning an embedding subspace by projecting all features to a unit hypersphere surface. We show the superiority of ICE by comparing with state-of-the-art methods on three real-world datasets. Heated-up, NormFace, TADAM, DRPR, Prototypical Networks, Proxy-NCA. These methods calculate the similarities between a query and class centres (a.k.a. proxies or prototypes) instead of other instances (; a; ; ; ;). In Heated-up and NormFace, the class centres are learned parameters of a fully connected layer, which is similar to Center Loss. In TADAM, DRPR, and Prototypical Networks, a class centre is the mean over all embeddings of a class. By comparing a sample with other examples rather than class centres, more informative instances can contribute more in ICE. NCA, S-NCA. NCA learns similarity relationships between instances. Since the original NCA learns over the whole training data and its time complexity is quadratically proportional to the scale of the training data, S-NCA was proposed recently with linear time complexity with respect to the training data size. Instead, ICE is scalable to infinite training data by iterative learning on randomly sampled small-scale instance matching tasks. S-NCA and NCA share the same learning objective. However, they treat the event of all similar instances being correctly recognised as a whole by a sum accumulator.
Instead, we maximise the probability of every similar sample being correctly identified individually. Therefore, ICE's optimisation task is harder, leading to better generalisation. N-pair-mc. The aim of N-pair-mc is to identify one positive example from N − 1 negative examples of N − 1 classes (one negative example per class). In other words, only one positive and one negative instance per class are considered per loss computation by simulating CCE exactly. Instead, ICE exploits all negative examples to benefit from richer information. When constructing mini-batches, N-pair-mc requires expensive offline class mining and samples 2 images per class. According to , N-pair-mc is superior to NCA.

[Figure 1: Our ICE and related losses. The first row shows prior work of a query versus class centres/means, while the second row displays work of a query versus instances. Note that the cross entropy computation and interpretation are different in different losses. For a mini-batch, we show two classes, i.e., circle and rectangle, with 3 examples per class, except N-pair-mc which requires 2 samples per class. The icons are at the bottom right. GT means ground-truth matching distribution. When illustrating the losses of a query versus instances in (c), (d) and (e), we index those instances with numbers for clarity, except the query. (a) A query versus learned parametric class centroids; all T classes in the training set are considered; prior work: CCE, Heated-up, NormFace. (b) A query versus non-parametric class means; only classes in the mini-batch are considered; representative work: TADAM, DRPR, Prototypical Networks. (c) N-pair-mc: a query versus one instance per class; a mini-batch has to contain exactly 2 examples per class; only one instance per negative class is randomly sampled out of the 2. (d) NCA and S-NCA: a query versus the remaining instances. (e) ICE: a query versus the remaining instances, with one matching distribution per anchor-positive pair.]

Hyperbolic. It also aims to preserve the similarity structures among instances. However, it learns a hyperbolic embedding space where the distance depends only on the norm of the embeddings. Instead, we learn an angular space where the similarity depends only on the angle between embeddings. Besides, Hyperbolic requires a separate sampling of semantic subtrees when the dataset is large. Mining informative examples or emphasising them are popular strategies in DML: 1) Mining non-trivial samples during training is crucial for faster convergence and better performance. Therefore, sample mining is widely studied in the literature. In pairwise or triplet-wise approaches (; ; ;), data pairs with higher losses are emphasized during gradient back-propagation. As for structured losses, Lifted Struct also focuses on harder examples. Furthermore, and propose to mine hard negative classes to construct informative input mini-batches. Proxy-NCA addresses the sampling problem by learning class proxies. 2) Assigning higher weights to informative examples is another effective scheme (a; c). Beyond these, there are some other novel perspectives to address sample mining or weighting, e.g., hardness-aware example generation and divide-and-conquer of the embedding space. Our proposed ICE has a similarity scaling factor which helps to emphasise informative examples more. Moreover, as described in , very hard negative pairs are likely to be outliers and it is safer to mine semi-hard ones. In ICE, the similarity scaling factor is flexible in that it controls the emphasis degree on harder samples.
Therefore, a proper similarity scaling factor can help mine informative examples and alleviate the disturbance of outliers simultaneously. What makes ours different is that we do not heuristically design the mining or weighting scheme. Instead, it is built in and we simply scale it, as demonstrated in Section 3.4. We remark that Prototypical Networks, Matching Networks and NCA are also scalable and do not require distance thresholds. Therefore, they are illustrated and differentiated in Figure 1. Matching Networks are designed specifically for one-shot learning. Similarly, design mAP-SSVM and mAP-DLM for few-shot learning, which directly optimise the retrieval performance mAP when multiple positives exist. FastAP is similar and optimises the rank-based average precision. Instead, ICE processes one positive at a time. Beyond this, the setting of few-shot learning is different from deep metric learning: in few-shot learning, each mini-batch is a complete subtask and contains a support set as training data and a query set as validation data. Few-shot learning applies episodic training in practice. Remarkably, TADAM formulates instances versus class centres and also has a metric scaling parameter for adjusting the impact of different class centres. Contrastively, ICE adjusts the influence of other instances. Furthermore, ours is not exactly distance metric scaling since we simply apply naive cosine similarity as the distance metric at the testing stage. That is why we interpret it as a weighting scheme during training. X = {{x_i^c, y_i^c}_{i=1}^{N_c}}_{c=1}^{C} is an input mini-batch, where x_i^c ∈ R^{h×w×3} and y_i^c ∈ {1, ..., C} represent the i-th image of the c-th class and the corresponding label, respectively; {x_i^c}_{i=1}^{N_c} is a set of N_c images belonging to the c-th class, ∀c, N_c ≥ 2. The number of classes C is generally much smaller than the total number of classes T in the training set (C ≪ T). Note that T is allowed to be extremely large in DML. Given a sufficient number of different mini-batches, our goal is to learn an embedding function f that captures the semantic similarities among samples in the feature space. We represent the deep embeddings of X as {{f_i^c}_{i=1}^{N_c}}_{c=1}^{C}, where f_i^c = f(x_i^c). Given a query, 'positives' and 'negatives' refer to samples of the same class and of different classes, respectively. CCE is widely used in a variety of tasks, especially classification problems. As demonstrated in , a deep classifier consists of two joint components: deep feature learning and linear classifier learning. The feature learning module is a transformation (i.e., the embedding function f) composed of convolutional and non-linear activation layers. The classifier learning module has one neural layer, which learns T class-level context vectors such that any image has the highest compatibility (logit) with its ground-truth class context vector, where W denotes the learned parameters of the classifier. During training, the goal is to maximise the joint probability of all instances being correctly classified. The identical form is to minimise the negative log-likelihood, i.e., L_CCE(X; f, W). Therefore, the learning objective of CCE is min_{f, W} L_CCE(X; f, W). In contrast to CCE, ICE is a loss for measuring instance matching quality (lower ICE means higher quality) and does not need class-level context vectors. We remark that an anchor may have multiple positives, which are isolated in separate matching distributions. There is a matching distribution for every anchor-positive pair versus their negatives, as displayed in Figure 1e. Let f_a^c be a random query; we compute its similarities with the remaining points using the dot product.
We define the probability of the given anchor x_a^c matching one of its positives x_i^c (i ≠ a) as follows: p(x_i^c | x_a^c) = exp(f_a^c · f_i^c) / (exp(f_a^c · f_i^c) + Σ_{o ≠ c} Σ_j exp(f_a^c · f_j^o)), where f_a^c and f_i^c are the embeddings of x_a^c and x_i^c and the second term of the denominator sums over the anchor's negatives. We remark: the dot product measures the similarity between two vectors; the equation above represents the probability of a query matching a positive, while the corresponding CCE expression is the probability of a query matching its ground-truth class. To maximise p(x_i^c | x_a^c), we minimise the discrepancy between the predicted and ground-truth distributions, which is equivalent to minimising their cross entropy. Since the ground-truth distribution is one-hot encoded, the cross-entropy is − log p(x_i^c | x_a^c). To be more general, for the given anchor x_a^c, there may exist multiple matching points when N_c > 2, i.e., |{x_i^c}_{i≠a}| = N_c − 1 > 1. In this case, we predict one matching distribution per positive point. Our goal is to maximise the joint probability of all positive instances being correctly identified, i.e., Π_{i≠a} p(x_i^c | x_a^c). A case of two positives matching a given query is described in Figure 1e. In terms of a mini-batch, each image in X serves as the anchor iteratively and we aim to maximise the joint probability of all queries, {{Π_{i≠a} p(x_i^c | x_a^c)}_{a=1}^{N_c}}_{c=1}^{C}. Equivalently, we can achieve this by minimising the sum of all negative log-likelihoods. Therefore, our proposed ICE on X is L_ICE(X; f) = − Σ_{c=1}^{C} Σ_{a=1}^{N_c} Σ_{i≠a} log p(x_i^c | x_a^c). Following the common practice in existing DML methods, we apply L2-normalisation to feature embeddings before the inner product. Therefore, the inner product denotes the cosine similarity. The similarity between two feature vectors is determined by their norms and the angle between them. Without L2 normalisation, the feature norm can be very large, making the model training unstable and difficult. With L2 normalisation, all features are projected to a unit hypersphere surface. Consequently, the semantic similarity score is merely determined by the direction of the learned representations. Therefore, L2 normalisation can be regarded as a regulariser during training. Note that the principle is quite different from recent hyperspherical learning methods (a; a; a). They enforce the learned weight parameters to lie on a unit hypersphere surface and diversify their angles. In contrast, feature normalisation is output regularisation and is invariant to the parametrisation of the underlying neural network. In summary, our learning objective is min_f L_ICE(X; f). The feature L2-normalisation layer is implemented according to Wang et al. (2017a). It is a differentiable layer and can be easily inserted at the output of a neural net. Intrinsic sample weighting. We find that ICE emphasises harder samples more, from the perspective of gradient magnitude. We demonstrate this by deriving the partial derivatives of L_ICE(X; f) with respect to positive and negative examples. Given the query x_a^c, the partial derivative with respect to any of its positive instances is obtained by the chain rule; since the embeddings are L2-normalised, the gradient magnitude of a positive grows as its matching probability decreases, i.e., harder positives receive larger weights. Similarly, the partial derivative with respect to any of its negative samples is proportional to that negative's matching probability. Clearly, harder negative samples have higher matching probabilities and weights. Relative weight analysis. In general, the relative weight is more notable, as the exact weight will be rescaled during training, e.g., by linear post-processing such as multiplying by the learning rate. Therefore, we analyse the relative weight between two positive points of the same anchor (i ≠ k, i ≠ a, k ≠ a), and similarly the relative weight between two negative points of the same anchor (o ≠ c, l ≠ c). Note that the positive relative weight is decided only by the feature embeddings.
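A minimal NumPy sketch of the basic ICE computation over one mini-batch is shown below (our own illustration; the weight scaling introduced in the next paragraph is omitted, and the embeddings are assumed to be L2-normalised already). For every anchor and every one of its positives, the positive competes only against the anchor's negatives, as in the matching probability defined above.

import numpy as np

def ice_loss(embeddings, labels):
    # embeddings: (N, d) L2-normalised features; labels: (N,) class ids.
    labels = np.asarray(labels)
    sim = embeddings @ embeddings.T              # pairwise dot products
    idx = np.arange(len(labels))
    loss = 0.0
    for a in idx:
        pos = idx[(labels == labels[a]) & (idx != a)]
        neg = idx[labels != labels[a]]
        neg_sum = np.exp(sim[a, neg]).sum()      # anchor's negative set
        for i in pos:                            # one matching distribution per positive
            p = np.exp(sim[a, i]) / (np.exp(sim[a, i]) + neg_sum)
            loss -= np.log(p)
    return loss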
Non-linear scaling for controlling the relative weight. Inspired by , we introduce a scaling parameter to modify the absolute weight non-linearly: the similarity is multiplied by s before the exponential, i.e., exp(s · f_a^c · f_i^c) replaces exp(f_a^c · f_i^c) in the matching probability and the corresponding weights, where s ≥ 1 is the scaling parameter. In contrast to p and w, p̂ and ŵ represent the rescaled matching probability and partial derivative weight, respectively. We remark that we scale the absolute weight non-linearly, which is an indirect way of controlling the relative weight. We do not modify the relative weight directly; the relative-weight expressions above are only for introducing our motivation. Our objective is to maximise an anchor's matching probability with any of its positive instances competing against its negative set. Therefore, we normalise the rescaled weights based on each anchor. Note that the denominators of the normalised weights are the accumulated weights of positives and negatives w.r.t. x_a^c, respectively. Although there are many more negatives than positives, the negative set and the positive set contribute equally as a whole, as indicated by the factor 1/2. N = Σ_{c=1}^{C} N_c is the total number of instances in X. We select each instance as the anchor iteratively and treat all anchors equally, as indicated by the factor 1/N. It is worth noting that during back-propagation, the magnitudes of the partial derivatives are rescaled according to these normalised weights. To make it more clear and intuitive for understanding, we now analyse a naive case of ICE, where there are two samples per class in every mini-batch, i.e., ∀c, N_c = 2 and |{x_i^c}_{i≠a}| = N_c − 1 = 1. In this case, for each anchor (query), there is only one positive among the remaining data points. As a result, the weighting schemes for positives and for negatives can be simplified. Firstly, we have N anchors that are treated equally, as indicated by 1/N. Secondly, for each anchor, we aim to recognise its positive example correctly. However, there is a sample imbalance problem because each anchor has only one positive and many negatives. ICE addresses it by treating the positive set (a single point) and the negative set (multiple points) equally, i.e., the factor 1/2 in the weighting expressions. Finally, as there are many negative samples, we aim to focus more on informative ones, i.e., harder negative instances with higher matching probabilities with a given anchor. The non-linear transformation can help control the relative weight between two negative points. The weighting scheme shares the same principle as the popular temperature-based categorical cross entropy. The key is that we should consider not only focusing on harder examples, but also the emphasis degree. Algorithm 1 summarises the learning process with ICE:

Algorithm 1: Learning with ICE.
Batch setting: C classes, N_c images from the c-th class, batch size N = Σ_{c=1}^{C} N_c.
Hyper-setting: the scaling parameter s and the number of iterations τ.
Input: initialised embedding function f, iteration counter iter = 0.
Output: updated f.
for iter < τ do
    iter = iter + 1
    Sample one mini-batch randomly, X = {{x_i^c, y_i^c}_{i=1}^{N_c}}_{c=1}^{C}.
    Step 1: Feed X forward into f to obtain the feature representations {{f_i^c}_{i=1}^{N_c}}_{c=1}^{C}.
    Step 2: Compute the similarities between an anchor and the remaining instances; every example serves as the anchor iteratively.
    Step 3: Back-propagate gradients to update the parameters of f using the learning objective above.
end for

As presented there, the input data format of ICE is the same as that of CCE, i.e., images and their corresponding labels. In contrast to other methods which require rigid input formats, e.g., triplets and n-pair tuplets, ICE is much more flexible. We iteratively select one image as the anchor. For each anchor, we aim to maximise its matching probabilities with its positive samples against its negative examples.
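The effect of the scaling parameter s on the relative weight can be seen with a few lines of code; the example below is purely illustrative (the similarity values are made up) and shows that, as s grows, the harder negative receives an increasingly dominant share of the normalised weight.

import numpy as np

def negative_weights(sims, s):
    # Normalised weights of negatives when similarities are scaled by s
    # before the exponential (illustrative sketch).
    w = np.exp(s * np.asarray(sims, dtype=np.float64))
    return w / w.sum()

# a hard negative (cosine similarity 0.8) versus an easy one (0.2)
for s in (1, 16, 48, 64):
    hard, easy = negative_weights([0.8, 0.2], s)
    print(f"s={s:2d}  hard={hard:.4f}  easy={easy:.4f}")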
Therefore, the computational complexity over one mini-batch is O(N²), which is the same as recent online metric learning approaches (; b). Note that in FaceNet and N-pair-mc, expensive sample mining and class mining are applied, respectively. For data augmentation and preprocessing, we follow (;). In detail, we first resize the input images to 256 × 256 and then crop them at 227 × 227. We use random cropping and horizontal mirroring for data augmentation during training. To compare fairly with the results reported in , we use a centre crop without horizontal flipping in the test phase. For the embedding size, we set it to 512 on all datasets following (; ; a). To compare fairly with (; ;), we choose GoogLeNet V2 (with batch normalisation) as the backbone architecture, initialised by the publicly available model pretrained on ImageNet. We simply change the original 1000-neuron fully connected layers followed by softmax normalisation and CCE to 512-neuron fully connected layers followed by the proposed ICE. For faster convergence, we randomly initialise the new layers and optimise them with a 10 times larger learning rate than the others, as in . We implement our algorithm in the Caffe framework. The source code will be available soon. Datasets. Following the evaluation protocol in (;), we test our proposed method on three popular fine-grained datasets: CARS196, CUB-200-2011 and SOP. A summary of the datasets is given in Table 1. We also keep the same train/test splits. We remark that, to test the generalisation and transfer capability of the learned deep metric, the training and test classes are disjoint. Evaluation protocol. We evaluate the learned representations on the image retrieval task in terms of Recall@K performance. Given a query, its K nearest neighbours are retrieved from the database. Its retrieval score is one if there is an image of the same class among the K nearest neighbours and zero otherwise. Recall@K is the average score over all queries. Training settings. All the experiments are run on a single PC equipped with a Tesla V100 GPU with 32GB RAM. For optimisation, we use stochastic gradient descent (SGD) with a weight decay of 1e−5 and a momentum of 0.8. The base learning rate is set as 1e. Remarks. For a fair comparison, we remark that the group of methods (; ; b; ;) using GoogLeNet V1 and another group (; ;) using ResNet-50 are not benchmarked. Besides, ensemble models (; ;) are not considered. HTL also uses GoogLeNet V2, but it constructs a hierarchical similarity tree over the whole training set and updates the tree every epoch, thus being highly unscalable and expensive in terms of both computation and memory. That is why HTL achieves better performance on small datasets but performs worse than ours on the large dataset SOP. Finally, there are some other orthogonal deep metric learning research topics that are worth studying together in the future, e.g., a robust distance metric and metric learning with continuous labels. In GoogLeNet V2, there are three fully connected layers of different depth. We refer to them based on their depth: L for the low-level layer (inception-3c/output), M for the mid-level layer (inception-4e/output) and H for the high-level layer (inception-5b/output). By default, we use only 'H'. We also report the results of their combination (L, M, H) for reference, following RLL (a).
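The Recall@K protocol described above is simple enough to state in a few lines; the sketch below (our own code, using cosine similarity between L2-normalised embeddings) retrieves the K nearest neighbours of each query from the rest of the test set and scores a hit if any of them shares the query's class.

import numpy as np

def recall_at_k(embeddings, labels, ks=(1, 2, 4, 8)):
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)          # a query never retrieves itself
    ranked = np.argsort(-sim, axis=1)       # neighbours sorted by similarity
    labels = np.asarray(labels)
    scores = {}
    for k in ks:
        hits = (labels[ranked[:, :k]] == labels[:, None]).any(axis=1)
        scores[k] = hits.mean()
    return scores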
Competitors. All the compared baselines, Triplet Semihard, Lifted Struct, N-pair-mc, Struct Clust, Spectral Clust, Proxy-NCA, RLL (a) and our ICE, are trained and evaluated using the same settings: GoogLeNet V2 serves as the backbone network; all models are initialised with the same pretrained model on ImageNet; all apply the same data augmentation during training and use a centre-cropped image during testing. The results of some baselines (; ;) are from , which means they are reimplemented there for a fair comparison. In addition, the results of vanilla GoogLeNet V2 pretrained on ImageNet without fine-tuning and with fine-tuning via minimising CCE are reported in , which can be regarded as the most basic baselines. Among these baselines, Proxy-NCA is not scalable as class-level proxies are learned during training. Struct Clust and Spectral Clust are clustering-motivated methods which explicitly aim to optimise the clustering quality. We highlight that the clustering performance measure Normalised Mutual Information (NMI) (Schütze et al., 2008) is not a good assessment for SOP because SOP has a large number of classes but only 5.3 images per class on average. Therefore, we only report and compare Recall@K performance. Results. Table 3 compares the results of our ICE with those of the state-of-the-art DML losses. ICE achieves the best Recall@1 performance on all benchmarks. We observe that only RLL achieves comparable performance on a few terms. However, RLL is more complex since it has three hyper-parameters in total: one weight scaling parameter and two distance margins for positives and negatives, respectively. In addition, its perspective is different since it processes the positive set together, similarly to (; c). We note that (c) is also complex in designing weighting schemes and contains four control hyper-parameters. However, our Recall@1 on SOP is 77.3%, which is only 0.9% lower than the 78.2% of (c). It is also worth mentioning that among these approaches, except for fine-tuned models with CCE, only our method has a clear probability interpretation and aims to maximise the joint instance-level matching probability. As observed, apart from being unscalable, CCE's performance is much worse than that of the state-of-the-art methods. Therefore, ICE can be regarded as a successful exploration of softmax regression for learning deep representations in DML. The t-SNE visualisation of learned embeddings is available in the supplementary material. We empirically study the impact of the weight scaling parameter s, which is the only hyper-parameter of ICE. It functions similarly to the popular sample mining or example weighting (a; b; c) widely applied by the baselines in Table 3. Generally, different s corresponds to a different emphasis degree on difficult examples. When s is larger, more difficult instances are assigned relatively higher weights. In general, small datasets are more sensitive to minor changes of hyper-settings and are much easier to overfit. Therefore, the experiments are conducted on the large dataset SOP. The results are shown in Table 2. Note that when s is too small, e.g., s = 1, we observe that the training does not converge, which demonstrates the necessity of weighting/mining samples. The most significant observation is that focusing on difficult samples is better, but the emphasis degree should be properly controlled. When s increases from 16 to 64, the performance grows gradually. However, when s = 80, we observe that the performance drops considerably.
That may be because extremely hard samples, e.g., outliers, are emphasised when s is too large. In this paper, we propose a novel instance-level softmax regression framework, named instance cross entropy, for deep metric learning. Firstly, the proposed ICE has a clear probability interpretation and exploits structured semantic similarity information among multiple instances. Secondly, ICE is scalable to infinitely many classes, which is required by DML. Thirdly, ICE has only one weight scaling hyper-parameter, which serves as a form of mining informative examples and can be easily selected via cross-validation. Finally, distance thresholds are not applied to achieve intraclass compactness and interclass separability. This indicates that ICE makes no assumptions about intraclass variances and the boundaries between different classes. Therefore, ICE has general applicability. A.1 BATCH CONTENT We evaluate the impact of batch content, which consists of C classes and k images per class, i.e., ∀c, N_c = k. The batch size N = C × k is set to 180. In our experiments, we change the number of classes C from 36 to 90, and the number of images k from 2 to 5, while keeping the batch size unchanged. Table 4 shows the results on the SOP dataset. We observe that when there are more classes in the mini-batch, the performance is better. We conjecture that as the number of classes increases, the mini-batch training becomes more difficult and helps the model to generalise better. To explore different batch sizes N, we fix k = 2 and only change C. In this case, N = C × 2. Table 5 shows that as the number of classes increases, the performance grows. In detail, when the number of classes increases from 50 to 90, the performance rises from 74.4% to 77.3% accordingly. One reason may be that as the number of classes increases, the mini-batch fits the global structure of the test set better, where there are a large number of classes but only a few positive examples. In addition, the increasing difficulty of mini-batch training can help the model to generalise better. The dimension of feature representations is an important factor in many DML methods. We conduct experiments on SOP to see the influence of different embedding sizes. The results are presented in Table 6. We observe that when the embedding size is very small, e.g., 64, the performance is much worse. The performance increases gradually as the embedding size grows. The t-SNE visualisation of learned embeddings is available in Figures 2, 3, 4.
We propose instance cross entropy (ICE), which measures the difference between an estimated instance-level matching distribution and its ground-truth one.
1,791
scitldr
In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term predictions, the executed planner will exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid. An exploration strategy can be devised by searching for unlikely trajectories under the model. Our method achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings. Reinforcement Learning (RL) is an agent-oriented learning paradigm concerned with learning by interacting with an uncertain environment. Combined with deep neural networks as function approximators, deep reinforcement learning (deep RL) algorithms recently allowed us to tackle highly complex tasks. Despite recent success in a variety of challenging environments such as Atari games BID4 and the game of Go, it is still difficult to apply RL approaches in domains with high-dimensional observation-action spaces and complex dynamics. Furthermore, most popular RL algorithms are model-free as they directly learn a value function BID34 or policy BID43 without trying to model or predict the environment's dynamics. Model-free RL techniques often require large amounts of training data and can be expensive, dangerous or impossibly slow, especially for agents and robots acting in the real world. On the other hand, model-based RL BID49 BID14 BID11 provides an alternative approach by learning an explicit representation of the underlying environment dynamics. The principal component of model-based methods is to use an estimated model as an internal simulator for planning, hence limiting the need for interaction with the environment. Unfortunately, when the dynamics are complex, it is not trivial to learn models that are accurate enough to later ensure stable and fast learning of a good policy. The most widely used techniques for model learning are based on one-step prediction. Specifically, given an observation o_t and an action a_t at time t, a model is trained to predict the conditional distribution over the immediate next observation o_{t+1}, i.e., p(o_{t+1} | o_t, a_t). Although computationally easy, the one-step prediction error is an inadequate proxy for the downstream performance of model-based methods as it does not account for how the model behaves when composed with itself. In fact, one-step modelling errors can compound after multiple steps and can degrade the policy learning. This is referred to as the compounding error phenomenon BID51 BID0 BID54. Other examples of models are autoregressive models such as recurrent neural networks BID32 that factorize naturally as log p_θ(o_{t+1}, a_{t+1}, o_{t+2}, a_{t+2}, ... | o_t, a_t) = Σ_{t'} log p_θ(o_{t'+1}, a_{t'+1} | o_1, a_1, ..., o_{t'}, a_{t'}). Training autoregressive models using maximum likelihood results in 'teacher-forcing', which breaks the training into one-step decisions.
Such sequential models are known to suffer from accumulating errors, as observed in BID30. Our key motivation is the following: a model of the environment should reason about (i.e., be trained to predict) the long-term transition dynamics p_θ(o_{t+1}, a_{t+1}, o_{t+2}, a_{t+2}, ... | o_t, a_t) and not just single-step transitions p_θ(o_{t+1} | o_t, a_t). That is, the model should predict what will happen in the long-term future, and not just the immediate future. We hypothesize (and test) that such a model would exhibit less cascading of errors and would learn better feature embeddings for improved performance. One way to capture long-term transition dynamics is to use latent-variable recurrent networks. Ideally, latent variables could capture higher-level structure in the data and help to reason about long-term transition dynamics. However, in practice it is difficult for latent variables to capture higher-level representations in the presence of a strong autoregressive model, as shown in BID17 BID16 BID18. To overcome this difficulty, we leverage recent advances in variational inference. In particular, we make use of the recently proposed Z-forcing idea BID16, which uses an auxiliary cost on the latent variable to predict the long-term future. Keeping in mind that more accurate long-term prediction is better for planning, we use two ways to inject future information into the latent variables. Firstly, we augment the dynamics model with a backward recurrent network (RNN) such that the approximate posterior of the latent variables depends on a summary of future information. Secondly, we force the latent variables to predict a summary of the future using an auxiliary cost that acts as a regularizer. Unlike one-step prediction, our approach encourages the predicted future observations to remain grounded in the real observations. Injection of information about the future can also help in planning, as it can be seen as injecting a plan for the future. In stochastic environment dynamics, unfolding the dynamics model may lead to unlikely trajectories due to errors compounding at each step during rollouts. In this work, we make the following key contributions: 1. We demonstrate that having an auxiliary loss to predict the longer-term future helps in faster imitation learning. 2. We demonstrate that incorporating the latent plan into the dynamics model can be used for efficient planning (for example, Model Predictive Control). We show the performance of the proposed method compared to existing state-of-the-art RL methods. 3. We empirically observe that using the proposed auxiliary loss could help in finding sub-goals in a partially observable 2D environment. We consider an agent in an environment that observes at each time step t an observation o_t. The execution of a given action a_t causes the environment to transition to a new unobserved state, return a reward and emit an observation at the next time step, sampled from p(o_{t+1} | o_{1:t}, a_{1:t}), where o_{1:t} and a_{1:t} are the observation and action sequences up to time step t. In many domains of interest, the underlying transition dynamics p are not known and the observations are very high-dimensional raw pixel observations. In the following, we explain our proposed approach to learn an accurate environment model that can be used as an internal simulator for planning. We focus on the task of predicting a future observation-action sequence (o_{1:T}, a_{1:T}) given an initial observation o_0.
We frame this problem as estimating the conditional probability distribution p(o_{1:T}, a_{1:T} | o_0). The latter distribution is modeled by a recurrent neural network with stochastic latent variables z_{1:T}. We train the model using variational inference. We introduce an approximate posterior over the latent variables. We maximize a regularized form of the Evidence Lower Bound (ELBO). The regularization comes from an auxiliary task we assign to the latent variables. The graphical model in FIG0 illustrates the dependencies in our generative model. Observations and latent variables are coupled by using an autoregressive model, the Long Short Term Memory (LSTM) architecture BID20, which runs through the sequence; f denotes its deterministic non-linear transition function and h_t the LSTM hidden state at time t. According to the graphical model in FIG0, the predictive distribution factorizes into per-time-step terms built from the following conditionals: 1. p_θ(o_t | h_t, z_t, a_{t−1}) is the observation decoder distribution, conditioned on the last action a_{t−1}, the hidden state h_t and the latent variable z_t. 2. p_θ(a_{t−1} | h_{t−1}, z_t) is the action decoder distribution, conditioned on the hidden state h_{t−1} and the latent variable z_t. 3. p_θ(z_t | h_{t−1}) is the prior over the latent variable z_t, conditioned on the hidden state h_{t−1}. All these conditional distributions are represented by simple distributions such as Gaussians. Their means and standard deviations are computed by multi-layered feed-forward networks. Although each single distribution is unimodal, the marginalization over the sequence of latent variables makes p_θ(o_{1:T}, a_{1:T} | o_0) highly multimodal. Note that the prior distribution of the latent random variable at time step t depends on all the preceding inputs via the hidden state h_{t−1}. This temporal structure of the prior has been shown to improve the representational power of the latent variable BID12 BID15 BID16. In order to overcome the intractability of posterior inference of latent variables given an observation-action sequence, we make use of amortized variational inference ideas BID27. We consider a recognition (or inference) network, a neural network which approximates the intractable posterior. The true posterior of a given latent variable z_t is p(z_t | h_{t−1}, a_{t−1:T}, o_{t:T}, z_{t+1:T}). For the sake of an efficient posterior approximation, we make the following design choices: 1. We drop the dependence of the posterior on the actions a_{t−1:T} and the future latent variables z_{t+1:T}. 2. To take into account the dependence on h_{t−1}, we share parameters between the generative model and the recognition model by making the approximate posterior a function of the hidden state h_{t−1} computed by the LSTM transition module f of the generative model. 3. To take into account the dependence on future observations o_{t:T}, we use an LSTM that processes the observation sequence backward as b_t = g(o_t, b_{t+1}), where g is a deterministic transition function and b_t is the LSTM backward hidden state at time t. 4. Finally, a feed-forward network takes as inputs h_{t−1} and b_t and outputs the mean and the standard deviation of the approximate posterior q_φ(z_t | h_{t−1}, b_t). In principle, the posterior should depend on future actions. To take into account the dependence on future actions as well as future observations, we could use an LSTM that processes the observation-action sequence in a backward manner.
In pilot trials, we conducted experiments with and without the dependence on actions for the backward LSTM and we did not notice a noticeable difference in terms of performance. Therefore, we chose to drop the dependence on actions in the backward LSTM to simplify the code. Now, using the approximate posterior, the Evidence Lower Bound (ELBO) on log p_θ(o_{1:T}, a_{1:T} | o_0) is derived in the standard way. Leveraging the temporal structure of the generative and inference networks, the ELBO breaks down over time steps as Σ_t E_{q_φ(z_t | h_{t−1}, b_t)}[ log p_θ(o_t | h_t, z_t, a_{t−1}) + log p_θ(a_{t−1} | h_{t−1}, z_t) ] − KL( q_φ(z_t | h_{t−1}, b_t) || p_θ(z_t | h_{t−1}) ). The main difficulty in latent variable models is how to learn meaningful latent variables that capture high-level abstractions in the underlying observed data. It has been challenging to combine a powerful autoregressive observation decoder with latent variables in a way that makes the latter carry useful information BID6. Consider the task of learning to navigate a building from raw images. We try to build an internal model of the world from observation-action trajectories. This is a very high-dimensional and highly redundant observation space. Intuitively, we would like our latent variables to capture an abstract representation describing the essential aspects of the building's topology needed for navigation, such as object locations and distances between rooms. The decoder will then encode high-frequency sources of variation such as objects' texture and other visual details. Training the model with a maximum likelihood objective is not sensitive to how the different levels of information are encoded. This could lead to two bad scenarios: either the latent variables are unused and the whole information is captured by the observation decoder, or the model learns a stationary auto-encoder that focuses on compressing a single observation BID24. The shortcomings described above are generally due to two main reasons: the approximate posterior provides a weak signal, or the model focuses on short-term reconstruction. In order to address the latter issue, we enforce our latent variables to carry useful information about the future observations in the sequence. In particular, we make use of the so-called "Z-forcing" idea: we consider training a conditional generative model p_ζ(b | z) of backward states b given the inferred latent variables z ∼ q_φ(z | h, b). This model is trained by maximizing the log-likelihood Σ_t E_{q_φ(z_t | h_{t−1}, b_t)}[ log p_ζ(b_t | z_t) ]. The loss above acts as a training regularizer that enforces the latent variables z_t to encode future information. The training objective is a regularized version of the ELBO. The regularization is imposed by the auxiliary cost, defined as the reconstruction term of the additional backward generative model. We bring together the ELBO and the reconstruction term, the latter multiplied by the trade-off parameter β, to define our final objective. We use the reparameterization trick BID27 BID42 and a single posterior sample to obtain unbiased gradient estimators of the ELBO. As the approximate posterior should be agnostic to the auxiliary task assigned to the latent variable, we do not account for the gradients of the auxiliary cost with respect to the backward network during optimization. Here we explain how we can use our dynamics model to help solve sequential RL tasks. We consider two settings: imitation learning, where a learner is asked to mimic an expert, and reinforcement learning, where an agent aims at maximizing its long-term performance.
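To make the regularized objective concrete, here is a small NumPy sketch of the per-time-step loss for diagonal Gaussian distributions. The helper names and the way the terms are combined follow the description above and are our own illustration rather than the authors' code; in a real implementation the auxiliary term's gradient would also be stopped from flowing into the backward network, as noted above.

import numpy as np

def gaussian_nll(x, mean, std):
    # negative log-likelihood under a diagonal Gaussian
    var = std ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def gaussian_kl(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, std_q) || N(mu_p, std_p) ) for diagonal Gaussians
    return np.sum(np.log(std_p / std_q)
                  + (std_q ** 2 + (mu_q - mu_p) ** 2) / (2 * std_p ** 2) - 0.5)

def step_loss(o_t, a_tm1, b_t, posterior, prior, obs_dec, act_dec, aux_dec, beta, rng):
    # posterior and prior are (mean, std) pairs; obs_dec, act_dec, aux_dec are
    # callables of z_t returning the (mean, std) of the corresponding decoder.
    mu_q, std_q = posterior
    mu_p, std_p = prior
    z_t = mu_q + std_q * rng.standard_normal(mu_q.shape)   # reparameterization trick
    recon = gaussian_nll(o_t, *obs_dec(z_t)) + gaussian_nll(a_tm1, *act_dec(z_t))
    aux = gaussian_nll(b_t, *aux_dec(z_t))   # Z-forcing: reconstruct the backward state
    return recon + gaussian_kl(mu_q, std_q, mu_p, std_p) + beta * aux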
Each trajectory consists of a sequence of observations o 1:T and a sequence of actions a 1:T executed by an expert. The goal is to train a learner to achieve -given an observation -an action as similar as possible to the expert's. This is typically accomplished via supervised learning over observation-action pairs from expert trajectories. However, this assumes that training observation-action pairs are i.i.d. This critical assumption implies that the learner's action does not influence the distribution of future observations upon which it acts. Moreover, this kind of approach does not make use of full trajectories we have at our disposals and chooses to break correlations between observation-actions pairs. In contrast, we propose to leverage the temporal coherence present in our training data by training our dynamic model using full trajectories. The advantage of our method is that our model would capture the training distribution of sequences. Therefore, it is more robust to compounding error, a common problem in methods that fit one-step decisions. Model-based RL approaches can be understood as consisting of two main components: (i) model learning from observations and (ii) planning (obtaining a policy from the learned model). Here, we will present how our dynamics model can be used to help solve RL problems. In particular, we explain how to perform planning under our model and how to gather data that we feed later to our model for training. Given a reward function r, we can evaluate each transition made by our dynamics model. A planner aims at finding the optimal action sequence that maximizes the long-term return defined as the expected cumulative reward. This can be summarized by the following optimization problem: DISPLAYFORM0 r t ] where the expectation is over trajectories sampled under the model. If we optimize directly on actions, the planner may output a sequence of actions that induces a different observation-action distribution than seen during training and end up in regions where the model may capture poorly the environment's dynamics and make prediction errors. This training/test distribution mismatch could in'catastrophic failure', e.g. the planner may output actions that perform well under the model but poorly when executed in the real environment. To ensure that the planner's solution is grounded in the training manifold, we propose to perform planning over latent variables instead of over actions: DISPLAYFORM1 In particular, we use model predictive control (MPC) BID31 as planner in latent space as shown in Alg. 1.Given, an episode of length T, we generate a bunch of sequences starting from the initial observation, We evaluate each sequence based on their cumulative reward and we take the best sequence. Then we pick the k first latent variables z 1:k for the best sequence and we execute k actions a 1:k in the real environment conditioned on the picked latent variables. Now, we re-plan again by following the same steps described above starting at the last observation of the generated segment. Note that for an episode of length T, we re-plan only T /k times because we generate a sequence of k actions after each plan. Now we turn out to our approach to collect data useful for model training. So far, we assumed that our training trajectories are given and fixed. As a consequence, the learned model capture only the training distribution and relying on this model for planning will compute poor actions. Therefore, we need to consider an exploration strategy for data generating. 
A naive approach would be to collect data under random policy that picks uniformly random actions. This random exploration is often inefficient in term of sample complexity. It usually wastes a lot of time in already well understood regions in the environment while other regions may be still poorly explored. A more directed exploration strategy consists in collecting trajectories that are not likely under the model distribution. For this purpose, we consider a policy π ω parameterized by ω and we train it to maximize the negative regularized ELBO L in. Specifically, if p πω (o 1:T, a 1:T) denotes the distribution of trajectory (o 1:T, a 1:T) induced by π ω, we consider the following optimization problem: DISPLAYFORM0 The above problem can be solved using any policy gradient method, such as proximal policy optimization PPO BID44, with negative regularized ELBO as a reward per trajectory. The overall algorithm is described in Alg. 2. We essentially obtain a high rewarding trajectory by performs Model Predictive Control (MPC) at every k-steps. We then use the exploration policy π ω to sample trajectories that are adjacent to the high-rewarding one obtained using MPC. The algorithm then uses the sampled trajectories for training the model. Given trained model M, Reward function R for times t ∈ {1, ..., T /k} do 1. Generate m sequences of observation sequences of length T M P C 2. Evaluate reward per sequence and take the best sequence. 3. Save the k first latent variables z 1:k for the best sequence (1 latent per observation) 4. Execute the actions conditioned on z 1:k and observation o 1:k for k steps starting at the last observation of last segment. Initialize replay buffer and the model with data from randomly initialized π ω for iteration i ∈ {1, ..., N} do 1. Execute MPC as in Algorithm 1 2. Run exploration policy starting from a random point on the trajectory visited by MPC 3. Update replay buffer with gathered data 4. Update exploration policy π ω using PPO with rewards as the negative regularized ELBO 5. Train the model using a mixture of newly generated data by π ω and data in the replay buffer 4 RELATED WORK Generative Sequence Models. There's a rich literature of work combining recurrent neural networks with stochastic dynamics BID12 BID29 BID15 BID17 BID16 BID18. works propose a variant of RNNs with stochastic dynamics or state space models, but do not investigate their applicability to model based reinforcement learning. Previous work on learning dynamics models for Atari games have either consider learning deterministic models BID37 BID11 or state space models BID8. These models are usually trained with one step ahead prediction loss or fixed k-step ahead prediction loss. Our work is related in the sense that we use stochastic RNNs where the dynamics are conditioned on latent variables, but we propose to incorporate long term future which, as we demonstrate empirically, improves over these models. In our model, the approximate posterior is conditioned on the state of the backward running RNN, which helps to escape local minima as pointed out by BID24. The idea of using a bidirectional posterior goes back to at least BID3 and has been successfully used by BID24 BID16. The application to learning models for reinforcement learning is novel. Model based RL. Many of these prior methods aim to learn the dynamics model of the environment which can then be used for planning, generating synthetic experience, or policy search BID1 BID40 BID49. 
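Stepping back briefly to the training procedure above (Alg. 2), its outer loop can be summarized schematically as follows. The helpers `run_mpc_episode`, `rollout`, `ppo_update`, and the replay-buffer interface are assumed names rather than the paper's implementation; the essential point is that the exploration policy is rewarded with the negative regularized ELBO of its own trajectories.

```python
def train_with_directed_exploration(model, env, explore_policy, ppo_update,
                                    n_iterations, k, replay_buffer, rng):
    """Sketch of the Alg. 2 style outer loop, with assumed helper interfaces."""
    for _ in range(n_iterations):
        # 1. obtain a high-rewarding trajectory via MPC in latent space (Alg. 1)
        mpc_traj = run_mpc_episode(model, env, k, rng)            # assumed helper
        # 2. exploration rollouts started from a random point on the MPC trajectory
        start = rng.choice(len(mpc_traj))
        explore_trajs = rollout(explore_policy, env, reset_to=mpc_traj[start])
        # 3. store the gathered data
        replay_buffer.extend([mpc_traj] + explore_trajs)
        # 4. reward the exploration policy for trajectories the model explains poorly
        rewards = [-model.regularized_elbo(traj) for traj in explore_trajs]
        ppo_update(explore_policy, explore_trajs, rewards)
        # 5. fit the model on a mixture of fresh and replayed data
        model.fit(replay_buffer.sample_mixture(explore_trajs))
    return model, explore_policy
```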
Improving representations within the context of model-based RL has been studied for value prediction BID38, dimensionality reduction BID36, self-organizing maps BID47, and incentivizing exploration BID48. BID54 introduce Imagination-Augmented Agent which uses rollouts imagined by the dynamics model as inputs to the policy function, by summarizing the outputs of the imagined rollouts with a recurrent neural network. BID8 compare several methods of dynamic modeling and show that state-space models could learn good state representations that could be encoded and fed to the Imagination-Augmented Agent. BID25 provide a computationally efficient way to estimate a variational lower bound to empowerement. As their formulation assumes the availability of a differentiable model to propagate through the transitions, they train a dynamic model using Deep Variational Bayes Filter BID24. BID16. BID21 points out that incorporating long term future by doing Dyna style planning could be useful for model based RL. Here we are interested in learning better representations for the dynamics model using auxiliary losses by predicting the hidden state of the backward running RNN.Auxiliary Losses. Several works have incorporated auxiliary loses which in representations which can generalize. BID39 considered using inverse models, and using the prediction error as a proxy for curiosity. Different works have also considered using loss as a reward which acts as a supervision for reinforcement learning problems BID45. BID23 considered pseudo reward functions which helps to generalize effectively across different Atari games. In this work, we propose to use the auxillary loss for improving the dynamics model in the context of reinforcement learning. Incorporating the Future. Recent works have considered incorporating the future by dynamically computing rollouts across many rollout lengths and using this for improving the policy BID7. BID50 introduced TD(λ), a temporal difference method in which targets from multiple time steps are merged via exponential decay. To the best of our knowledge no prior work has considered incorporating the long term future in the case of stochastic dynamics models for building better models. Many of the model based mentioned above learn global models of the system that are then used for planning, generating synthetic experience, or policy search. These methods require an reliable model and will typically suffer from modeling bias, hence these models are still limited to short horizon prediction in more complex domains BID33. As discussed in Section 3, we study our proposed model under imitation learning and model-based RL. We perform experiments to answer the following questions:1. In the imitation learning setting, how does having access to the future during training help with policy learning? 2. Does our model help to learn a better predictive model of the world? 3. Can our model help in predicting subgoals? 4. In model-based reinforcement learning setting, how does having a better predictive model of the world help for planning and control? First, we consider the imitation learning setting where we have training trajectories generated by an expert at our disposal. Our model is trained as described in Section 2.4. 
We evaluate our model on continuous control tasks in Mujoco and CarRacing environments, as well as a partially observable 2D grid-world environments with subgoals called BabyAI BID10.We compare our model to two baselines for all imitation learning tasks: a recurrent policy, an LSTM that predicts only the action a t given an observation o t, and a recurrent decoder, an LSTM that predicts both action and next observation given an observation. We compare to the recurrent policy to demonstrate the value of modeling future at all and we compare to the recurrent decoder to demon- We show comparison of our method with the baseline methods for HalfCheetah, Reacher and Car Racing tasks. We find that our method is able to achieve higher reward faster than baseline methods and is more stable.strate the value of modeling long-term future trajectories (as opposite to single-step observation prediction. For all tasks, we take high-dimensional rendered image as input (compared to lowdimensional state vector). All models are trained on 10k expert trajectories and hyper parameters used are described in Section 8.1 appendix. Mujoco tasks. We evaluate the models on Reacher and HalfCheetah. We take rendered images as inputs for both tasks and we compare to recurrent policy and recurrent decoder baselines. The performance in terms of test rewards are shown in FIG1. Our model significantly and consistently outperforms both baselines for both Half Cheetah and Reacher. Car Racing task. The Car Racing task BID28 ) is a continuous control task, details for experimental setup can be found in appendix. The expert is trained using methods in BID19. The model's performance compared to the baseline is shown in FIG1. Our model both achieves a higher reward and is more stable in terms of test performance compared to both the recurrent policy and recurrent decoder. BabyAI PickUnlock Task We evaluate on the PickUnlock task on the BabyAI platform BID10. The BabyAI platform is a partially observable (POMDP) 2D GridWorld with subgoals and language instructions for each task. We remove the language instructions since language-understanding is not the focus of this paper. The PickUnlock task consists of 2 rooms separated by a wall with a key, there is a key in the left room and a target in the right room. The agent always starts in the left room and needs to first find the key, use the key to unlock the door to go into the next room to reach to the goal. The agent receives a reward of 1 for completing the task under a fixed number of steps and gets a small punishment for taking too many steps for completing the task. Our model consistently achieves higher rewards compared to the recurrent policy baseline as shown in Fig. 3. One way to check if the model learns a better generative model of the world is to evaluate it on longhorizon video prediction. We evaluate the model in the CarRacing environment BID28. We evaluate the likelihood of these observations under the models trained in Section 5.1 on 1000 test trajectories generated by the expert trained using BID19. Our method significantly outperforms the recurrent decoder by achieving a negative log-likelihood (NLL) of −526.0 whereas the recurrent decoder achieves an NLL of −352.8. We also generate images (videos) from the model by doing a 15-step rollout and the images. The video can be found at the anonymous link for our method and recurrent decoder. Note that the samples are random and not cherry-picked. 
Visually, our method seems to generate more coherent and complicated scenes, the entire road with some curves (not just a straight line) is generated. In comparison, the recurrent decoder turns to generated non-complete road (with parts of it missing) and the road generated is often straight with no curves or complications. Intuitively, a model should become sharply better at predicting the future (corresponding to a steep reduction in prediction loss) when it observes and could easily reach a'marker' corresponding to Figure 3: Model-Based RL. We show our comparison of our methods with baseline methods including SeCTAr for BabyAI PickUnlock task and Wheeled locomotion task with sparse rewards. We observe that our baseline achieves higher rewards than the corresponding baselines.a subgoal towards the final goal. We study this for the BabyAI task that contains natural subgoals such as locating the key, getting the key, opening the door, and finding the target in the next room. Experimentally, we do indeed observe that there is sharp decrease in prediction error as the agent locates a subgoal. We also observe that there is an increase in prediction cost when it has a difficulty locating the next subgoal (no key or goal in sight). Qualitative examples of this behavior are shown in Appendix Section 8.2. We evaluate our model on the wheeled locomotion tasks as in BID13 with sparse rewards. The agent is given a reward for every third goal it reached. we compare our model to the recently proposed Sectar model BID13. We outperform the Sectar model, which itself outperforms many other baselines such as Actor-Critic (A3C), TRPO BID43, Option Critic BID2, FeUdal BID53, VIME BID22. We use the same sets of hyperparameters as in BID13. In this work we considered the challenge of model learning in model-based RL. We showed how to train, from raw high-dimensional observations, a latent-variable model that is robust to compounding error. The key insight in our approach involve forcing our latent variables to account for long-term future information. We explain how we use the model for efficient planning and exploration. Through experiments in various tasks, we demonstrate the benefits of such a model to provide sensible long-term predictions and therefore outperform baseline methods. Mujoco Tasks We evaluate on 2 Mujoco tasks BID52, the Reacher and the Half Cheetah task BID52. The Reacher tasks is an object manipulation task consist of manipulating a 7-DoF robotic arm to reach the goal, the agent is rewarded for the number of objects it reaches within a fixed number of steps. The HalfCheetah task is continuous control task where the agent is awarded for the distance the robots moves. For both tasks, the experts are trained using Trust Region Policy Optimization (TRPO) BID43. We generate 10k expert trajectories for training the student model, all models are trained for 50 epochs. For the HalfCheetah task, we chunk the trajectory (1000 timesteps) into 4 chunks of length 250 to save computation time. Car Racing task The Car Racing task BID28 ) is a continuous control task where each episode contains randomly generated trials. The agent (car) is rewarded for visiting as many tiles as possible in the least amount of time possible. The expert is trained using methods in BID19. We generate 10k trajectories from the expert. For trajectories of length over 1000, we take the first 1000 steps. Similarly to Section 5.1, we chunk the 1000 steps trajectory into 4 chunks of 250 for computation purposes. 
BabyAI The BabyAI environment is a POMDP 2D Minigrid envorinment BID10 with multiple tasks. For our experiments, we use the PickupUnlock task consistent of 2 rooms, a key, an object to pick up and a door in between the rooms. The agent starts off in the left room where it needs to find a key, it then needs to take the key to the door to unlock the next room, after which, the agent will move into the next room and find the object that it needs to pick up. The rooms can be of different sizes and the difficulty increases as the size of the room increases. We train all our models on room of size 15. It is not trivial to train up a reinforcement learning expert on the PickupUnlock task on room size of 15. We use curriculum learning with PPO BID44 for training our experts. We start with a room size of 6 and increase the room size by 2 at each level of curriculum learning. We train the LSTM baseline and our model both using imitation learning. The training data are 10k trajectories generated from the expert model. We evaluate the both baseline and our model every 100 iterations on the real test environment (BabyAI environment) and we report the reward per episode. Experiments are run 5 times with different random seeds and we report the average of the 5 runs. Wheeled locomotion We use the Wheeled locomotion with sparse rewards environment from . The robot is presented with multiple goals and must move sequentially in order to reach each reward. The agent obtains a reward for every 3 goal it reaches and hence this is a task with sparse rewards. We follow similar setup to BID13, the number of explored trajectories for MPC is 2048, MPC re-plans at every 19 steps. However, different from , we sample latent variables from our sequential prior which depends on the summary of the past events h t. This is in comparison to BID13, where the prior of the latent variables are fixed. Experiments are run 3 times and average of the 3 runs are reported. Our model has an auxiliary cost associated with predicting the long term future. Intuitively, the model is better at predicting the long term future when there is more certainty about the future. Let's consider a setting where the task is in a POMDP environment that has multiple subgoals, for example the BabyAI environment BID10 we used earlier. Intuitively, the agent or model should be more certain about the long term future when it sees a subgoal and knows how to get there and less certain if it does not have the next subgoal in sight. We test our hypothesis on tasks in the 5.1 environment. We took our model trained using imitation learning as in section 5.1. Here, we give one example of how our model trained using imitation learning in section 5.1 behaves in real environment and how this corresponds to increase or decrease in auxiliary cost (uncertainty) described in 2.3. In FIG2, we show how our model behaves in BaybyAI environment. The last figure in 4 plots the auxiliary cost at each step. Overall, the auxiliary cost decreases as the agent moves closer to the goal and sometimes there is a sharp drop in the auxiliary cost when the agent sees the subgoal and the goal is aligned with the agent's path. An example reflecting this scenario is the sharp drop in auxiliary cost from step 6 to step 7, where the agent's path changed to be aligned with the door.
Incorporating latent variables that encode future content into the model improves long-term prediction accuracy, which is critical for better planning in model-based RL.
1,792
scitldr
Detecting anomalies is of growing importance for various industrial applications and mission-critical infrastructures, including satellite systems. Although there have been several studies in detecting anomalies based on rule-based or machine learning-based approaches for satellite systems, a tensor-based decomposition method has not been extensively explored for anomaly detection. In this work, we introduce an Integrative Tensor-based Anomaly Detection (ITAD) framework to detect anomalies in a satellite system. Because of the high risk and cost, detecting anomalies in a satellite system is crucial. We construct 3rd-order tensors with telemetry data collected from Korea Multi-Purpose Satellite-2 (KOMPSAT-2) and calculate the anomaly score using one of the component matrices obtained by applying CANDECOMP/PARAFAC decomposition to detect anomalies. Our shows that our tensor-based approach can be effective in achieving higher accuracy and reducing false positives in detecting anomalies as compared to other existing approaches. Due to the high maintenance cost as well as extreme risk in space, detecting anomalies in a satellite system is critical. However, anomaly detection in a satellite system is challenging for several reasons. First, anomalies occur due to complex system interactions from various factors inside and outside a satellite system. For example, a sensor in one subsystem in a satellite system is often connected to several other types of sensors or resources in other subsystem modules. Each sensor measurement is encapsulated as telemetry and downlinked to the ground station. In order to identify anomalies, it is crucial to compare and understand not just one single telemetry but several telemetries as a whole. However, most of the previous studies (; ;) on detecting anomalies in satellite systems have primarily focused on analyzing individual telemetry. This can lead to a high false positives rate, because some instantaneous glitches may not be actual anomalies, but just trivial outliers . Additionally, false positives can be costly, requiring much manual effort from operators to investigate and determine whether they are anomalies . To reduce the false positives, analyzing a set of multiple telemetries as a whole can be more effective to determine true anomalies in a complex system. To the best of our knowledge, this integrated approach for a satellite system has not been studied extensively in the past. In order to address these challenges, we propose an Integrative Tensor-based Anomaly Detection (ITAD) framework for a satellite system, where a tensor can effectively capture a set of high dimensional data. Specifically, we construct a 3rd-order tensor for entire telemetries in one subsystem and decompose it into component matrices, which captures the characteristics of multiple telemetries as a whole to detect anomalies. We then conduct a cluster analysis on one component matrix in a decomposed tensor and calculate the anomaly score based on the distance between each telemetry sample and its cluster centroid. Finally, we used the dynamic thresholding method to detect anomalies; the dynamic thresholding method changes the detection threshold value over time instead of using a fixed value for the entire dataset. We performed experiments on our approach with a subset of real telemetries from the KOMPSAT-2 satellite, and verify that our approach can detect actual anomalies effectively and reduce false positives significantly, compared to other approaches. 
The fundamental concept of a tensor and the algorithm for tensor decomposition are described in Appendix A. In this section, we mainly provide an overview of the research that is directly relevant to our work. Tensor-based Anomaly Detection: Tensor decomposition for anomaly detection has been developed by. They proposed multiway principal components analysis (MPCA) using tensor decomposition to monitor the progress of the batch process in the multivariate trajectory data. They successfully extracted information from a database of batches and captured the multivariate data into a few matrices, which can represent the original data sufficiently, while reducing the space. The modern application of tensor-based anomaly detection has been deployed in many different areas such as neuroscience, environmental monitoring, video surveillance, network security, and remote sensing. The tensor-based anomaly detection method can detect anomalies in an unsupervised manner, where the score plot-based model is the most widely used tensor decomposition method in anomaly detection without labels. Component matrices from tensor decomposition are utilized to calculate the score plots to detect the anomalies. According to the characteristics of the tensor dataset, the score plot can be 1-dimensional (; ;, 2-dimensional (;, or 3-dimensional . In our work, we adopt a score-based method to detect anomalies using one of the component matrices, which has the comprehensive characteristics of all telemetries simultaneously along the timeline. Anomaly Detection for Satellite Operation: Generally, the Out-Of-Limit (OOL) is one of the most widely used methods to detect an anomaly, where OOL can define a nominal range with the lower and upper thresholds. So far, detecting anomalies for satellite systems have primarily used the OOL along with additional methods such as dimensionality reduction algorithm (Schölkopf et al., 1998; ;), nearest neighbors (; ;), and clustering algorithm (; ;). dent weather and marine observation, and communication relay. Korea Multi-Purpose Satellite 2 (KOMPSAT-2) (KOMPSAT-2, 2019) is one of the national monitoring satellites with high-resolution optical observation. It was launched in 2006 and had been operating for 13 years. KOMPSAT-2 satellite generates more than 3,000 different types of telemetries from various subsystems (; eoPortal, 2019). In this work, we collected 88 types of different telemetries with more than 43,716 telemetry samples from the KOMPSAT-2 satellite for 10 months. These telemetries are categorized into 7 different subsystems according to their characteristics. In Table 1, we present the number of different telemetry types used from each subsystem collected from May 2013 to February 2014 for 10 months. Also, the collected data size for each month is shown in GB. In this section, we explain the overall process of our new approach, an Integrative Tensor-based Anomaly Detection (ITAD) framework for the satellite system and describe its details in Fig. 1. Since many telemetries are measured and sampled at a different time interval (from every second to a few minutes), there are many missing or unmeasured sensor values for each timestamp. To address this challenge, we apply linear interpolation, which populates a missing value with a linearly increasing value between two consecutive data points. After linear interpolation, we normalize each telemetry value (feature) individually. 
Each value such as temperature or power is measured on a different scale; min-max normalization is used to normalize all values into the range between 0 and 1. The minimum value is mapped to 0, and the maximum value is mapped to 1. After linear interpolation, the timestamp is recorded every 1 second in the raw dataset. However, most of the values do not change in a short period, and many telemetry values have the same value for a few minutes or much longer. Therefore, we assumed that it might be practical to compress several timestamps (rows) into a single row. Also, as we add many interpolated values from the linear interpolation step, the size of the dataset increases by more than three times (9.52GB to 29.62GB) after interpolation. Therefore, the compression provides the benefit in computational efficiency while maintaining the critical information of the data. Feature Extraction: Using the above compression method, we use different statistical methods to extract 8 features for each telemetry time series T i: mean (x), standard deviation (s), skewness (skew), kurtosis (kurt), minimum (min), maximum (max), energy (E), and average crossing (x crossing). Energy and average crossing are calculated from E = original data and we obtain the final feature vector V {n,Ti} generated by concatenating the different features as shown in equation 1 as follows: where n is the number of feature vector samples. As a , we can reduce the size of the dataset significantly from the interpolated data (0.5 Gb << 29.62 GB) and reconstitute the telemetry dataset into the matrix form consisting of feature vectors (at each column) by time samples (at each row) as shown in Fig. 1.(b). Tensor Construction and Decomposition: Tensor decomposition can effectively handle high dimensional time series. Therefore, we construct a 3rd-order telemetry tensor consisting of time × f eature vector × telemetry and decompose a tensor using the CANDECOMP/PARAFAC (CP) decomposition in Fig. 1. (c), which is one of the most widely used tensor decomposition methods, as shown in equation 2. After CP decomposition, we obtain three component matrices, A, B, and C as described in Fig. 1. The component matrix A consists of time-to-factor (time × r) describing the comprehensive characteristics of samples from all telemetries at the same point in time using r factors. The component matrix B shows the feature vector-to-factor matrix (r × feature vector) indicating how much each factor influences each feature vector. The final matrix C captures the telemetry-to-factor (telemetry × r) matrix to characterize how much each factor affects each telemetry. where R is the rank, λ is the weight, a r ∈ R I1, b r ∈ R I2, and c r ∈ R I3, for r=1,..., R . In order to find the optimal solutions of CP decomposition, we utilize the alternating least squares (ALS) updating method, which iteratively optimizes one component while leaving the others fixed. Given the 3rd-order tensor, it first fixes the component matrices B and C to obtain the solution for the component matrix A. Then, ALS fixes A and C to find B, and lastly fixes A and B to solve for C as follows:, where X denotes the mode-1 unfolding of tensor X into a matrix, and X and X indicates the mode-2 and mode-3 unfolding, respectively. Moreover, denotes the Khatri-Rao product , which is the "matching columnwise" Kronecker product. It repeats this procedure until it reaches the specific convergence criteria or maximum iteration. 
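The preprocessing and decomposition pipeline described above can be sketched as follows. The window length, telemetry ordering, and the use of tensorly's `parafac` (a CP-ALS implementation) follow the description in the text, but the exact configuration and column order are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from tensorly.decomposition import parafac

def feature_vector(window):
    """8 statistics for one telemetry window (1-D array of interpolated, normalized samples)."""
    x = np.asarray(window, dtype=float)
    energy = np.sum(x ** 2)
    crossings = np.sum(np.diff(np.sign(x - x.mean())) != 0)   # average-crossing count
    return np.array([x.mean(), x.std(), skew(x), kurtosis(x),
                     x.min(), x.max(), energy, crossings])

def build_telemetry_tensor(telemetries, window_len):
    """
    telemetries: dict {name: 1-D normalized series of equal length}.
    Returns a 3rd-order tensor of shape (time windows, 8 features, n_telemetries).
    """
    names = sorted(telemetries)
    n_windows = len(next(iter(telemetries.values()))) // window_len
    X = np.zeros((n_windows, 8, len(names)))
    for j, name in enumerate(names):
        series = telemetries[name]
        for i in range(n_windows):
            X[i, :, j] = feature_vector(series[i * window_len:(i + 1) * window_len])
    return X

# CP decomposition with ALS (stopping at 100 iterations or tolerance 1e-8, as in the text):
# X = build_telemetry_tensor(subsystem_telemetries, window_len)
# weights, (A, B, C) = parafac(X, rank=r, n_iter_max=100, tol=1e-8)
```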
Selecting an Optimal Rank r: Since we aim to obtain the component matrices from the decomposition, it is critical to choose an optimal size of the factor (rank) r that can represent the original telemetry tensor. However, there is no general straightforward algorithm to select the optimal r. Instead, we measure the reconstruction error as the difference between the original tensor X and the approximated tensorX. From the given 3rd-order tensor X ∈ R I1×I2×I3, we use the Frobenius norm X −X F to calculate the reconstruction error, whereX is computed as the outer product (•) of component matrices A, B, and C, as shown in equation 2. We can compute the reconstruction error as follows: J×R, and c kr ∈ R K×R. The smaller the reconstruction error, the closer the approximate tensor is to the original tensor. We find the reconstruction error by increasing the rank r from 2 and choosing the smallest r by minimizing X −X F, until when the approximated tensor can reconstruct more than 90% of the original telemetry tensor. We present an example of selecting an optimal r from the reconstruction error in Appendix B. Clustering Analysis: The original telemetry data is highly unbalanced, where most elements are normal, with only a few anomalies. This is one of the challenging issues in an anomaly detection problem. Additionally, normal telemetry data might exhibit certain repeating patterns, as many satellite commanding sequences are not drastically different from one another during nominal operation. Therefore, we apply a clustering method for the component matrix A (T ime × r), in order to group major patterns of telemetry samples, as shown in Fig. 1.(e), such that they represent normal data behavior. The primary reason we chose the matrix A among the three-component matrices is that we ultimately aim to identify the time, at which the anomaly occurs, and the component matrix A represents the key information and comprehensive characteristics of all telemetries at each time instance across different subsystems. Note that the clustering is applied row-wise since the component matrix A has the time information at its row. However, clustering is challenging because not only a telemetry sample is an 8-dimensional vector, but also the original tensor dataset consists of different types of telemetries. Therefore, we extensively experimented with several clustering algorithms such as Gaussian Mixture Model (GMM), Isolation Forest (IF), k-means, and One-class Support Vector Machine (OCSVM) to compare and determine the best approach. Clustering methods other than the k-means algorithm showed too many false positives. Hence, we only use the k-means algorithm in our work. As it is required to set the number of clusters when applying k-means clustering, we use silhouette analysis to determine an optimal k. The silhouette method has coefficients ranging from -1 to 1, where the positive value indicates that the samples are correctly assigned and away from the neighboring clusters, and the negative value represents samples that might be assigned to the wrong cluster. The zero value indicates that the samples are on the decision boundary of two neighboring clusters. We varied k from 2 to 10 and chose the value when the silhouette coefficient is the largest positive value, as shown in Appendix B. Note: Tensor decomposition can be viewed as a clustering mechanism. 
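Before expanding on that note, the rank and cluster-count selection just described can be sketched with tensorly and scikit-learn. Measuring the 90% reconstruction criterion as a relative Frobenius error is an assumption about the exact formula; the silhouette-based choice of k follows the text.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def select_rank(X, max_rank=64, target=0.9):
    """Smallest rank r whose CP approximation recovers at least `target` of the tensor."""
    norm_X = np.linalg.norm(X)
    best = None
    for r in range(2, max_rank + 1):
        cp = parafac(X, rank=r, n_iter_max=100, tol=1e-8)
        rel_err = np.linalg.norm(X - tl.cp_to_tensor(cp)) / norm_X
        best = (r, cp)
        if rel_err <= 1.0 - target:        # i.e. reconstruction error below 10%
            break
    return best

def select_k(A, k_range=range(2, 11), seed=0):
    """Silhouette-based choice of k for row-wise clustering of the time-factor matrix A."""
    best_k, best_score = None, -1.0
    for k in k_range:
        labels = KMeans(n_clusters=k, random_state=seed).fit_predict(A)
        score = silhouette_score(A, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```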
Generally, the component matrix A (time × f actor), which accounts for comprehensive characteristics of all telemetries in the same subsystem, can serve as an indicator for different column-wise clusters. In our research, however, we need a row-wise clustering for calculating an anomaly score by time, since our goal is to identify the time instance when an anomaly occurs. That is the reason we use another kmeans clustering in addition to the tensor decomposition to capture the distance between normal and abnormal data. Calculating Anomaly Score: If a time sample is normal, it might belong to one of the clusters we constructed from the previous step. If a time sample is anomalous, then it would exhibit a far different pattern from normal patterns, and it would not belong to any clusters. To quantify this, we calculate the Euclidean distance d(s, c) = 2 between each time sample s = (s 1, s 2, ..., s n) and the centroid c = (c 1, c 2, ..., c n) of its nearest cluster. A short Euclidean distance means that a value is similar to a normal value and pattern, and a long distance indicates that the value is far different from major clusters and normal patterns. Therefore, we can define this Euclidean distance as an anomaly score, as shown in Fig. 1.(f), where anomalies will typically have high anomaly scores. Data-Driven Dynamic Thresholding: In order to derive anomalies from the anomaly score, it is required to set a certain threshold. Although a fixed threshold is the simplest way, it cannot detect the contextual anomalies, which can be below the threshold point. In the same vein, values that are just above the fixed threshold can be normal, but they can be detected as anomalies with a fixed threshold method. Additionally, a fixed threshold approach cannot adapt to various changing conditions and can produce too many false positives. The example of the problems with a fixed threshold and high false positives is illustrated in Appendix C. To address the problem with a fixed threshold, we develop the data-driven dynamic thresholding method, where a threshold value can be dynamically adjusted and changed adaptively in differing contexts. We first choose the time window w, which is defined as the number of previous anomaly score points to compute the current anomaly score. Then, we calculate the mean µ and standard deviation σ of the data values in the time window w. Finally, based on the confidence interval distribution, we determine an anomaly, when the anomaly score is over m × σ apart from the µ, denoted by σ = (X i − µ) 2 and µ = X i /n, where i = (n − w),..., n (the number of data points in w) and m (the coefficient parameter) ≥ 1. This measures how far (m × σ) is apart from the mean µ of values within the window w. Tensor Size: In this experiment, we use 88 types of telemetries with more than 43,716 telemetry samples, where each telemetry sample has a feature vector of 8 different statistical quantities, as discussed in Section 3. With these telemetries, we construct seven 3rd-order telemetry tensors, and the dimensions and the size of each tensor are provided as follows: time × f eature vector × telemetry, and 43, 716 × 8 × number of telemetries. We summarize the detailed size for each subsystem in Table 2. Convergence Criteria: Next, we decompose each 3rd-order telemetry tensor into component matrices, A, B, and C, using CANDECOMP/PARAFAC (CP) decomposition with the alternating least squares (ALS) updating method. 
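As an aside before the experimental details continue, the anomaly score and data-driven dynamic threshold defined earlier can be sketched as follows. The handling of the first w "warm-up" steps is an assumption not specified in the text.

```python
import numpy as np
from sklearn.cluster import KMeans

def anomaly_scores(A, k, seed=0):
    """Distance of each time sample (row of the time-factor matrix A) to its nearest centroid."""
    km = KMeans(n_clusters=k, random_state=seed).fit(A)
    assigned_centroids = km.cluster_centers_[km.labels_]
    return np.linalg.norm(A - assigned_centroids, axis=1)

def dynamic_threshold_flags(scores, window=108, m=6.0):
    """
    Flag time step t as anomalous if its score exceeds mu + m * sigma, where mu and
    sigma are computed over the previous `window` anomaly scores.
    """
    flags = np.zeros(len(scores), dtype=bool)
    for t in range(window, len(scores)):
        past = scores[t - window:t]
        mu, sigma = past.mean(), past.std()
        flags[t] = scores[t] > mu + m * sigma
    return flags
```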
Updating will be stopped when the iteration reaches 100, or when the convergence tolerance is less than 10 −8 in this experiment. Optimal Rank: The reconstruction errors are calculated from increasing the rank r from 2. Then, we choose the smallest r, which minimizes X −X F, until when the approximate tensor can reconstruct more than 90% of the original telemetry tensor. The of the optimal r is presented in the third column in Table 2 for each subsystem. As shown in Table 2, r produces different ranges of values from 11 to 29 because of the different telemetry values, characteristics, and structures in each subsystem. k-means Clustering: Among the three-component matrices from decomposition using the optimal r, we apply k-means clustering to the component matrix A consisting of time-to-factor information. Since our goal is to detect anomaly points over different time values, we chose the component matrix A, which has the time information. Additionally, in order to determine an optimal k, we apply silhouette analysis for each subsystem. All are presented in the last column in Table 2. We use the dynamic thresholding method, which can dynamically adjust the threshold value based on environmental change. However, there are clear trade-offs between different dynamic thresholding parameters, window size w and coefficient parameter m. To empirically evaluate the trade-offs and fine-tune the best parameters, we conducted various experiments by changing the window size from 9 to 576 to determine the optimal value of w and m for each subsystem. Since we sampled the dataset corresponding to 10 minutes into one data point, window size 9 translates to 90 minutes time period, which corresponds to the single activity cycle of the satellite operation. We can observe that the best performance is achieved when w is either 108 or 198 for all subsystems. We also empirically found that m = 6 typically ed in the best performance for any window size. An example of fine-tuning for dynamic thresholding is described in Appendix C. Comparisons with Other Methods: We compared our approach with four other well-known anomaly detection baselines developed for satellite systems. First, we compare ours with One-Class SVM (OCSVM) after feature extraction following the method from CNES's NOS-TRADAMUS is also compared. We present the performance of each anomaly detection method in Table 4. The domain experts label anomalies at KARI, and the number of them is shown in the second column in Table 3. If a detected point is an anomaly, it is counted as a true positive (TP). Otherwise, we count it as a false positive (FP). The performance in TP and FP for each detection method is provided in Table 3. Note: Even with 10 months of data, we do not have many anomalies. If there are many anomalies, the satellite will not function properly. Specifically, as requested by satellite operators, our objective is to detect the anomalous events accurately, while reducing false positives in a highly unbalanced dataset. Total 3 2 9 2 25 2 18 2 18 2 1 All methods show the same detection performance in TP while detecting anomalies from different subsystems. However, our ITAD and MPPCACD detect one anomaly for subsystem1 and another one for subsystem2. OCSVM detects all two anomalies from subsystem2, while it misses an anomaly for subsystem1. On the other hand, the ITAD framework outperforms in FP compared to others. 
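As a side note on the (w, m) fine-tuning mentioned above, the search amounts to a small grid over window sizes and coefficients evaluated against the labeled anomalies. The candidate grids, the matching tolerance, and the reuse of `dynamic_threshold_flags` from the earlier sketch are illustrative assumptions.

```python
import numpy as np
from itertools import product

def sweep_threshold_params(scores, anomaly_times, windows=(9, 108, 198, 576),
                           ms=(4.0, 5.0, 6.0), tolerance=0):
    """Grid-search the dynamic-threshold window w and coefficient m; returns (TP, FP) per setting."""
    results = {}
    for w, m in product(windows, ms):
        flags = dynamic_threshold_flags(scores, window=w, m=m)   # from the earlier sketch
        detected = np.flatnonzero(flags)
        tp = sum(any(abs(d - t) <= tolerance for d in detected) for t in anomaly_times)
        fp = sum(all(abs(d - t) > tolerance for t in anomaly_times) for d in detected)
        results[(w, m)] = (tp, fp)
    return results
```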
While SCLSTM, OCSVM, and IF produces a total of 25, 18, and 18 false positives, respectively, our ITAD approach reduces the false positives to 1. MPPCACD shows the second-best performance among other approaches in FP, but it could not reduce the FPs enough compared to the ITAD approach. To compare the overall performance, we present the precision, recall, and F1 score in Table 4. The ITAD framework achieves the highest precision (66.67%) than other methods, because of its high performance in FP. On the other hand, the recall performance was the same across all approaches. Overall, ITAD outperforms all other methods in F1 score by more than two-fold (66.67% vs. 28.57%). Analysis: Most of the anomaly detection methods except our approach generate high FPs because they cannot account for multiple telemetries simultaneously. When a temporary glitch is detected from only one telemetry in a subsystem, it is highly likely that it is a trivial outlier, not an actual anomaly. (Note that telemetries collected from adjacent sensors in the same environment are regarded as one case since they are highly correlated with each other). In the case of subsystem4, as shown in Fig. 2.(a), the 1st telemetry, TDCSUAT, has a temporal glitch on August 17th, whereas all other different types of telemetries such as MMQTXON, TCCU2AT, and TXPOND2T do not have any glitch at the same timestamp. This glitch is confirmed as a trivial outlier, not an actual anomaly. Our Integrative Tensor-based Anomaly Detection (ITAD) approach does not report this glitch as an anomaly, whereas other detection methods such as MPPCACD, SCLSTM, OCSVM, and IF record it as an anomaly. In the case of subsystem3, subsystem4, and subsystem5, there are some temporal glitches, but no actual anomaly. The ITAD framework is the only method that does not report this type of trivial outliers, as shown in FPs of subsystem4 in Table 3. When there are temporary glitches in multiple telemetries at the same timestamp as shown in subsystem2 in Fig. 2.(b), ITAD reports anomalies. For example, there are temporary glitches in two different types of telemetries (AFFS1OMI and ASFF1OPI) at the same timestamp. This glitch is an actual anomaly, and the ITAD approach accurately reports it as an anomaly. Since the ITAD method can take and process multiple telemetries simultaneously, it significantly reduces false positives caused by the other methods based on a single-variate anomaly analysis. These demonstrate the effectiveness of integrative analysis for multiple telemetries in subsystems using a tensor-based method for satellite monitoring. Determining an appropriate rank-size r is an NP-complete problem (Håstad, 1990), and there is no general algorithm to find it. To choose r, we exploit the reconstruction error, which is proposed in the original CP research . However, there is a possibility to suffer from overfactoring and ultimately failing to obtain an optimal solution from this method. To address this possibility, we plan to apply the Core Consistency Diagnostic (CORCONDIA) proposed by for determining the optimal rank r for our future work. We believe that the CORCONDIA method, which assesses the core consistency and measures the similarity between the core array and theoretical super-diagonal array, can yield more accurate . Even though we use 10 months of real telemetry dataset, we do not have many anomalies, which is a realistic scenario. Otherwise, i.e. if there are many anomalous events, most mission-critical systems would fail very quickly. 
In the presence of a small number of anomalies, the main focus of our work is to reduce false positives to assist satellite operators to determine the true anomalies, as requested by KARI operators. However, we agree that because of a small number of anomalies, current precision, and recall metrics would be very sensitive to anomaly events. Missing one anomaly would in a 33% drop in performance. To partially address this issue, we are currently in the process of collecting more datasets with anomalies within a longer and plan to evaluate our tensor-based system with datasets with more anomalies. Also, we believe we need to develop a better performance metric, which can capture the performance with a small number of anomalies. Lastly, we are in the process of deploying our tensor-based anomaly detection method to THE KOMPSAT-2 satellite in the spring of 2020. We plan to incorporate not only 88 telemetries we experimented in this research, but also other types of telemetries and subsystems to evaluate our integrative anomaly detection method. In this work, we proposed an Integrative Tensor-based Anomaly Detection framework (ITAD) to detect anomalies using the KOMPSAT-2 satellite telemetry dataset, where our approach can analyze multiple telemetries simultaneously to detect anomalies. Our ITAD achieves higher performance in precision and F1 score compared to other approaches. We also demonstrate that the ITAD reduces the false positives significantly. This reduction in FPs is because it can distinguish actual anomalies from trivial outliers by incorporating information from other telemetries at the same time. In the future, we plan to improve our algorithm by applying the CORCONDIA method to avoid overfactoring and find an optimal rank r and incorporate and evaluate datasets with more anomalies. We believe our work laid the first grounds using an integrated tensor-based detection mechanism for space anomaly detection. Moreover, the demonstrates that our proposed method can be applicable in a variety of multivariate time-series anomaly detection scenarios, which require low false positives as well as high accuracy. A TENSOR DECOMPOSITION A tensor is a multi-dimensional array , where geometric vectors and scalars can be considered as the simplest tensors. A 1st-order tensor is a vector, a 2nd-order tensor is a matrix, and a 3rd-order tensor can be represented as a cube, which has three vector spaces. In general, N th-order tensor X ∈ R I1×I2×···×I N is represented by the outer product • of N vector spaces as follows: where I N is the N th dimension and a N is the vector in N th dim. A rank in a tensor indicates the number of components in the decomposed matrices and every tensor can be expressed as a sum of a rank-1 tensor (; 1989). Due to its ability to express multi-modality, it is effective to handle such dataset with multi-modal aspects. Expressing a tensor as a sum of a rank-1 tensor was first proposed by Hitchcock (1927; 1928). In 1970, proposed a canonical decomposition (CANDECOMP) and suggested parallel factor decomposition (PARAFAC), which is an extended version of 2-way factorization for higher-order data. Since CANDECOMP and PARAFAC have a similar concept, the CANDECOMP/PARAFAC (CP) decomposition formulated by has been widely used. It decomposes Nth order data into a linear sum of rank-1 tensor as described in Fig. 3.(a) and a 3rd-order tensor can be decomposed into three component matrices A, B, and C. pioneered the use of CP model to extract information from a chemical system. 
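The sum-of-rank-1-terms form described in this appendix can be checked in a few lines of NumPy; the dimensions and rank below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
R, I, J, K = 3, 4, 5, 6
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# X = sum_r a_r o b_r o c_r : a third-order tensor built from R rank-1 terms
X = np.einsum('ir,jr,kr->ijk', A, B, C)
assert X.shape == (I, J, K)
```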
contributed to developing practical description and application of tensors. And Acar et al. (2005; 2006) was the first to apply a tensor decomposition to data mining. They analyzed online chatroom data to understand how social groups evolved in cyberspace. They constructed a 3rd-order tensor with user × keyword × time spaces. The Tucker decomposition is another commonly-used tensor decomposition, which was first introduced by and refined later (; 1966). As shown in Fig. 3.(b), the Tucker decomposition has a core tensor, which can be viewed as a compression of the original tensor X. In the case of the 3rd-order tensor, it decomposes a tensor into a core tensor and three matrices with different ranks (p = g = r) as shown in Fig. 3.(b). In fact, CP decomposition can be thought as a special case of the Tucker decomposition, where all ranks are identical (p = g = r) and the core tensor is super-diagonal having ones as diagonal values as shown in Fig. 3 The benefit of using tensor decomposition is that it is one of the most effective unsupervised methods for extracting characteristics of Nth-dimensional data. Traditionally, it has been required to rearrange the dimension into a 2nd-order matrix to factorize high dimensional data. However, tensor decomposition can offer more accurate by keeping the N th-order structure of data as shown in other research (; ;). We use silhouette analysis to determine an optimal k for k-means clustering. We varied k from 2 to 10, and chose the value when the silhouette coefficient is the largest positive value as shown in the example of Subsystem2 in Fig. 4. Silhouette Score The number of clusters Figure 4: Selecting the optimal k for Subsystem2 using the silhouette scores by changing the number of clusters. In order to select the optimal r, we find the reconstruction error by increasing the rank r from 2 and choosing the smallest r by minimizing X −X F, until when the approximated tensor can reconstruct more than 90% of the original telemetry tensor as shown in the example of Subsystem7 in Fig. 5. The number of rank r Figure 5: Selecting the optimal r for Subsystem7 at the point when the reconstruction error is less than 10%. C FIXED AND DYNAMIC THRESHOLDING A fixed threshold approach cannot adapt to changing conditions, and can produce too many false positives. An example of the problems with a fixed threshold and high false positives is illustrated in Fig. 6. Figure 6: Example of high false alarms caused by fixed threshold for telemetry AFSS1OMI of subsystem2. The X-axis indicates the time, and the Y-axis is the anomaly score value. The blue line indicates the anomaly score and the red line represents the dynamic threshold computed from the formula, µ + m · σ during w. As we can observe from Fig. 7, different number of false positives and true positives can be detected based on different w and m. As shown in Fig. 7.(a) and (b), increasing the window size w from 9 to 198 tends to make the threshold line flatter. In addition, Fig. 7. (a) and (c) show that increasing the coefficient parameter m from 4 to 6 influences the overall distance between the threshold line and the anomaly score line. We can observe the best performance is achieved when w is either 108 or 198 for all subsystems. We also empirically found that m = 6 typically ed in the best performance for any window size.
An Integrative Tensor-based Anomaly Detection (ITAD) framework for a satellite system.
1,793
scitldr
Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL regularized expected reward objective which introduces an additional component, a default policy. Instead of relying on a fixed default policy, we learn it from data. But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster. We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm. We present empirical in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning. Please watch the video demonstrating learned experts and default policies on several continuous control tasks (https://youtu.be/U2qA3llzus8). For many interesting reinforcement learning tasks, good policies exhibit similar behaviors in different contexts, behaviors that need to be modified only slightly or occasionally to account for the specific task at hand or to respond to information becoming available. For example, a simulated humanoid in navigational tasks is usually required to walk -independently of the specific goal it is aiming for. Similarly, an agent in a simulated maze tends to primarily move forward with occasional left/right turns at intersections. This intuition has been explored across multiple fields, from cognitive science (e.g. BID22 to neuroscience and machine learning. For instance, the idea of bounded rationality (e.g. BID46) emphasizes the cost of information processing and the presence of internal computational constraints. This implies that the behavior of an agent minimizes the need to process information, and more generally trades off task reward with computational effort, ing in structured repetitive patterns. Computationally, these ideas can be modeled using tools from information and probability theory (e.g. BID50 BID32 BID47 BID40 BID33 BID49, for instance, via constraints on the channel capacity between past states and future actions in a Markov decision process. In this paper we explore this idea, starting from the KL regularized expected reward objective (e.g. BID51 BID52 BID19 BID36 BID23 BID48, which encourages an agent to trade off expected reward against deviations from a prior or default distribution over trajectories. We explore how this can be used to inject subjective knowledge into the learning problem by using an informative default policy that is learned alongside the agent policy This default policy encodes default behaviours that should be executed in multiple contexts in absence of addi-tional task information and the objective forces the learned policy to be structured in alignment with the default policy. To render this approach effective, we introduce an information asymmetry between the default and agent policies, preventing the default policy from accessing certain information in the state. This prevents the default policy from collapsing to the agent's policy. Instead, the default policy is forced to generalize across a subset of states, implementing a form of default behavior that is valid in the absence of the missing information, and thereby exerting pressure that encourages sharing of behavior across different parts of the state space. 
FIG0 illustrates the proposed setup, with asymmetry imposed by hiding parts of the state from the default policy. We investigate the proposed approach empirically on a variety of challenging problems including both continuous action problems such as controlling simulated high-dimensional physical embodied agents, as well as discrete action visual mazes. We find that even when the agent and default policies are learned at the same time, significant speed-ups can be achieved on a range of tasks. We consider several variations of the formulation, and discuss its connection to several ideas in the wider literature, including information bottleneck, and variational formulations of the EM algorithm for learning generative models. Throughout this paper we use s t and a t to denote the state and action at time step t, and rps, aq the instantaneous reward for the agent if it executes action a in state s. We denote the history up to time t by x t " ps 1, a 1, . . ., s t q, and the whole trajectory by τ " ps 1, a 1, s 2, . . .q. Our starting point is the KL regularized expected reward objective Lpπ, π 0 q " E πτ "ř t γ t rps t, a t q´αγ t KL " πpa t |x t q}π 0 pa t |x t q ‰‰,where π is the agent policy (parameterized by θ and to be learned), π 0 the default policy, and E πτ r¨s is taken with respect to the distribution π τ over trajectories defined by the agent policy and system dynamics: π τ pτ q " pps 1 q ś t πpa t |x t qpps t`1 |s t, a t q. Note that our policies are history-dependent. KLrπpa t |x t q}π 0 pa t |x t qs is the Kullback-Leibler (KL) divergence between the agent policy π and a default or prior policy π 0 given history x t. The discount factor is γ P r0, 1s and α is a hyperparameter scaling the relative contributions of both terms. Intuitively, this objective expresses the desire to maximize the reward while also staying close to a reference behaviour defined by π 0. As discussed later, besides being a convenient way to express a regularized RL problem, it also has deep connections to probabilistic inference. One particular instantiation of eq. FORMULA0 is when π 0 is the uniform distribution (assuming a compact action space). In this case one recovers, up to a constant, the entropy regularized objective (e.g. BID55 BID8 BID11 BID44 BID13 : DISPLAYFORM0 This objective has been motivated in various ways: it prevents the policy from collapsing to a deterministic solution thus improving exploration, it encourages learning of multiple solutions to a task which can facilitate transfer, and it provides robustness to perturbations and model mismatch. One approximation of the entropy regularized objective is for the history dependent entropy to be used as an additional (auxiliary) loss to the RL loss; this approach is widely used in the literature (e.g. BID53 . While the motivations for considering the entropy regularized objective are intuitive and reasonable, the choice of regularizing towards an uniform policy is less obvious, particularly in cases with large or high dimensional action spaces. In this work we explore whether regularization towards more sophisticated default policies can be advantageous. Both objectives and can be generalized beyond the typical Markov assumption in MDPs. In particular, additional correlations among actions can be introduced, e.g. using latent variables BID13. This can be useful when, as discussed below, either π 0 or π are not given full access to the state, rendering the setup partially observed. 
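Since the objectives quoted earlier in this section were garbled during text extraction, here is a cleaned-up LaTeX rendering reconstructed from the surrounding definitions: the KL-regularized expected reward objective, the induced trajectory distribution, and the entropy-regularized special case obtained with a uniform default policy.

```latex
\mathcal{L}(\pi,\pi_0) \;=\; \mathbb{E}_{\pi_\tau}\Big[\textstyle\sum_t \gamma^t r(s_t,a_t)
  \;-\; \alpha\,\gamma^t\,\mathrm{KL}\big[\pi(a_t\mid x_t)\,\big\|\,\pi_0(a_t\mid x_t)\big]\Big],
\qquad
\pi_\tau(\tau) \;=\; p(s_1)\textstyle\prod_t \pi(a_t\mid x_t)\,p(s_{t+1}\mid s_t,a_t),
```

and, up to an additive constant when $\pi_0$ is uniform over a compact action space,

```latex
\mathcal{L}_{\mathrm{ent}}(\pi) \;=\; \mathbb{E}_{\pi_\tau}\Big[\textstyle\sum_t \gamma^t r(s_t,a_t)
  \;+\; \alpha\,\gamma^t\,\mathrm{H}\big[\pi(a_t\mid x_t)\big]\Big].
```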
In the following we will not explore such extensions, though note that we do work with policies $\pi(a_t \mid x_t)$ and $\pi_0(a_t \mid x_t)$ that depend on history $x_t$. Many works that consider the KL regularized objective either employ a simple or fixed default policy or directly work with the entropy formulation (e.g. BID40 BID8 BID11 BID13). In contrast, here we will be studying the possibility of learning the default policy itself, and the form of the subjective knowledge that this introduces to the learning system. Our guiding intuition, as described earlier, is the notion of a default behaviour that is executed in the absence of additional goal-directed information. Instances which we explore in this paper include a locomotive body navigating to a goal location, where the locomotion pattern depends largely on the body configuration and less so on the goal, and a 3D visual maze environment with discrete actions, where the typical action includes forward motion, regardless of the specific task at hand. To express the notion of a default behavior, which we also refer to as "goal-agnostic" (although the term should be understood very broadly), we consider the case where the default policy $\pi_0$ is a function (parameterized by $\phi$) of a subset of the interaction history up to time $t$, i.e. $\pi_0(a_t \mid x_t) = \pi_0(a_t \mid x^D_t)$, where $x^D_t$ is a subset of the full history $x_t$ and is the goal-agnostic information that we allow the default policy to depend on. We denote by $x^G_t$ the other (goal-directed) information in $x_t$ and assume that the full history is the disjoint union of both. The objective specializes to:
$$\mathcal{L}(\pi, \pi_0) = \mathbb{E}_{\pi_\tau}\Big[\textstyle\sum_t \gamma^t r(s_t, a_t) - \alpha \gamma^t \, \mathrm{KL}\big[\pi(a_t \mid x_t) \,\|\, \pi_0(a_t \mid x^D_t)\big]\Big]. \qquad (3)$$
By hiding information from the default policy, the system forces the default policy to learn the average behaviour over histories $x_t$ with the same value of $x^D_t$. If $x^D_t$ hides goal-directed information, the default policy will learn behaviour that is generally useful regardless of the current goal. We can make this precise by noting that optimizing the objective with respect to $\pi_0$ amounts to supervised learning of $\pi_0$ on trajectories generated by $\pi_\tau$, i.e. this is a distillation process from $\pi_\tau$ to $\pi_0$ BID17 BID41 BID34 BID48. In the nonparametric case, the optimal default policy $\pi_0$ can be derived (up to normalization) as:
$$\pi_0^*(a_t \mid x^D_t) \propto \sum_{\tilde{t}} \sum_{\tilde{x}_{\tilde{t}}} \pi_\tau(\tilde{x}_{\tilde{t}}) \, \mathbb{1}\big[\tilde{x}^D_{\tilde{t}} = x^D_t\big] \, \pi(a_t \mid \tilde{x}_{\tilde{t}}), \qquad (4)$$
where $\pi_\tau(\tilde{x}_{\tilde{t}})$ is the probability of seeing history $\tilde{x}_{\tilde{t}}$ at time step $\tilde{t}$ under the policy $\pi$, and the indicator $\mathbb{1}[\tilde{x}^D_{\tilde{t}} = x^D_t]$ is 1 if the goal-agnostic information of the two histories matches and 0 otherwise. It is also worth considering the effect of the objective in eq. (3) on the learned policy $\pi$. Since $\pi_0$ is learned alongside $\pi$ and not specified in advance, this objective does not favor any particular behavior a priori. Instead it will encourage a solution in which similar behavior will be executed in different parts of the state space that are similar as determined by $x^D_t$, since the policy $\pi$ is regularized towards the default policy $\pi_0$. More generally, during optimization of $\pi$ the default policy effectively acts like a shaping reward, while the entropy contained in the KL discourages deterministic solutions. Reinforcement learning objectives with information theoretic constraints have been considered by multiple authors BID50 BID47 BID49. Such constraints can be motivated by the internal computational limitations of the agent, which limit the rate with which information can be extracted from states (or observations) and translated into actions. Such capacity constraints can be expressed via an information theoretic regularization term that is added to the expected reward.
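As a concrete illustration of the information asymmetry and the distillation view above, here is a hedged sketch in which the default policy is given only an assumed proprioceptive slice of the observation (the first 20 of 30 dimensions, an arbitrary choice for the example) and is trained by maximum likelihood on actions sampled from the agent.

```python
# Hypothetical sketch of the information asymmetry: the default policy only sees
# proprioceptive features and is trained by distillation (negative log-likelihood)
# on actions produced by the agent policy.
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU(),
                                 nn.Linear(hidden, 2 * act_dim))

    def forward(self, obs):
        mu, log_sigma = self.net(obs).chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())

# obs = [proprio (assumed first 20 dims) | task features]; only proprio is shown to pi0
agent   = GaussianPolicy(obs_dim=30, act_dim=8)
default = GaussianPolicy(obs_dim=20, act_dim=8)

obs = torch.randn(64, 30)
actions = agent(obs).sample()
# distillation loss for pi0: supervised learning on the agent's own action distribution
distill_loss = -default(obs[:, :20]).log_prob(actions).sum(-1).mean()
distill_loss.backward()
```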
Specializing to our scenario, where the "information flow" to be controlled is between the goal-directed history information x G t and action a t (so that the agent prefers default, goal-agnostic, behaviour), consider the objective: DISPLAYFORM0 qs is positive (see BID1 . Re-introducing this into we find that the KL regularized objective in eq. can be seen as a lower bound to eq., where the agent has a capacity constraint on the channel between goal-directed history information and (future) actions. See section A in the appendix for a generalization including latent variables. In this light, we can see our work as a particular implementation of the information bottleneck principle, where we penalize the dependence on the information that is hidden from the default policy. The above setup also bears significant similarity to the training of variational autoencoders BID20 BID37 and, more generally the variational EM framework for learning latent variable models BID6 BID30. The setup is as follows. Given observations X " tx 1,... x N u the goal is to maximize the log marginal likelihood log p θ pX q " ř i log p θ px i q where p θ pxq " ş p θ px, zqdz. This marginal likelihood can be bounded from below by ř i E q φ pz|xiq rlog p θ px i |zq´log q φ pz|xiq p θ pzq s with q φ pz|x i q being a learned approximation to the true posterior p θ pz|x i q. This lower bound exhibits a similar information asymmetry between q and p as the one introduced between π and π 0 in the objective in eq.. In particular, in the multi-task case discussed in section 3 with one task per episode, x i can be seen to take the role of the task, log ppx i |zq that of the task reward, qpz|x i q that of task conditional policy, and ppzq the default policy. Therefore maximizing eq. can then be thought of as learning a generative model of behaviors that can explain the solution to different tasks. In practice the objective in eq. 3 can be optimized in different ways. A simple approach is to perform alternating gradient ascent in π 0 and π. Optimizing L with respect to π 0 amounts to supervised learning with π as the data distribution (distilling π into π 0). Optimizing π given π 0 requires solving a regularized expected reward problem which can be achieved with a variety of algorithms BID44 BID48 BID11 BID13 BID12.The specific algorithm choice in our experiments depends on the type of environment. For the continuous control domains we use SVG BID15 with experience replay and a modification for the KL regularized setting BID13 BID12. The SVG algorithm learns stochastic policies by backpropagation from the action-value function. We estimate the action value function using K-step returns and the Retrace operator for low-variance off-policy correction (see BID27 ; as well as BID13 ; BID39). For discrete action spaces we use a batched actor-critic algorithm (see BID7). The algorithm employs a learned state-value function and obtains value estimates for updating the value function and advantages for computing the policy gradient using K-step returns in combination with the V-trace operator for off-policy correction. All algorithms are implemented in batched distributed fashion with a single learner and multiple actors. In algorithm 1 we provide pseudo-code for actor-critic version of the algorithm with K-step returns. Details of the off-policy versions of the algorithms for continuous and discrete action spaces can be found in the appendix (section D). 
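For illustration only, the following is a rough, simplified on-policy sketch of the KL-regularized actor-critic update described above (cf. Algorithm 1); it omits the replay buffer and the Retrace/V-trace corrections used in the actual off-policy implementations, and the value, agent and default modules are assumed interfaces rather than the paper's code.

```python
# Hypothetical single update on a K-step window with KL regularization towards pi0.
import torch
from torch.distributions import Normal, kl_divergence

def actor_critic_update(obs, proprio, actions, rewards, agent, default, value,
                        alpha=0.1, gamma=0.99):
    """All tensors have leading dimension K (the window length)."""
    pi, pi0 = agent(obs), default(proprio)
    # gradients of the KL flow into pi only; pi0 is trained by distillation below
    pi0_stop = Normal(pi0.loc.detach(), pi0.scale.detach())
    kl = kl_divergence(pi, pi0_stop).sum(-1)                  # [K]

    v = value(obs).squeeze(-1)                                # [K]
    returns = torch.zeros_like(rewards)
    next_ret = v[-1].detach()                                 # bootstrap from last value
    for t in reversed(range(len(rewards))):                   # KL-regularized K-step return
        next_ret = rewards[t] - alpha * kl[t].detach() + gamma * next_ret
        returns[t] = next_ret

    advantage = (returns - v).detach()
    policy_loss  = -(pi.log_prob(actions).sum(-1) * advantage).mean() + alpha * kl.mean()
    value_loss   = 0.5 * ((returns - v) ** 2).mean()
    default_loss = -pi0.log_prob(actions).sum(-1).mean()      # distill pi into pi0
    return policy_loss + value_loss + default_loss
```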
There are several well established connections between certain formulations of the reinforcement learning literature and concepts from the probabilistic modeling literature. The formalisms are often closely related although derived from different intuitions, and with different intentions. for t = 0, K, 2K,... T do rollout partial trajectory: τ t:t`K " ps t, a t, r t... r t`K q compute KL: DISPLAYFORM0 Default policy loss: DISPLAYFORM1 Maximum entropy reinforcement learning, stochastic optimal control, and related approaches build on the observation that some formulation of the reinforcement learning problem can be interpreted as exact or approximate variational inference in a probabilistic graphical model in which the reward function takes the role of log-likelihood (e.g. BID55 BID19 BID52 . While the exact formulation and algorithms vary, they in an entropy or KL regularized expected reward objective. These algorithms were originally situated primarily in the robotics and control literature but there has been a recent surge in interest in deep reinforcement learning community (e.g. BID8 BID44 BID28 BID11 BID13 BID12 .Related but often seen as distinct is the familiy of expectation maximization policy search algorithms (e.g. BID35 BID36 BID23 BID26 BID4 BID0 . These cast policy search as an alternating optimization problem similar to the EM algorithm for learning probabilistic models. They differ in the specific implementation of the equivalents of the E and M steps; intuitively the default policy is repeatedly replaced by a new version of the policy. The DISTRAL algorithm BID48 as well as the present paper can be seen as taking an intermediate position: unlike in the class of RL-as-inference algorithms the default policy is not fixed but learned, but unlike in the classical EM policy search the final of the optimization remains regularized since the default policy is constrained relative to the policy. As explained above this can be seen as analogous to the relative roles of learned model and observation specific posterior in fitting a generative model. Similar to DISTRAL, Divide and Conquer BID9 learns an ensemble of policies, each specializing to a particular context, which are regularized towards one another via a symmetric KL penalty, with the behavior of the ensemble distilled to a single fixed policy. In concurrent work BID10 propose an information bottleneck architecture for policies with latent variables that leads to a KL-regularized formulation similar to the one described in Appendix A.2. The information bottleneck is implemented in latent space and the default policy is obtained by marginalization with a goal-agnostic prior. An important feature of EM policy search and other policy gradient algorithms is the presence of a KL constraint that limits the relative change of the policy to some older version across iterations to control for the rate of change in the policy (e.g. BID43 BID15 BID45 BID16 BID29 . The constraint can be implemented in different ways, and collectively the algorithms are often classified as "trust region" methods. Note that for a KL regularized objective to be a trust region BID31, additional assumptions need to hold. In principle, as an optimization technique, the critical points of the KL regularized objective for some function f pθq have to be, provably, the same as for the non-regularized objective. This is not trivial to show unless the trust region for step k is around θ k . 
In our case, there is no such guarantee even if we remove the asymmetry in information between default policy and policy or make the default policy be an old copy of the policy. Other related works motivated from an optimization perspective include Deep Mutual Learning BID54 applied in supervised learning, where KL-regularization is used with a learned prior that receives the same amount of information as the trained model. introduces EWC to address catastrophic forgetting, where a second order Taylor expansion of the KL, in a KL-regularized objective, forces the main policy to stay close to solutions of previously encountered tasks. also relies on a KL-regularized objective to ensure policies explored in a curriculum stay close to each other. Conceptually distinct but formally closely related to maximum entropy and KL-regularized formulations are computational models of bounded rationality (e.g. BID50 BID32 BID47 BID40 BID49 which introduce information constraints to account for the agent's internal computational constraints on its ability to process information. As discussed in section 4 the present formulation can be seen as a more general formulation of the idea. In our experiments, we study the effect of using a learned default policy to regularize the behavior of our agents, across a wide range of environments spanning sparse and dense reward tasks. In particular, we evaluate the impact of conditioning the default policy on various information sets x D on the learning dynamics, and evaluate the potential of pretrained default policies for transfer learning. In these experiments, we consider two streams of information which are fed to our agents: task specific information (task) and proprioception (proprio), corresponding to walker (body) specific observations (joint angles etc.). Walls task with humanoid, where the goal is avoid walls while running through a terrain. We consider three walkers: jumping ball with 3 degrees of freedom (DoF) and 3 actuators; quadruped with 12 DoF and 8 actuators; humanoid with 28 DoF and 21 actuators. The task is specified to the agent either via an additional feature vector (referred to as feature-tasks) or in the form of visual input (vision-task). The tasks differ in the type of reward: in sparse reward tasks a non-zero reward is only given when a (sub-)goal is achieved (e.g. the target was reached); in dense reward tasks smoothly varying shaping reward is provided (e.g. negative distance to the target). We consider the following tasks. Walking task, a dense-reward task based on features. The walker needs to move in one of four randomly sampled directions, with a fixed speed; the direction being resampled half-way through the episode. Walls task, a dense-reward vision-task. Here the walker has to traverse a corridor while avoiding walls. Go to one of K targets task, a sparse-reward feature-based task. The walker has to go to one of K randomly sampled targets. For K=1, the target can either reappear within the episode (referred to as the moving target task) or the episode can end upon reaching the target. Move one box to one of K targets, a sparse-reward feature-based-task. The walker has to move a box to one of K targets, and optionally, go on to one of the remaining targets. The latter is referred to as the move one box to one of K targets and go to another target). Foraging in the maze task, a sparse-reward vision-task. The walker collects apples in a maze. FIG2 shows visualizations of the walkers and some of the tasks. 
Refer to appendix C for more details. Experimental Setup As baseline, we consider policies trained with standard entropy regularization. When considering the full training objective of eq. 1, the default policy network shares the same structure as the agent's policy. In both cases, hyper-parameters are optimized on a per-task basis. We employ a distributed actor-learner architecture BID7: actors execute recent copies of the policy and send data to a replay buffer of fixed size; while the learner samples short trajectory windows from the replay and computes updates to the policy, value, and default policy. We experimented with a number of actors in t32, 64, 128, 256u (depending on the task) and a single learner. Results with a single actor are presented in appendix B. Unless otherwise mentioned, we plot average episodic return as a function of the number of environment transitions processed by the learner 1. Each experiment is run with five random seeds. For more details, see appendix D.2We consider three information sets passed to the default policy: proprioceptive, receiving only proprioceptive information; task-subset, receiving proprioceptive and a subset of task-specific information; full-information, receiving the same information as the policy. Results for the sparse-reward tasks with complex walkers. Left: go to moving target task with humanoid. Center: foraging in the maze with quadruped. Right: moving one box to one of two targets and go to another target task with quadruped. The legends denote additional to the proprioception, information passed to the default policy (except baseline, where we do not use default policy).The main finding of our experiments is that the default policy with limited task information provides considerable speed-up in terms of learner steps for the sparse-reward tasks with complex walkers (quadruped, humanoid). The on these tasks are presented in FIG3. More cases are covered in the appendix E.Overall, the proprioceptive default policy is very effective and gives the biggest gains in the majority of tasks. Providing additional information to the default policy, leads to an improvement only in a small number of cases (figure 3, right and appendix E.3). In these cases, the additional information (e.g. box position), adds useful inductive bias for the policy learning. For the dense-reward tasks or for a simple walker body adding the default policy has limited or no effect (see appendix E.1, E.2). We hypothesize that the absence of gain is due to the relative simplicity of the regular policy learning versus the KL-regularized setup. In the case of dense-reward tasks the agent has a strong reward signal. For simple walkers, the action space is too simple to require sophisticated exploration provided by the default policy. Finally, with full information in the default policy, the optimal default policy would exactly copy the agent policy, which would not provide additional learning signal beyond the regular policy learning. In all these cases, the default policy will not be forced to generalize across different contexts and hence not provide a meaningful regularization signal. We analyze the agent behavior on the go to moving target task with a quadruped walker. We illustrate the agent trajectory for this task in FIG4, left. The red dot corresponds to the agent starting position. 
The green stars on the left and central figures correspond to the locations of the targets with Center: KL divergence from the agent policy to the proprioceptive default policy plotted over time for the same trajectory. Right: Performance of the transfer on move one box to one of 3 targets task with quadruped. The legend whether the default policy is learned or is transferred. Furthermore, it specifies the task from which the default policy is transferred as well as additional information other than the proprioceptive information that the default policy is conditioned on, if any.blue numbers indicating the order of achieving the targets. The yellow dots on the left and central curves indicate the segment (of 40 time steps) near the target. In FIG4, center, we show the KL divergence, KLrπ}π 0 s, from the agent policy to the proprioceptive default policy. We observe that for the segments which are close to the target (yellow dots near green star), the value of the KL divergence is high. In these segments the walker has to stop and turn in order to go to another target. It represents a deviation from the standard, walking behavior, and we can observe it as spikes in the KL. Furthermore, for the segments between the targets, e.g. 4 -> 5, the KL is much lower. We additionally explore the possibility of reusing pretrained default policies to regularize learning on new tasks. Our transfer task is moving one box to one of 2 targets and going to another target task with the quadruped. We consider different default policies: GTT proprio: proprioceptive information only trained on going to moving target task (GTT); MB proprio: proprioceptive information only trained on moving one box to one target task (MB); MB box: similar MB proprio, but with box position information as additional input. The are given in figure 4, right. We observe a significant improvement in learning speed transferring the pretrained default policies to the new task. Performance improves as the trajectory distribution modeled by the default policy is closer to the one appropriate for the transfer task (compare GTT proprio with MB proprio; and MB proprio with MB box). Ablative Analysis To gain deeper insights into our method, we compare different forms of regularization of the standard RL objective: entropy bonus -adding an entropy term Hpπp¨|s t qq to the per-timestep actor loss; entropy regularization -optimizing the objective; KL bonus -adding the KL-divergence term KL " πpa t |s t q}π 0 pa t |s t q ‰ from the agent policy to the default one to the per-timestep actor loss; KL-regularization -optimizing the objective; KL-regularization to the old policy -optimization of the objective 1 where regularization is done wrt. an older version of the main policy (updated every 100 steps). The default policy receives only proprioceptive information in these experiments. The task is go to moving target. As can be seen in FIG5 left, all three KL-based variants improve performance over the baseline, but regularizing against the information restricted default policy outperforms regularization against an old version of the policy. FIG5 center, demonstrates that the benefit of the default policy depends on the reward structure. When replacing the sparse reward with a dense shaping reward, proportional to the inverse distance from the walker to the target, our method and the baseline perform similarly, which is consistent with dense-reward . 
Finally, we assess the benefit of the KL-regularized objective 1 when used with an idealized default policy. We repeat the go-to-target experiment with a pretrained default policy on the same task. FIG5 right, shows a significant difference between the baseline and different regularization variants: using the pretrained default policy, learning the default policy alongside the main policy or using a pretrained expert (default policy with access to the full state). This suggests that large gains may be achievable in situations when a good default policy is known a priori. We performed the same analysis for the dense reward but we did not notice any gain. The speed-up from regularizing to the pretrained expert is significant, however it corresponds to regularizing against an existing solution and can thus primarily be used as a method to speed-up the experiment cycles, as it was demonstrated in kickstarting framework BID42 ).Finally, we study impact of the direction of the KL in objective 1 on the learning dynamics. Motivated by the work in policy distillation BID41 we flip the KL and use KL " π 0 pa t |s t q}πpa t |s t q ‰ instead of the described before KL " πpa t |s t q}π 0 pa t |s t q ‰. The experiments showed that there was no significant difference between these regularization schemes, which suggests that the idea of learned default policy can be viewed from student-teacher perspective, where default policy plays the role of the teacher. This teacher can be used in a new task. For the details, please refer to the appendix E.6. We also evaluate our method on the DMLab-30 set of environments. DMLab BID3 provides a suite of rich, first-person environments with tasks ranging from complex navigation and laser-tag to language-instructed goal finding. Recent works on multitask training BID7 in this domain have used a form of batched-A2C with the V-trace algorithm to maximize an approximation of the entropy regularized objective described earlier, where the default policy is a uniform distribution over the actions. Typically, the agent receives visual information at each step, along with an instruction channel used in a subset of tasks. The agent receives no task identifier. We adopt the architecture employed in previous work BID7 in which frames, past actions and rewards are passed successively through a deep residual network and LSTM, finally predicting a policy and value function. All our experiments are tuned with population-based training BID18. Further details are provided in appendix D.1.DMLab exposes a large action space, specifically the cartesian product of atomic actions along seven axes. However, commonly a human-engineered restricted subset of these actions is used at training and test time, simplifying the exploration problem for the agent. For example, the used action space has a forward bias, with more actions ing in the agent moving forward rather than backwards. This helps with exploration in navigation tasks, where even a random walk can get the agent to move away from the starting position. The uniform default policy is used on top of this human engineered small action space, where its semantics are clear. In this work, we instead consider a much larger combinatorial space of actions. We show that a pure uniform default policy is in fact unhelpful when human knowledge is removed from defining the right subset of actions to be uniform over, and the agent under-performs. 
Learning the default policy, even in the extreme case when the default policy is not conditioned on any state information, helps recovering which actions are worth exploring and leads to the emergence of a useful action space without any hand engineering. FIG6 shows the of our experiments. We consider a flat action space of 648 actions, each moving the agent in different spatial dimensions. We run the agent from BID7 as baseline which is equivalent to considering the default policy to be a uniform distribution over the 648 actions, and three variants of our approach, where the default policy is actually learnt. FORMULA0 ) that uses uniform distribution over actions as a default policy and three different possible default policies. Center, the entropy for the vector default policy over learning. Right, marginalized distribution over few actions of interest for the vector default policy. For feed forward default policy, while the agent is recurrent, the default policy is not. That is the policy π is conditioned on the full trace of observed states s 1, a 1,..s t, while the default policy π 0 is conditioned only on the current frame a t´1, s t. Given that most of the 30 tasks considered require memory in order to be solvable, the default policy has to generalize over important task details. LSTM default policy on the other hand, while being recurrent as the agent, it observes only the previous action a t´1 and does not receive any other state information. In this instance, the default policy can only model the most likely actions given recent behaviour a 1,..a t´1 in absence of any visual stimuli. For example, if previous actions are moving forward, the default policy might predict moving forward as the next action too. This is because the agent usually moves consistently in any given direction in order to navigate efficiently. Finally, the vector default policy refers to a default policy that is independent of actions and states (i.e. average behaviour over all possible histories of states and actions).Using any of the default policies outperforms the baseline, with LSTM default policy slightly underperforming compared with the others. The vector default policy performs surprisingly well, highlighting that for DMLab defining a meaningful action space is extremely important for solving the task. Our approach can provide a mechanism for identifying this action space without requiring human expert knowledge on the tasks. Note in middle plot, FIG6, that the entropy of the default policy over learning frames goes down, indicating that the default policy becomes peaky and is quite different from the uniform distribution which the baseline assumes. Note that when running the same experiments with the original human-engineered smaller action space, no gains are observed. This is similar to the continuous control setup, corresponding to changing the walker to a simple one and hence converting the task into a denser reward one. Additionally, in figure 6 right, for the vector default policy, we show the probability of a few actions of interest by marginalizing over all other actions. We notice that the agent has a tendency of moving forward 70%, while moving backwards is quite unlikely 10%. The default policy discovers one element of the human defined action space, namely forward-bias which is quite useful for exploring the map. The uniform bias would put same weight for moving forward as for moving backwards, making exploration harder. 
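As an illustration of the "vector" default policy discussed above, the sketch below represents the entire default policy as a single learnable logit vector over the 648 composite actions, trained by distillation from the agent's behaviour; the constant FORWARD_ACTION_ID and the loss weighting are assumptions for the example, not values from the paper.

```python
# Hypothetical sketch of a state-independent ("vector") default policy over 648 actions.
import torch
import torch.nn.functional as F

NUM_ACTIONS = 648
default_logits = torch.zeros(NUM_ACTIONS, requires_grad=True)   # the whole default policy

def default_policy_loss(agent_actions):
    # supervised / distillation loss: log-likelihood of actions taken by the agent
    log_p0 = F.log_softmax(default_logits, dim=-1)
    return -log_p0[agent_actions].mean()

def kl_to_default(agent_logits):
    # per-step KL[pi || pi0], scaled by alpha and added to the agent's actor loss
    log_pi = F.log_softmax(agent_logits, dim=-1)
    log_p0 = F.log_softmax(default_logits.detach(), dim=-1)
    return (log_pi.exp() * (log_pi - log_p0)).sum(-1)

# after training, e.g. the marginal probability of a "move forward" action would be
# F.softmax(default_logits, -1)[FORWARD_ACTION_ID]   (FORWARD_ACTION_ID is hypothetical)
```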
We also note that the agent has a tendency to turn right and look right. Given that each episode involves navigating a new sampled map, such a bias provides a meaningful exploration boost, as it suggest a following the wall strategy, where at any new intersection the agent always picks the same turning direction (e.g. right) to avoid moving in circles. But as expected, since neither looking up or looking down provides any advantage, these actions are equally probable. In this work we studied the influence of learning the default policy in the KL-regularized RL objective. Specifically we looked at the scenario where we enforce information asymmetry between the default policy and the main one. In the continuous control, we showed empirically that in the case of sparse-reward tasks with complex walkers, there is a significant speed-up of learning compared to the baseline. In addition, we found that there was no significant gain in dense-reward tasks and/or with simple walkers. Moreover, we demonstrated that significant gains can be achieved in the discrete action spaces. We provided evidence that these gains are mostly due to the information asymmetry between the agent and the default policy. Best are obtained when the default policy sees only a subset of information, allowing it to learn task-agnostic behaviour. Furthermore, these default polices can be reused to significantly speed-up learning on new tasks. In this appendix we derive the connection between KL-regularized RL and information bottleneck in detail. For simplicity we assume that x D t is empty, consider dependence only on current state s t and do not use subscript by t in detailed derivations for notational convenience. We also apologize for some notational inconsistencies, and will fix them in a later draft. DISPLAYFORM0 The simple formulation of the information bottleneck corresponds to maximizing reward while minimizing the per-timestep information between actions and state (or a subset of state, like the goal): DISPLAYFORM1 Upper-bounding the mutual information term: DISPLAYFORM2 DISPLAYFORM3 Thus DISPLAYFORM4 i.e. the problem turns into one of KL-regularized RL. For policies with latent variables such as πpa|sq " ş πpa|zqπpz|sqdz we obtain:MIrA; Ss " ż πpa, sq log πpa|sqdads´ż πpaq log πpaqda ď π ż πpa, sq log πpa|sqdads´ż πpaq logπ 0 paqdaas before. We choose π 0 paq " ş πpa|zqπ 0 pzqdz, then: DISPLAYFORM0 and thusMIrA; Ss ď ż πpa, sq log qpa|sqdads´ż πpaq logπ 0 paqda ď ż πpa, sq log πpa|sqdads´ż πpa, sq log πpa|sqdads´ż πpz, sq log π 0 pzq DISPLAYFORM1 Therefore: DISPLAYFORM2 Thus, the KL regularized objective discussed above can be seen as implementing an information bottleneck. Different forms of the default policy correspond to restricting the information flow between different components of the interaction history (past states or observations), and to different approximations to the ing mutual information penalties. This perspective suggests two different interpretations of the KL regularized objective discussed above: We can see the role of the default policy implementing a way of restricting information flow between (past) states and (future) actions. An alternative view, more consistent with the analogy between RL and probabilistic modeling invoked above is that of learning a "default" behavior that is independent of some aspect of the state. (Although the information theoretic view has recently gained more hold in the probabilistic modeling literature, too (e.g. BID1). 
We use a distributed off-policy setup similar to BID38. There is one learner and multiple actors. These are essentially the instantiations of the main agent used for different purposes. Each actor is the main agent version which receives the copy of parameters from the learner and unrolls the trajectories in the environment, saving it to the replay buffer of fixed size 1e6. The learner is the agent version which samples a batch of short trajectories windows (window size is defined by unroll length) from the replay buffer, calculates the gradients and updates the parameters. The updated parameters are then communicated to the actors. Such a setup speeds-up learning significantly and makes the final performance of the policy better. We compare the performance of on go to moving target task with 1 and 32 actors. From figure 7, we see that the effect of the default policy does not disappear when the number of actor decreases to 1, but the learning becomes much slower, noisier and weaker. Walkers visualization is provided in figure 8. Below we give a detaatiled description of each continuous control task we studied. Walking task. Type. Dense-reward feature-based-task. Description. Each half of the episode, a random direction among 4 (left, right, forward and backwards) is sampled. Task information is specified via a one-hot encoding of the required direction. The walker is required to move in this direction with the target speed v t and receives the reward r. Reward. r " exp´| vcur´vt| 2. Technical details. Target speed, v t " 3. The episode length is 10 seconds. For the humanoid task we use the absolute head height termination criteria: h ă 0.95. Type. Dense-reward vision-task. Description. Walker is required to run through a terrain and avoid the walls. The task-specific information is a vision input. It receives the reward r defined as a difference between the current walker speed v cur and the target speed v t along the direction of the track. Reward. r " exp´| Technical details. Target speed, v t " 3. The episode length is 45 seconds. For the humanoid task we use the absolute head height termination criteria: h ă 0.9.Go to one of K single targets. Type. Sparse-reward feature-based-task. Description. On an infinite floor, there is a finite area of size 8x8 with K randomly placed targets. The walker is also randomly placed in a finite area. The walker's initial position is also randomly placed on the finite area. The walker is required to one of the K targets, specified via command vector. Once it achieves the target, the episode terminates and the walker receives the reward r. Reward. r " 60. Technical details. The episode length is 20 seconds. Go to one moving target. Type. Sparse-reward feature-based-task. Description. Similar to the previous one, but there is only one target and once the walker achieves it, the target reappears in a new random place. The walker receives r for 10 consecutive steps staying on the target before the target reappears in a new random position. Reward. r " 1. Technical details. The episode length is 25 seconds. Move one box to one of the K targets. Type. Sparse-reward feature-based-task. Description. There is a finite floor of size 3x3 padded with walls with K randomly placed targets and one box. The walker is required to move this box to one of the specified targets. Once the box is placed on the target, the episode terminates and the walker receives the reward r. Reward. r " 60. Technical details. The episode length is 30 seconds. 
Control timestep is 0.05 for quadruped and 0.025 for jumping ball. Move one box to one of the K targets and go to another. Type. Sparse-reward feature-based-task. Description. Similar to the previous one, but the walker is also required to go to another target (which is different from the one where it must place the box on). The walker receives the a r task for each task solved, and a r end if it solves both tasks. The other parameters are the same. Reward. r task " 10, r end " 50. Technical details. Same as in the previous task. Foraging in the maze. Type. Sparse-reward vision-task. Description. There is a maze with 8 apples which walker must collect. For each apple, it receives reward r. The episode terminates once the walker collects all the apples or the time is elapsed. Reward. r " 1. Technical details. The episode length is 90 seconds. Control timestep is 0.025 for jumping ball, and 0.05 for quadruped. Our agents run in off-policy regime sampling the trajectories from the replay buffer. In practice, it means that the trajectories are coming from the behavior (replay buffer) policy π b, and thus, the correction must be applied (specified below). Below we provide architecture details, baselines, hyperparmaeters as well as algorithm details for discrete and continuous control cases. In discrete experiments, we use V-trace off-policy correction as in BID7. We reuse all the hyperparameters for DMLab from the mentionned paper. At the top of that, we add default policy network and optimize the corresponding α parameter using population-base training. The difference with the setup in BID7 is that they use the human prior over actions (table D.2 in the mentionned paper), which in 9-dimensional action space. In our work, we take the rough DMLab action space, consisting of all possible rotations, and moving forward/backward, and "fire" actions. It in the action space of dimension 648. It make the learning much more challenging, as it has to explore in much larger space. The agent network (see FIG0 is divided into actor and critic networks without any parameter sharing. In the case of feature-based-task, the task-specific information is encoded by one layer MLP with ELU activations. For the vision-task, we use a 3-layer ResNet BID14 . The encoded task information is then concatenated with the proprioceptive information and passed to the agent network. The actor network encodes a Gaussian policy, N pμ,σq, by employing a two-layer MLP, with mean µ and log variance log σ as outputs and applying the following processing procedures: DISPLAYFORM0 where f is a sigmoid function: DISPLAYFORM1 The critic network is a two-layer MLP and a linear readout. The default policy network has the same structure as actor network, but receives a concatenation of the proprioceptive information with only a subset (potentially, empty) of a task-specific information. There is no parameter sharing between the agent and the default policy. ELU is used as activation everywhere. The exact actor, critic and default policy network architectures are described below. We tried to use LSTM for the default policy network instead of MLP, but did not see a difference. We use separate optimizers and learning rates β π, β Q, β π 0 for the actor, critic and default policy networks correspondingly. For each network (which we call online), we also define the target network, similar to the target Q-networks BID24. The target networks are updated are updated in a slower rate than the online ones by copying their parameters. 
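A hedged sketch of the Gaussian actor head described above: the exact squashing constants are not recoverable from the text, so the standard-deviation bounds below are assumptions; only the overall structure (a two-layer MLP producing mean and log variance, followed by sigmoid-based processing) follows the description.

```python
# Hypothetical actor head: mean and sigmoid-squashed standard deviation.
import torch
import torch.nn as nn
from torch.distributions import Normal

class ActorHead(nn.Module):
    def __init__(self, in_dim, act_dim, hidden=200, min_sigma=1e-3, max_sigma=1.0):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ELU(),
                                 nn.Linear(hidden, 2 * act_dim))
        self.min_sigma, self.max_sigma = min_sigma, max_sigma   # assumed bounds

    def forward(self, features):
        mu, log_sigma = self.mlp(features).chunk(2, dim=-1)
        # f is a sigmoid: squash the log-variance output into a bounded std-dev range
        sigma = self.min_sigma + (self.max_sigma - self.min_sigma) * torch.sigmoid(log_sigma)
        return Normal(mu, sigma)
```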
We assume that the trajectories are coming from the replay buffer B. To correct for being off-policy, we make use of the Retrace operator (see BID27). This operator is applied to the Q function essentially introducing the importance weights. We will note RQ the action for this operator. Algorithm 2 is an off-policy version with retraced Q function of the initial algorithm 1.We use the same update period for actor and critic networks, P a and a different period for the default network P d. The baseline is the agent network (see FIG0) without the default policy with an entropy bonus λ. All the hyperparameters of the baseline are tuned for each task. For each best baseline hyperparameters configuration, we tune the default policy parameters. When we use the default policy, we do not have the entropy bonus. Instead, we have a regularisation parameter α. The other parameteres which we consider are: batch size, unroll length. Below we provide the hyperparameters for each of the task. The following default hyperparameters are used unless some particular one is specified.sampling actions from it and backpropagating through Q. In this algorithm, we learn a value function V using V-trace BID7 ) and the policy is updated using an off-policy corrected policy gradient with empirical returns. The for the dense-reward tasks are given in FIG9. We observe little difference of using default policy comparing to the baseline. In the walls task, we also consider the default policy with global information, such as the orientation, position and speed in the global coordinates. We do not observe a significant difference between using the default policy and the baseline. The reason for this, we believe, is that the agent is being trained very quickly by seeing a strong reward signal, so the default policy cannot catch it up. The for the sparse reward tasks with jumping ball are given in FIG9. We see little difference of using default policy comparing to the baseline. Our hypothesis consists in the fact that since the default policy affects the policy by regularizing the state-conditional action distribution (policy), for too simple actions space such is given here (3 actions), this effect is not strong enough. Center: moving one box to one target. Right: foraging in the maze. The legends denote additional to the proprioception, information passed to the default policy (except baseline, where we do not use default policy). In this section, we provide more for the sparse reward tasks. In FIG0 the for going to one of K targets task with quadruped are presented. The proprioceptive default policy gives significant gains comparing to others. What interesting is that when the number of targets K increases, the baseline performance drops dramatically, whereas the proprioceptive default policy solve the task reliably. Our hypothesis is that the default policy learns quickly the default walking behavior which becomes very helpful for the agent to explore the floor and search for the target. FIG0: Results for go to one of K targets tasks with quadruped. Left: go to 1 target. Center: go to one of 2 targets. Right: go to one of 3 targets. The legends denote additional to the proprioception, information passed to the default policy (except baseline, where we do not use default policy).We also provide the for move box to one of K targets task, where K " 1, 2, 3, and move box to one of two targets task with go to another. The are given in figure 12. Similar effect occurs here. 
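Referring back to the Retrace correction mentioned at the start of this appendix section, the following is an illustrative sketch of how retraced Q targets could be computed on a replayed trajectory window; it is not the authors' implementation, and the tensor interfaces (detached target-network outputs, per-step log-probabilities) are assumptions.

```python
# Hypothetical Retrace targets (Munos et al., 2016): backward recursion with
# truncated importance weights c_t = lambda * min(1, pi(a_t|s_t) / b(a_t|s_t)).
import torch

@torch.no_grad()
def retrace_targets(rewards, q_values, expected_q_next, log_pi, log_b,
                    gamma=0.99, lam=1.0):
    """All inputs are [T] tensors for one replayed window.
    q_values[t]        = Q(s_t, a_t) from the target critic
    expected_q_next[t] = E_{a ~ pi}[Q(s_{t+1}, a)]  (bootstrap under the online policy)
    log_pi / log_b     = log-prob of the stored action under pi / the behaviour policy."""
    T = len(rewards)
    c = lam * torch.clamp((log_pi - log_b).exp(), max=1.0)    # truncated importance weights
    delta = rewards + gamma * expected_q_next - q_values      # TD errors
    targets = torch.zeros_like(rewards)
    acc = torch.zeros(())
    for t in reversed(range(T)):
        cont = c[t + 1] if t + 1 < T else torch.zeros(())
        acc = delta[t] + gamma * cont * acc
        targets[t] = q_values[t] + acc
    return targets   # the critic is regressed towards these retraced values
```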
Starting from left, first: move one box to one of 2 targets with go to another. Second: move one box to 1 target. Third: move one box to one of 2 targets. Forth: move one box to one of 3 targets. The legends denote additional to the proprioception, information passed to the default policy (except baseline, where we do not use default policy). In this section, we provide additional transfer experiment for the range of the tasks. They are given in FIG0. In the first two cases we see that proprioceptive default policy from the go to target task gives a significant boost to the performance comparing to the learning from scratch. We also observe, that for the box pushing tasks, the default policy with the box position significantly speeds up learning comparing to other cases. We believe it happens because this default policy learns the best default behavior for these tasks possible: going to the box and push it. For the most complicated task, move one box to one of two targets and go to another one, 13, right, the box default policy makes a big difference: it makes the policy avoid being stuck in go to target behavior (line with reward of 10).Additional for the transfer experiments are given in FIG0. We observe the same effect happening: whereas the baseline performance drops significantly, the agent with default policy stays E.5 ABLATION WALLS QUADRUPED Ablations for the walls quadruped are given in figure 14. Center: move one box to one of two targets. Right: move one box to one of two targets and go to another one. The legend whether the default policy is learned or is transferred. Furthermore, it specifies the task from which the default policy is transferred as well as additional information other than the proprioceptive information that the default policy is conditioned on, if any. Published as a conference paper at ICLR 2019 The for having the different order of the default policy in the KL-term (KLrπ||π 0 s or KLrπ 0 ||πs) for go to moving target task with quadruped walker are shown in FIG0. We use this term either in per time step actor loss (auxiliary loss) or as a regularizer by optimizing the objective 1 (with different order of KL). We do not observe significant difference.
Limiting state information for the default policy can improve performance, in a KL-regularized RL framework where both agent and default policy are optimized together
1,794
scitldr
When an image classifier makes a prediction, which parts of the image are relevant and why? We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision? Producing an answer requires marginalizing over images that could have been seen but weren't. We can sample plausible image in-fills by conditioning a generative model on the rest of the image. We then optimize to find the image regions that most change the classifier's decision after in-fill. Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image. Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods. The decisions of powerful image classifiers are difficult to interpret. Saliency maps are a tool for interpreting differentiable classifiers that, given a particular input example and output class, computes the sensitivity of the classification with respect to each input dimension. BID3 and BID2 cast saliency computation an optimization problem informally described by the following question: which inputs, when replaced by an uninformative reference value, maximally change the classifier output? Because these methods use heuristic reference values, e.g. blurred input BID3 or random colors BID2, they ignore the context of the surrounding pixels, often producing unnatural in-filled images (Figure 2). If we think of a saliency map as interrogating the neural network classifier, these approaches have to deal with a somewhat unusual question of how the classifier responds to images outside of its training distribution. To encourage explanations that are consistent with the data distribution, we modify the question at hand: which region, when replaced by plausible alternative values, would maximally change classifier output? In this paper we provide a new model-agnostic framework for computing and visualizing feature importance of any differentiable classifier, based on variational Bernoulli dropout BID4. We marginalize out the masked region, conditioning the generative model on the non-masked parts of the image to sample counterfactual inputs that either change or preserve classifier behavior. By leveraging a powerful in-filling conditional generative model we produce saliency maps on ImageNet that identify relevant and concentrated pixels better than existing methods. Gradient-based approaches BID12 BID13 BID17 BID10 ) derive a saliency map for a given input example and class target by computing the gradient of the classifier output with respect to each component (e.g., pixel) of the input. The reliance on the local gradient information induces a bias due to gradient saturation or discontinuity in the DNN activations BID11. BID0 observed that some gradientbased saliency computation reflect an inductive bias due to the convolutional architecture, which is independent of the network parameter values. To explain its response to a particular input x we partition the input x into masked (unobserved) region x r and their complement x = x r ∪ x \r. Then we replace the x r with uninformative reference valuê x r to test which region x r is important for classifier's output p M (c|x r, x \r). Heuristic in-filling BID3 computesx r ad-hoc such as image blur. This biases the explanation when samples [x r, x \r] deviate from the data distribution p(x r, x \r). 
(1c) We instead sample x r efficiently from a conditional generative model x r ∼ p G (x r |x \r) that respects the data distribution. Reference-based approaches analyze the sensitivity of classifier outputs to the substitution of certain inputs/pixels with an uninformative reference value. BID11 linearly approximates this change in classifier output using an algorithm resembling backpropagation. This method is efficient and addresses gradient discontinuity, but ignores nonlinear interactions between inputs. BID1 optimizes a variational bound on the mutual information between a subset of inputs and the target, using a variational family that sets input features outside the chosen subset to zero. In both cases, the choice of value as reference limits applicability to simple image domains with static like MNIST. BID18 computes the saliency of a pixel (or image patch) by treating it as unobserved and marginalizing it out, then measuring the change in classification outcome. This approach is similar in spirit to ours. The key difference is that where BID18 iteratively execute this computation for each region, we leverage a variational Bernoulli distribution to efficiently search for optimal solution while encouraging sparsity. This reduces computational complexity and allows us to model the interaction between disjoint regions of input space. BID3 computes saliency by optimizing the change in classifier outputs with respect to a perturbed input, expressed as the pixel-wise convex combination of the original input with a reference image. They offer three heuristics for choosing the reference: mean input pixel value (typically gray), Gaussian noise, and blurred input. BID2 amortize the cost of estimating these perturbations by training an auxiliary neural network.3 propose two objectives for computing the saliency map:• Smallest Deletion Region (SDR) considers a saliency map as an answer to the question: What is the smallest input region that could be removed and swapped with alternative reference values in order to minimize the classification score? • Smallest Supporting Region (SSR) instead poses the question: What is the smallest input region that could substituted into a fixed reference input in order to maximize the classification score?Solving these optimization problems (which we formalize below) involves a search over input masks, and necessitates reference values to be substituted inside (SDR) or outside (SSR) the masked region. These values were previously chosen heuristically, e.g., mean pixel value per channel. We instead consider inputs inside (SDR) or outside (SSR) the masked region as unobserved variables to be marginalized efficiently by sampling from a strong conditional generative model 1. We describe our approach for an image application where the input comprises pixels, but our method is more broadly applicable to any domain where the classifier is differentiable. Generative Methods Consider an input image x comprising U pixels, a class c, and a classifier with output distribution p M (c|x). Denote by r a subset of the input pixels that implies a partition of the input x = x r ∪ x \r. We refer to r as a region, although it may be disjoint. We are interested in the classifier output when x r are unobserved, which can be expressed by marginalization as We then approximate p(x r |x \r) by some generative model with distribution p G (x r |x \r) (specific implementations are discussed in section 4.1). 
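The marginalization referred to in the last sentence (its equation appears to have been lost in extraction) is p_M(c | x_\r) = E_{p(x_r | x_\r)}[ p_M(c | x_r, x_\r) ]. Below is a hedged sketch of a Monte-Carlo estimate of this quantity using a generic conditional in-filler; the infill_model.sample interface is an assumption, not the paper's API.

```python
# Hypothetical Monte-Carlo estimate of p_M(c | x_\r) with a generative in-filler.
import torch

@torch.no_grad()
def marginal_class_prob(classifier, infill_model, x, mask, target_class, num_samples=8):
    """x: [1, 3, H, W] image; mask: [1, 1, H, W] with 1 marking the unobserved region r."""
    probs = []
    for _ in range(num_samples):
        x_fill = infill_model.sample(x * (1 - mask), mask)   # x_r ~ p_G(x_r | x_\r)
        composite = x * (1 - mask) + x_fill * mask           # keep the observed pixels
        p = torch.softmax(classifier(composite), dim=-1)[0, target_class]
        probs.append(p)
    return torch.stack(probs).mean()
```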
Then, given a binary mask $z \in \{0, 1\}^U$ and the original image $x$, we define an infilling function $\phi$ as a convex mixture of the input and reference with binary weights,
$$\phi(x, z) = z \odot x + (1 - z) \odot \hat{x}, \qquad (2)$$
so that pixels with $z_u = 1$ keep their observed values and pixels with $z_u = 0$ are replaced by the reference. The classification score function $s_M(c)$ represents a score of classifier confidence on class $c$; in our experiments we use log-odds:
$$s_M(c \mid x) = \log p_M(c \mid x) - \log\big(1 - p_M(c \mid x)\big). \qquad (3)$$
SDR seeks a mask $z$ yielding low classification score when a small number of reference pixels are mixed into the masked regions. Without loss of generality, we can specify a parameterized distribution over masks $q_\theta(z)$ and optimize its parameters. The SDR problem is a minimization w.r.t. $\theta$ of
$$L_{SDR}(\theta) = \mathbb{E}_{q_\theta(z)}\big[\, s_M(c \mid \phi(x, z)) + \lambda \|1 - z\|_1 \,\big]. \qquad (4)$$
On the other hand, SSR aims to find a masked region that maximizes classification score while penalizing the size of the mask. For sign consistency with the previous problem, we express this as a minimization w.r.t. $\theta$ of
$$L_{SSR}(\theta) = \mathbb{E}_{q_\theta(z)}\big[ -s_M(c \mid \phi(x, z)) + \lambda \|z\|_1 \,\big]. \qquad (5)$$
Naively searching over all possible $z$ is exponentially costly in the number of pixels $U$. Therefore we specify $q_\theta(z)$ as a factorized Bernoulli:
$$q_\theta(z) = \prod_{u=1}^{U} \mathrm{Bern}(z_u \mid \theta_u). \qquad (6)$$
This corresponds to applying Bernoulli dropout BID14 to the input pixels and optimizing the per-pixel dropout rate. The map $\theta$ (for SSR) or its complement $1 - \theta$ (for SDR) is our saliency map: it has the same dimensionality as the input and gives, for each pixel, the probability of being marginalized out (SDR) or retained (SSR) prior to classification. We call our method FIDO because it uses a strong generative model (see section 4.1) to Fill-In the DropOut region. To optimize $\theta$ through the discrete random mask $z$, we follow prior work in computing biased gradients via the Concrete distribution BID9 BID7; we use temperature 0.1. We initialize all our dropout rates $\theta$ to 0.5 since we find it increases the convergence speed and avoids trivial solutions. We optimize using Adam BID8 with learning rate 0.05 and linearly decay the learning rate for 300 batches in all our experiments. Our PyTorch implementation takes about one minute on a single GPU to finish one image. (Figure caption: Given an input, FIDO-CA finds a minimal pixel region that preserves the classifier score following in-fill by CA-GAN BID16. BID2 (Realtime) assigns saliency coarsely around the central object, and the heuristic infill reduces the classifier score. We mask further regions (head and body) of the FIDO-CA saliency map by hand, and observe a drop in the infilled classifier score. The label for this image is "goose".) BID3 compute saliency by directly optimizing a continuous mask $z \in [0, 1]^U$ under the SDR objective, with $\hat{x}$ chosen heuristically; we call this approach Black Box Meaningful Perturbations (BBMP). We instead optimize the parameters of a Bernoulli dropout distribution $q_\theta(z)$, which enables us to sample reference values $\hat{x}$ from a learned generative model. Our method uses mini-batches of samples $z \sim q_\theta(z)$ to efficiently explore the huge space of binary masks and obtain uncertainty estimates, whereas BBMP is limited to a local search around the current point estimate of the mask $z$. See Figure 5 for a pseudo code comparison. In Appendix A.1 we investigate how the choice of algorithm affects the resulting saliency maps. To avoid unnatural artifacts in $\phi(x, z)$, BID3 and BID2 additionally included two forms of regularization: upsampling and total variation penalization.
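To make the optimization loop concrete, here is a hedged sketch of an SSR-style inner loop consistent with the reconstruction above: relaxed Bernoulli (Concrete) mask samples at temperature 0.1, generative in-filling of the dropped region, and Adam updates of the per-pixel logits. The in-filler interface and the omission of learning-rate decay are simplifications, not the authors' implementation.

```python
# Hypothetical FIDO-SSR inner loop with a Concrete relaxation of the Bernoulli mask.
import torch
from torch.distributions import RelaxedBernoulli

def fido_ssr(classifier, infill_model, x, target_class,
             lam=1e-3, steps=300, batch=4, temp=0.1, lr=0.05):
    H, W = x.shape[-2:]
    logits = torch.zeros(1, 1, H, W, requires_grad=True)      # theta = sigmoid(logits) = 0.5
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        z = RelaxedBernoulli(torch.tensor(temp),
                             logits=logits.expand(batch, -1, -1, -1)).rsample()
        x_fill = infill_model.sample(x * z, 1 - z)             # reference for dropped pixels
        composite = x * z + x_fill * (1 - z)                   # phi(x, z)
        logp = torch.log_softmax(classifier(composite), dim=-1)[:, target_class]
        score = logp - torch.log1p(-logp.exp())                # log-odds s_M(c)
        loss = (-score + lam * z.flatten(1).sum(-1)).mean()    # L_SSR
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits).detach()                      # saliency map theta
```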
Total variation penalty smoothes θ by a 2 regularization penalty between spatially adjacent θ u. To avoid losing too much signal from regularization, we use upsampling size 56 and total variation as 0.01 unless otherwise mentioned. We examine the individual effects of these regularization terms in Appendices A.2 and A.4, respectively. We first evaluate the various infilling strategies and objective functions for FIDO. We then compare explanations under several classifier architectures. In section 4.5 we show that FIDO saliency maps outperform BBMP BID3 in a successive pixel removal task where pixels are in-filled by a generative model (instead of set to the heuristic value). FIDO also outperforms the method from BID2 on the so-called Saliency Metric on ImageNet. Appendices A.1-A.6 provide further analysis, including consistency and the effects of additional regularization. We describe several methods for producing the reference valuex. The heuristics do not depend on z and are from the literature. The proposed generative approaches, which producex by conditioning on the non-masked inputs x z=0, are novel to saliency computation. Heuristics: Mean sets each pixel ofx according to its per-channel mean across the training data. Blur generatesx by blurring x with Gaussian kernel (σ = 10) BID3. Random samplesx from independent per-pixel per-channel uniform color with Gaussians (σ = 0.2). Generative Models: Local computesx as the average value of the surrounding non-dropped-out pixels x z=0 (we use a 15 × 15 window). VAE is an image completion Variational Autoencoder BID6. Using the predictive mean of the decoder network worked better than sampling. CA is the Contextual Attention GAN BID16; we use the authors' pre-trained model. Here we examine the choice of objective function between L SDR and L SSR; see Figure 6. We observed more artifacts in the L SDR saliency maps, especially when a weak in-filling method (Mean) is used. We suspect this unsatisfactory behavior is due to the relative ease of optimizing L SDR. There are many degrees of freedom in input space that can increase the probability of any of the 999 classes besides c; this property is exploited when creating adversarial examples BID15 ). Since Figure 6: Choice of objective between L SDR and L SSR. The classifier (ResNet) gives correct predictions for all the images. We show the L SDR and L SSR saliency maps under 2 infilling methods: Mean and CA. Here the red means important and blue means non-important. We find that L SDR is more susceptible to artifacts in the ing saliency maps than L SSR.it is more difficult to infill unobserved pixels that increase the probability of a particular class c, we believe L SSR encourages FIDO to find explanations more consistent with the classifier's training distribution. It is also possible that texture is easier for a conditional generative model to fit. To mitigate the effect of artifacts, we use L SSR for the remaining experiments. Figure 7: Comparison of saliency map under different infilling methods by FIDO SSR using ResNet. Heuristics baselines (Mean, Blur and Random) tend to produce more artifacts, while generative approaches (Local, VAE, CA) produce more focused explanations on the targets. Here we demonstrate the merits of using strong generative model that produces substantially fewer artifacts and a more concentrated saliency map. In Figure 7 we generate saliency maps of different infilling techniques by interpreting ResNet using L SSR with sparsity penalty λ = 10 −3. 
We observed a susceptibility of the heuristic in-filling methods (Mean, Blur, Random) to artifacts in the ing saliency maps, which may fool edge filters in the low level of the network. The use of generative in-filling (Local, VAE, CA) tends to mitigate this effect; we believe they encourage in-filled images to lie closer to the natural image manifold. To quantify the artifacts in the saliency maps by a proxy: the proportion of the MAP configuration (θ > 0.5) that lies outside of the ground truth bounding box. FIDO-CA produces the fewest artifacts by this metric FIG5 ). We use FIDO-CA to compute saliency of the same image under three classifier architectures: AlexNet, VGG and ResNet; see FIG6. Each architecture correctly classified all the examples. We observed a qualitative difference in the how the classifiers prioritize different input regions (according to the saliency maps). For example in the last image, we can see AlexNet focuses more on the body region of the bird, while Vgg and ResNet focus more on the head features. We follow BID3 and BID11 in measuring the classifier's sensitivity to successively altering pixels in order of their saliency scores. Intuitively, the "best" saliency map should compactly identify relevant pixels, so that the predictions are changed with a minimum number of altered pixels. Whereas previous works flipped salient pixel values or set them to zero, we note that this moves the classifier inputs out of distribution. We instead dropout pixels in saliency order and infill their values with our strongest generative model, CA-GAN. To make the log-odds score suppression comparable between images, we normalize per-image by the final log-odds suppression score (all pixels infilled). In FIG0 we evaluate on ResNet and carry out our scoring procedure on 1, 533 randomly-selected correctly-predicted ImageNet validation images, and report the number of pixels required to reduce the normalized log-odds score by a given percent. We evaluate FIDO under various in-filling strategies as well as BBMP with Blur and Random in-filling strategies. We put both algorithms on equal footing by using λ = 1e−3 for FIDO and BBMP (see Section A.1 for further comparisons). We find that strong generative infilling (VAE and CA) yields more parsimonious saliency maps, which is consistent with our qualitative comparisons. FIDO-CA can achieve a given normalized log-odds score suppression using fewer pixels than competing methods. While FIDO-CA may be better adapted to evaluation using CA-GAN, we note that other generative in-filling approaches (FIDO-Local and FIDO-VAE) still out-perform heuristic in-filling when evaluated with CA-CAN.We compare our algorithm to several strong baselines on two established metrics. We first evaluate whether the FIDO saliency map can solve weakly supervised localization (WSL) BID2. After thresholding the saliency map θ above 0.5, we compute the smallest bounding box containing all salient pixels. This prediction is "correct" if it has intersection-over-union (IoU) ratio over 0.5 with any of the ground truth bounding boxes. Using FIDO with various infilling methods, we report the average error rate across all 50, 000 validation images in Table 1. We evaluate the authors' pre-trained model of BID2 5, denoted as "Realtime" in the . We also include five baselines: Max (entire input as the bounding box), Center (centered bounding box occupying half the image), Grad BID12, Deconvnet , and GradCAM BID10. 
We follow the procedure of mean thresholding in BID3: we normalize the heatmap between 0 and 1 and binarize by threshold θ = αµ i where µ i is the average heatmap for image i. Then we take the smallest bounding box that encompasses all the binarized heatmap. We search α between 0 to 5 with 0.2 step size on a holdout set to get minimun WSL error. The best α are 1.2, 2 and 1 respectively. FIDO-CA frugally assigns saliency to contextually important pixels while preserving classifier confidence (Figure 4), so we do not necessarily expect our saliency maps to correlate with the typically large human-labeled bounding boxes. The reliance on human-labeled bounding boxes makes WSL suboptimal for evaluating saliency maps, so we evaluate the so-called Saliency Metric proposed by BID2, which eschews the human labeled bounding boxes. The smallest bounding box A is computed as before. The image is then cropped using this bounding box and upscaling to its original size. The Saliency Metric is log max(Area(A), 0.05) − log p(c|CropAndUpscale(x, A)), the log ratio between the bounding box area and the in-class classifier probability after upscaling. This metric represents the information concentration about the label within the bounded region. From the superior performance of FIDO-CA we conclude that a strong generative model regularizes explanations towards the natural image manifold and finds concentrated region of features relevant to the classifier's prediction. FIG0: Examples from the ablation study. We show how each of our two innovations, FIDO and generative infilling, improve from previous methods that adopts BBMP with hueristics infilling (e.g. Blur and Random). Specifically, we compare with a new variant BBMP-CA that uses strong generative in-filling CA-GAN via thresholding the continous masks: we test a variety of decreasing thresholds. We find both FIDO (searching over Bernoulli masks) and generative in-filling (CAGAN) are needed to produce compact saliency maps (the right-most column) that retain class information. See Appendix B for more qualitative examples and in section A.7 for quantitative . Can existing algorithms be improved by adding an in-filling generative model without modeling a discrete distribution over per-feature masks? And does filling in the dropped-out region suffice without an expressive generative model? We carried out a ablation study that suggests no on both counts. We compare FIDO-CA to a BBMP variant that uses CA-GAN infilling (called BBMP-CA); we also evaluate FIDO with heuristic infilling (FIDO-Blur, FIDO-Random). Because the continuous mask of BBMP does not naturally partition the features into observed/unobserved, BBMP-CA first thresholds the masked region r = I(z > τ) before generating the reference φ(x r, x \r) with a sample from CA-GAN. We sweep the value of τ as 1, 0.7, 0.5, 0.3, 0.1 and 0. We find BBMP-CA is brittle with respect to its threshold value, producing either too spread-out or stringent saliency maps (FIG0). We observed that FIDO-Blur and FIDO-Random produce more concentrated saliency map than their BBMP counterpart with less artifacts, while FIDO-CA produces the most concentrated region on the target with fewest artifacts. Each of these baselines were evaluated on the two quantitative metrics (Appendix A.7); BBMP-CA considerably underperformed relative to FIDO-CA. Because the classifier behavior is ill-defined for out-of-distribution inputs, any explanation that relies on out-of-distribution feature values is unsatisfactory. 
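For completeness, both quantitative metrics used above can be computed directly from a saliency map. The sketch below assumes an (H, W) saliency map in [0, 1], a (1, 3, H, W) input tensor, and a `classifier` returning logits; these names are illustrative rather than taken from the released code.

```python
import math
import torch
import torch.nn.functional as F

def smallest_box(saliency, thresh=0.5):
    """Tightest box around pixels with saliency above `thresh`, or None."""
    ys, xs = torch.nonzero(saliency > thresh, as_tuple=True)
    if xs.numel() == 0:
        return None
    return (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())

def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-8)

def wsl_correct(saliency, gt_boxes, thresh=0.5):
    """WSL hit: the predicted box overlaps some ground-truth box with IoU >= 0.5."""
    pred = smallest_box(saliency, thresh)
    return pred is not None and any(iou(pred, b) >= 0.5 for b in gt_boxes)

def saliency_metric(x, c, saliency, classifier, thresh=0.5):
    """log max(Area(A), 0.05) - log p(c | CropAndUpscale(x, A))."""
    box = smallest_box(saliency, thresh)
    if box is None:
        return None
    x0, y0, x1, y1 = box
    h, w = x.shape[-2:]
    area = max((x1 - x0 + 1) * (y1 - y0 + 1) / float(h * w), 0.05)
    crop = F.interpolate(x[..., y0:y1 + 1, x0:x1 + 1], size=(h, w),
                         mode="bilinear", align_corners=False)
    p = F.softmax(classifier(crop), dim=1)[0, c]
    return math.log(area) - torch.log(p).item()
```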
By modeling the input distribution via an expressive generative model, we can encourage explanations that rely on counterfactual inputs close to the natural manifold. However, our performance is then upper-bounded by the ability of the generative model to capture the conditional input density. Fortunately, this bound will improve alongside future improvements in generative modeling. We proposed FIDO, a new framework for explaining differentiable classifiers that uses adaptive Bernoulli dropout with strong generative in-filling to combine the best properties of recently proposed methods BID3 BID2 BID18. We compute saliency by marginalizing over plausible alternative inputs, revealing concentrated pixel areas that preserve label information. By quantitative comparisons we find the FIDO saliency map provides more parsimonious explanations than existing methods. FIDO provides novel but relevant explanations for the classifier in question by highlighting contextual information relevant to the prediction and consistent with the training distribution. We released the code in PyTorch at https://github. com/zzzace2000/FIDO-saliency. Here we compare FIDO with two previously proposed methods, BBMP with Blur in-filling strategy BID3 and BBMP with Random in-filling strategy BID2. One potential concern in qualitatively comparing these methods is that each method might have a different sensitivity to the sparsity parameter λ. Subjectively, we observe that BBMP requires roughly 5 times higher sparsity penalty λ to get visually comparable saliency maps. In our comparisons we sweep λ over a reasonable range for each method and show the ing sequence of increasingly sparse saliency maps FIG0 ). We use λ = 5e−4, 1e−3, 2e−3, 5e−3.We observe that all methods are prone to artifacts in the low λ regime, so the appropriate selection of this value is clearly important. Interestingly, BBMP Blur and Random respectively find artifacts with different quality: small patches and pixels for Blur and structure off-object lines for Random. FIDO with CA is arguably the best saliency map, producing fewer artifacts and concentrating saliency on small regions for the images. Here we examine the effect of learning a reduced dimensionality θ that upsampled to the full image size during optimization. We consider a variety of upsampling rates, and in a slight abuse of terminology we refer to the upsampling "size" as the square root of the dimensionality of θ before upsampling, so smaller size implies more upsampling. In FIG0, we demonstrate two examples with different upsampling size under Mean and CA infilling methods with SSR objectives. The weaker infilling strategy Mean apparently requires stronger regularization to avoid artifacts compared to CA. Note that although CA produces much less artifacts compared to Mean, it still produces some small artifacts outside of the objects which is unfavored. We then choose 56 for the rest of our experiments to balance between details and the removal of the artifacts. FIG0: Comparisons of upsampling effect in Mean and CA infilling methods with no total variation penalty. We show the upsampling regularization removes the artifacts especially in the weaker infilling method Mean. To show the stability of our method, we test our method with different random seeds and observe if they are similar. In FIG0, our method produces similar saliency map for 4 different random seeds. Here we test the effect of total variation prior regularization in FIG0. 
We find the total variation can reduce the adversarial artifacts further, while risking losing signals when the total variation penalty is too strong. Here we quantitatively compare the in-filling strategies. The generative approaches (VAE and CA) perform visually sharper images than four other baselines. Since we expect this random removal should not remove the target information, we use the classification probability of the ResNet as our metric to measure how good the infilling method recover the target prediction. We quantitatively evaluate the probability for 1, 000 validation images in FIG0. We find that VAE and CA consistently outperform other methods, having higher target probability. We also note that all the heuristic baselines (Mean, Blur, Random) perform much worse since the heuristic nature of these approaches, the images they generate are not likely under the distribution of natural images leading to the poor performance by the classifier. FIG0: Box plot of the classifier probability under different infilling with respect to random masked pixels using ResNet under 1, 000 images. We show that generative models (VAE and CA) performs much better in terms of classifier probability. A.6 BATCH SIZE EFFECTS FIG0 shows the effect of batch size on the saliency map. We found unsatisfactory for batch size less than 4, which we attribute this to the high variance in the ing gradient estimates. We show the performance of BBMP-CA with various thresholds τ on both WSL and SM on subset of 1, 000 images in TAB3. We also show more qaulitative examples in FIG14. We find BBMP-CA is relatively brittle across different thresholds of τ. Though with τ = 0.3, the BBMP-CA perform slightly better than BBMP and FIDO with heuristics infilling, it still performs substantially inferior to FIDO-CA. We also perform the flipping experiment in FIG0 and show our FIDO-CA substantially outperforms BBMP-CA with varying different thresholds. B MORE EXAMPLES FIG0 shows several more infilled counterfactual images, along with the counterfactuals produced by the method from BID2. More examples comparing the various FIDO infilling approaches can be found in FIG12 and 21. BID2; FIDO-CA is our method with CA-GAN infilling BID16. Classifier confidence p(c|x) is reported below the input and each infilled image. We hypothesize by that FIDO-CA is able to isolate compact pixel areas of contextual information. For example, in the upper right image pixels in the net region around the fish are highlighted; this context information is missing from the Realtime saliency map but are apparently relevant to the classifier's prediction. These 4 examples are bulbul, tench, junco, and ptarmigan respectively.
We compute saliency by using a strong generative model to efficiently marginalize over plausible alternative inputs, revealing concentrated pixel areas that preserve label information.
1,795
scitldr
This paper presents the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input. The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. These two settings can be easily combined which makes VarNet applicable for a wide variety of tasks. Further, VarNet has a sound probabilistic interpretation which grants us with a novel way to navigate in the latent spaces as well as means to control how the attributes are learned. We demonstrate experimentally that this model is capable of performing interesting input manipulation and that the learned attributes are relevant and interpretable. We focus on the problem of generating variations of a given input in an intended way. This means that given some input element x, which can be considered as a template, we want to generate transformed versions of x with different high-level attributes. Such a mechanism is of great use in many domains such as image edition since it allows to edit images on a more abstract level and is of crucial importance for creative uses since it allows to generate new content. More precisely, given a dataset D = {(x, m ),..., (x (N), m (N) )} of N labeled elements (x, m) ∈ X × M, where X stands for the input space and M for the metadata space, we would like to obtain a model capable of learning a relevant attribute space Ψ ⊂ R d for some integer d > 0 and meaningful attribute functions φ: X × M → Ψ that we can then use to control generation. In a great majority of the recent proposed methods BID13; BID16, these attributes are assumed to be given. We identify two shortcomings: labeled data is not always available and this approach de facto excludes attributes that can be hard to formulate in an absolute way. The novelty of our approach is that these attributes can be either learned by the model (we name them free attributes) or imposed (fixed attributes). This problem is an ill-posed one on many aspects. Firstly, in the case of fixed attribute functions φ, there is no ground truth for variations since there is no x with two different attributes. Secondly, it can be hard to determine if a learned free attribute is relevant. However, we provide empirical evidence that our general approach is capable of learning such relevant attributes and that they can be used for generating meaningful variations. In this paper, we introduce the Variation Network (VarNet), a probabilistic neural network which provides means to manipulate an input by changing its high-level attributes. Our model has a sound probabilistic interpretation which makes the variations obtained by changing the attributes statistically meaningful. As a consequence, this probabilistic framework provides us with a novel mechanism to "control" or "shape" the learned free attributes which then gives interpretable controls over the variations. This architecture is general and provides a wide range of choices for the design of the attribute function φ: we can combine both free and fixed attributes and the fixed attributes can be either continuous or discrete. Our contributions are the following:• A widely applicable encoder-decoder architecture which generalizes existing approaches BID11; BID14; BID13 The input x,x are in X, the input space and the metadata m is in M, the metadata space. 
The latent template code z * lies in Z *, the template space, while the latent variable z lies in Z the latent space. The variable u is sampled from a zero-mean unitvariance normal distribution. Finally, the features φ(x, m) are in Ψ, the attribute space. The Neural Autoregressive Flows (NAF) BID10 are represented using two arrows, one pointing to the center of the other one; this denotes the fact that the actual parameters of first neural network are obtained by feeding meta-parameters into a second neural network. The discriminator D acts on Z * × Ψ.• An easy-to-use framework: any encoder-decoder architecture can be easily transformed into a VarNet in order to provide it with controlled input manipulation capabilities,• A novel and statistically sound approach to navigate in the latent space,• Ways to control the behavior of the free learned attributes. The plan of this paper is the following: Sect. 2 presents the VarNet architecture together with its training algorithm. For better clarity, we introduce separately all the components featured in our model and postpone the discussion about their interplay and the motivation behind our modeling choices in Sect. 3 and Sect. 4 discusses about the related works. In particular, we show that VarNet provides an interesting solution to many constrained generation problems already considered in the literature. Finally, we illustrate in Appendix A the possibilities offered by our proposed model and show that its faculty to generate variations in an intended way is of particular interest. We now introduce our novel encoder-decoder architecture which we name Variation Network. Our architecture borrows principles from the traditional Variational AutoEncoder (VAE) architecture BID11 and from the Wasserstein AutoEncoder (WAE) architecture BID15; BID14. It uses an adversarially learned regularization BID5; BID13, introduces a separate latent space for templates BID0 and decomposes the attributes on an adaptive basis BID17. It can be seen as a VAE with a particular decoder network or as a WAE with a particular encoder network. Our architecture is shown in FIG0 and our training algorithm is presented in Alg. 1.We detail in the following sections the different parts involved in our model. In Sect. 2.1, we focus on the encoder-decoder part of VarNet and explain Eq., and. In Sect. 2.2, we introduce the adversarially-learned regularization whose aim is to disentangle attributes from templates (Eq. FORMULA3 and FORMULA7). Section 2.3 discusses the special parametrization that we adopted for the attribute space Ψ. Require: DISPLAYFORM0, reconstruction cost c, reproducing kernel k, batch size n 1: for Fixed number of iterations do 2:Sample x:= (x 1, . . ., x n) and m:= (m 1, . . ., m n) where DISPLAYFORM1 Compute z:= {z 1, . . ., z n} where DISPLAYFORM2 Samplex:= {x 1, . . .,x n} wherex i ∼ p(·|z i), Sample random features {ψ i} i=1.. n from feature space Ψ using ν (see Sect. 2.3) 7:Letz:= {z 1, . . 
.,z n} wherez i ∼ p(·) Discriminator training phase DISPLAYFORM0 10:Gradient ascent step on the discriminator parameters using ∇L Disc Encoder-decoder training phase 12: DISPLAYFORM0 where DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 13:Gradient ascent step on all parameters except the discriminator parameters (encoder and decoder parameters, feature function parameters, features vectors and NAF f) using ∇L EncDec 14: end for Similar to the VAE architectures, we suppose that our data x ∈ X depends on some latent variable z ∈ Z through some decoder p(x|z) parametrized by a neural network. We introduce a prior p(z) over this latent space so that the joint probability distribution is expressed as p(x, z) = p(x|z)p(z). Since the posterior distribution p(z|x) is usually intractable, an approximate posterior distribution q(z|x) parametrized by a neural network is usually introduced. The novelty of our approach is on how we write this encoder network. Firstly, we introduce an attribute space Ψ ⊂ R d, where d is the dimension of the attribute space, on which we condition the encoder which we now denote as q(·|x, ψ ∈ Ψ). More details about the attribute space Ψ are given in Sect. 2.3. For the moment, we can consider it to be a subspace of R d from which we can sample from. The objective in doing so is that decoding z ∼ q(·|x, ψ) using p(x|z) will in a samplex that is a variation of x but with features ψ. Secondly, in order to correctly reconstruct x, introduce an attribute function φ: X × M → Ψ computed from x and its metadata m with values in the attribute space Ψ. This attribute function is a deterministic neural network that will be learned during training and whose aim is to compute attributes of x. For an input (x, m) ∈ D, we want to decouple a template obtained from x from its attributes φ(x, m) computed from x and (possibly) from its metadata m. This is done by introducing another latent space Z * that we term template space together with a approximated posterior distribution q * (z * |x)parametrized by a neural network and a fixed prior p * (z *). The idea is then to compute z from z * by applying a transformation parametrized only by the feature space Ψ. In practice, this is done by using a Neural Autoregressive Flow (NAF) BID10 f ψ: Z * → Z parametrized by ψ ∈ Ψ. Neural autoregressive flows are universal density estimation models which are capable of sampling any random variable Y by applying a learned transformation over a base random variable X (Thm. 1 in BID10).Given a reconstruction loss c on X, we have the following mean reconstruction loss: DISPLAYFORM0 We regularize the latent spaces Z * and Z by adding the usual KL term appearing in the VAE Evidence Lower Bound (ELBO) on Z *: DISPLAYFORM1 and an MMD-based regularization on Z similar the one used in WAEs (see Alg. 2 in BID15): DISPLAYFORM2 where k: Z ×Z → R is an positive-definite reproducing kernel and H k the associated Reproducing Kernel Hilbert Space (RKHS) BID1.The equations FORMULA5, FORMULA6 and of Alg. 1 are estimators on a mini-batch of size n of equations FORMULA8, FORMULA9 and FORMULA10 respectively, being the unbiased U-statistic estimator of BID7. Our encoder q(z|x, ψ) thus depends exclusively on x and on the feature space Ψ. However, there is no reason, for a random attribute ψ ∈ Ψ = φ(x, m), that p(x|z) where z ∼ q(z|x, φ) generates variations of the original x with features φ. Indeed, all needed information for reconstructing x is potentially already contained in z. 
We propose to add an adversarially-learned cost on the latent variable z * to force the encoder q * to discard information about the attributes of x: Specifically, we train a discriminator neural network D: Z * ×Ψ → whose role is to evaluate the probability D(z *, ψ) that there exists a (x, m) ∈ D such that ψ = φ(x, m) and z * ∼ q * (·|x). In other words, the aim of the discriminator is to determine if the attributes ψ and the template code z * originate from the same (x, m) ∈ D or if the features ψ are randomly generated. We postpone the explanation on how we sample random features ψ ∈ Ψ in Sect. 2.3 and suppose for the moment that we have access to a distribution ν(ψ) over Ψ from which we can sample. The encoder-decoder architecture presented in Sect. 2.1 is trained to fool the discriminator: this means that for a given (x, m) ∈ D it tries to produce a template code z * ∼ q * (·|x) which contains no information about the features φ(x, m).In an optimal setting, i.e. when the discriminator is unable to match any z * ∈ Z * with a particular feature ψ ∈ Ψ, the space of template codes and the space of attributes are decorrelated. All the missing information needed to reconstruct x given z * ∼ q * (·|x) lies in the transformation f φ(x,m). Since these transformations between the template space Z * and the latent space Z only depend on the feature space Ψ, they tend to be applicable over all template codes z * and generalize well. During generation time, it is then possible to change the attributes of a sample without changing its template. The discriminator is trained to maximize DISPLAYFORM0 while the encoder-decoder architecture is trained to minimize DISPLAYFORM1 Estimators of Eq. FORMULA3 and FORMULA3 are given by Eq. FORMULA3 and FORMULA7 respectively. We adopt a particular parametrization of our attribute function φ: X × M so that we are able to sample fake attributes without the need to rely on an existing (x, m) ∈ D pair. In the following, we make a distinction between two different cases: the case of continuous free attributes and the case of fixed continuous or discrete attributes. In order to handle free attributes, which denote attributes that are not specified a priori but learned. For this, we introduce d Ψ attribute vectors v i of dimension d together with an attention module α: X × M → dΨ, where d Ψ is the intrinsic dimension of the attribute space Ψ. By denoting α i the coordinates of α, we then write our attribute function φ as DISPLAYFORM0 This approach is similar to the style tokens approach presented in BID17. The v i's are global and do not depend on a particular instance (x, m). By varying the values of the α i's between, we can then span a d Ψ -dimensional hypercube in R d which stands for our attribute space Ψ. It is worth noting that the v i's are also learned and thus constitute an adaptive basis of the attribute space. In order to define a probability distribution ν over Ψ (note that this subspace also varies during training), we are free to choose any distribution ν α over dΨ. We then sample random attributes from ν by In the continuous case, we write our attribute function DISPLAYFORM1 DISPLAYFORM2 while in the discrete case, we just consider DISPLAYFORM3 where e m is a d Ψ -dimensional embedding of the symbol m. It is important to note that even if the attributes are fixed, the v i's or the embeddings e m are learned during training. 
These two equations define a natural probability distribution ν over Ψ: DISPLAYFORM4 We now detail our objective and notably explain our particular choice concerning the regularizations on the latent spaces Z * and Z. In Sect. 3.1, we will see that these insights suggest an additional way to "control" the influence of the learned free attributes. In Sect. 3.2, we further discuss about the multiple possibilities that we have concerning the implementation of the attribute function. We list, in Sect. 3.3, the different sampling schemes of VarNet. Finally, Sect. 3.4 is dedicated to implementation details. We discuss our choice concerning the regularizations of the latent spaces and specifically why we chose a KL regularization on Z * and an MMD loss on Z.We found that using a MMD-based regularization on the template space Z * ed in approximated posterior distributions q * (·|x) with very small variances (almost deterministic mappings). One explanation of this behavior is that the MMD regularization tries to enforce that the aggregated posterior DISPLAYFORM0 ) matches the prior p *: it does not act on the individual conditional probability distributions q * (·|x). This degenerate behavior is a side-effect of our adversarial regularization since stochastic encoders have been successfully used in WAEs BID14. When using the the Kullback-Leibler regularization on Z *, this effect disappear which makes the KL regularization that we considered more suited for VarNet since it helps to keep our model out of a degenerate regime. For some applications, it can still be of interest to have a control over the variance of the conditional probability distributions q * (·|x). Similar to the approach of BID9; BID2, we propose to multiply the KL term by a scalar parameter β > 0. For β = 1, we retrieve the original formulation. For β ∈]0, 1[, decreasing the value of β from one to zero decreases the variance of the q * (·|x). We found no gain in considering values of β greater than 1. Examples where this tuning provides an interesting application are given in Sect. A.2.We now consider the regularization over Z. This regularization is in fact superfluous and could be removed. However, we noticed that adding this MMD regularization helped obtaining better reconstruction losses. In this section, we focus on the parametrization of the attribute function φ: X × Z → R d and propose some useful use cases. The formulation of Sect. 2.3 is in fact too restrictive and considered only one attribute function. It is in fact possible to mix different attributes functions by simply concatenating the ing vectors. By doing so, we can then combine free and fixed attributes in a natural way but also consider different attention modules α. We can indeed use neural networks with different properties similarly to what is done in BID4 but also consider different distributions over the attention vectors α i.It is important to note that the free attributes presented in Sect. 2.3.1 can only capture global attributes, which are attributes that are relevant for all elements of the dataset D. In the presence of discrete labels m, it can be interesting to consider label-dependent free attributes, which are attributes specific to a subset of the dataset. In this case, the attribute function φ can be written as DISPLAYFORM0 where e m,i designates the i th attribute vector of the label m. 
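A minimal PyTorch sketch of these attribute functions is given below: free attributes as an attention-weighted sum of learned attribute vectors, and the label-dependent variant with one learned basis per label. The attention network producing α(x, m) is a placeholder MLP over a flattened input, not the architecture used in the experiments.

```python
import torch
import torch.nn as nn

class FreeAttributes(nn.Module):
    """phi(x, m) = sum_i alpha_i(x, m) * v_i with alpha_i in [0, 1]."""
    def __init__(self, input_dim, d_psi, d):
        super().__init__()
        self.v = nn.Parameter(torch.randn(d_psi, d))   # learned attribute vectors
        self.attention = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(),
                                       nn.Linear(64, d_psi), nn.Sigmoid())
    def forward(self, x, m=None):
        alpha = self.attention(x.flatten(1))            # (batch, d_psi) in [0, 1]^d_psi
        return alpha @ self.v                           # (batch, d) point in Psi
    def sample_nu(self, n):
        # nu: push-forward of uniform alpha through the learned basis.
        return torch.rand(n, self.v.shape[0]) @ self.v

class LabelDependentAttributes(nn.Module):
    """phi(x, m) = sum_i alpha_i(x, m) * e_{m,i}: a separate basis per label m."""
    def __init__(self, input_dim, n_labels, d_psi, d):
        super().__init__()
        self.e = nn.Parameter(torch.randn(n_labels, d_psi, d))
        self.attention = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(),
                                       nn.Linear(64, d_psi), nn.Sigmoid())
    def forward(self, x, m):
        alpha = self.attention(x.flatten(1))            # (batch, d_psi)
        basis = self.e[m]                               # (batch, d_psi, d)
        return torch.einsum("bi,bid->bd", alpha, basis)
```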
With all these possibilities at hand, it is possible to devise numerous applications in which the notions of template and attribute of an input x may have diverse interpretations. Our choice of using a discriminator over Ψ instead of, for instance, over the values of α themselves allow to encompass within the same framework discrete and continuous fixed attributes. This makes the combinations of such attributes functions natural. We quickly review the different sampling schemes of VarNet. We believe that this wide range of usages makes VarNet a promising model for a wide range of applications. We can for instance:• generate random samplesx from the estimated dataset distribution: DISPLAYFORM0 • samplex with given attributes ψ: DISPLAYFORM1 • generate a variations of an input x with attributes ψ: DISPLAYFORM2 • generate random variations of an input x: x ∼ p(·|z) with z = f ψ (z *) where z * ∼ q * (·|x) and ψ ∼ ν(·).Note that for sampling generate random samplesx, we do that by sampling z * ∼ p * (·) from the prior, ψ ∼ ν(·) from the distribution of the attributes and then decoding z = f ψ (z *) decoding it using the decoder p(·|z) instead of just decoding a z * ∼ p * (·) sampled from the prior. This is due to the fact that, as already mentioned, this MMD regularization is not an essential element of the VarNet architecture: its role is more about fixing the "scale" of the Z space rather than enforcing that the aggregated posterior distribution exactly matches the prior. In the case of continuous attributes of the form Eq. FORMULA3 or FORMULA3, VarNet also provides a new way to navigate in the latent space Z. Indeed, for a given template latent code z *, it is possible to move continuously in the latent space Z by simply changing continuously the values of the α i and then DISPLAYFORM3 The image by the above transformation in the Z space of the d Ψ dimensional hypercube d ψ constitutes the space of variations of the template z *. Since our feature space bears a measure ν, this space of variations has a probabilistic interpretation. To the best of our knowledge, we think that it is the first time that a meaningful probabilistic interpretation about the displacement in the latent space in terms of attributes is given: We'll see in Appendix A.3 that two similar variations applied on different templates can induce radically different displacements in the latent space Z. We hope that this new technique will be useful in many applications and help go beyond the traditional (but unjustified) linear or spherical interpolations BID18. Our architecture is general and any decoder and encoder networks can be used. We chose to use a NAF 1 for our encoder network. This choice has the advantage of using a more expressive posterior distribution compared to the often-used diagonal Gaussian posterior distributions. Our priors p * and p are zero-mean unit-variance Gaussian distributions. For the MMD regularization, we used the parameters used in BID15 (λ = 10 and k(x, y) = C/(C+ x−y 2 2) the inverse multiquadratics kernel with C = 2dim(Z)). For the scalar coefficient γ, we found that a value of 10 worked well on all our experiments. For the sampling of the α values in the free attributes case, we considered ν α to be a uniform distribution over d ψ. In the fixed attribute case, we simply obtain a random sample {ψ i} n i=1 by shuffling the already computed batches of {φ(x i, m i)} n i=1 (lines 4 and 6 in Alg.1). 
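A sketch of the MMD regularizer on Z with these parameters (inverse multiquadratics kernel, C = 2 dim(Z), weight λ = 10) and the unbiased U-statistic estimator is given below; it assumes the encoded codes and the prior samples form batches of equal size.

```python
import torch

def imq_kernel(a, b, C):
    # Inverse multiquadratics kernel k(x, y) = C / (C + ||x - y||^2).
    return C / (C + torch.cdist(a, b) ** 2)

def mmd(z, z_prior, lam=10.0):
    n, dim = z.shape
    C = 2.0 * dim
    k_zz = imq_kernel(z, z, C)
    k_pp = imq_kernel(z_prior, z_prior, C)
    k_zp = imq_kernel(z, z_prior, C)
    # U-statistic: drop the diagonal terms of the within-sample kernel matrices.
    off_diag = lambda k: (k.sum() - k.diag().sum()) / (n * (n - 1))
    return lam * (off_diag(k_zz) + off_diag(k_pp) - 2.0 * k_zp.mean())

# Usage sketch: z sampled from the encoder q(z|x, phi(x, m)), z_prior ~ N(0, I).
# z_prior = torch.randn_like(z); loss = loss + mmd(z, z_prior)
```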
The Variation Network generalizes many existing models used for controlled input manipulation by providing a unified probabilistic framework for this task. We now review the related literature and discuss the connections with VarNet. The problem of controlled input manipulation has been considered in the Fader networks paper BID13, where the authors are able to modify in a continuous manner the attributes of an input image. Similar to us, this approach uses an encoder-decoder architecture together with an adversarial loss used to decouple templates and attributes. The major difference with VarNet is that this model has a deterministic encoder which limits the sampling possibilities as discussed in Sect. A.2. Also, this approach can only deal with fixed attributes while VarNet is able to also learn meaningful free attributes. In fact, VAEs BID11,; BID14 and Fader networks can be seen as special cases of VarNet. Recently, the Style Tokens paper BID17 proposed a solution to learn relevant free attributes in the context of text-to-speech. The similarities with our approach is that the authors condition an encoder model on an adaptive basis of style tokens (what we called attribute space in this work). VarNet borrows this idea but cast it in a probabilistic framework, where a distribution over the attribute space is imposed and where the encoder is stochastic. Our approach also allows to take into account fixed attributes, which we saw can help shaping the free attributes. Traditional ways to explore the latent space of VAEs is by doing linear (or spherical BID18) interpolations between two points. However, there are two major caveats in this approach: the requirement of always needing two points in order to explore the latent space is cumbersome and the interpolation scheme is arbitrary and bears no probabilistic interpretation. Concerning the first point, a common approach is to find, a posteriori, directions in the latent space that accounts for a particular change of the (fixed) attributes BID16. These directions are then used to move in the latent space. Similarly, BID8 proposes a model where these directions of interest are given a priori. Concerning the second point, BID12 proposes to compute interpolation paths minimizing some energy functional which in interpolation curves rather than interpolation straight lines. However, this interpolation scheme is computationally demanding since an optimization problem must be solved for each point of the interpolation path. Another trend in controlled input manipulation is to make a posteriori analysis on a trained generative model BID6; BID0; BID16 BID3 using different means. One possible advantage of these methods compared to ours is that different attribute manipulations can be devised after the training of the generative model. But, these procedures are still costly and so provide any real-time applications where a user could provide on-the-fly the attributes they would like to modify. One of these approaches BID3 consists in using the trained decoder to obtained a mapping Z → X and then performing gradient descent on an objective which accounts for the constraints or change of the attributes. Another related approach proposed in BID6 consists in training a Generative Adversarial Network which learns to move in the vicinity of a given point in the latent space so that the decoded output enforces some constraints. 
The major difference of these two approaches with our work is that these movements are done in a unique latent space, while in our case we consider separate latent spaces. But more importantly, these approaches implicitly consider that the variation of interest lies in a neighborhood of the provided input. In BID0 the authors introduce an additional latent space called interpretable lens used to interpret the latent space of a generative model. This space shares similarity with our latent space Z * and they also propose a joint optimization for their model, where the encoder-decoder architecture and the interpretable lens are learned jointly. The difference with our approach is that the authors optimize an "interpretability" loss which requires labels and still need to perform a posteriori analysis to find relevant directions in the latent space. We presented the Variation Network, a generative model able to vary attributes of a given input. The novelty is that these attributes can be fixed or learned and have a sound probabilistic interpretation. Many sampling schemes have been presented together with a detailed discussion and examples. We hope that the flexibility in the design of the attribute function and the simplicity, from an implementation point of view, in transforming existing encoder-decoder architectures (it suffices to provide the encoder and decoder networks) will be of interest in many applications. For future work, we would like to extend our approach in two different ways: being able to deal with partially-given fixed attributes and handling discrete free attributes. We also want to investigate the of use stochastic attribute functions φ. Indeed, it appeared to us that using deterministic attribute functions was crucial and we would like to go deeper in the understanding of the interplay between all VarNet components. We now apply VarNet on MNIST in order to illustrate the different sampling schemes presented in Sect. A.In all these experiments, we choose to use a simple MLP with one hidden layer of size 400 for the encoder and decoder networks. We present and comment for different attribute functions and different sampling schemes. The different attribute functions we considered are• 1Free: one-dimensional free attribute space (Eq. FORMULA3 We display in Figure 2 samples obtained with the sampling procedures Eq. FORMULA3 From Fig. 2b, we see that the fixed label attribute have clearly been taken into account, but it can be hard to grasp which high-level attribute the free attribute function has captured. In order to visualize this, we plot in Fig. 3 a visualization of the space of variations spanned by a given template latent code z * . From these plots, it appears that the attribute vector encodes a notion of rotation meaningful for this digit dataset and it is interesting to note how different templates produce different "writing styles". Free attributes can thus be particularly interesting for capturing high-level features, such like rotation, that cannot be described in an absolute way or which are ill-defined. By observing carefully Fig. 3, we note that the variations generated by varying the free attribute applies to all digit classes, irrespective of their label. In such a case, it is impossible to obtain different "writing conventions" for the same digit (like cursive/printscript style for the digit "2") by only modifying the attributes. 
We show in FIG3 that, by considering free label-dependent attributes, we are able to smoothly go from one "writing convention" to the other one. We can gain further insight about the notion of template and attribute using the sampling scheme of Eq.. This sampling exploits the stochasticity of the encoder q * (·|x) in order to generate variations of a given input x using a fixed attribute ψ. An example of such variations is given in FIG4. The underlying idea is that, even for a given attribute ψ, there are multiple ways to generate variations of x with attributes ψ. We believe that this stochasticity is essential since, in many applications, there should not exist only one way to make variations. The parametrization of the attribute function has a crucial effect on the high-level features that they will able to capture. For instance, if we do not provide any label information, the information present in the template and the information contained in the attribute function can differ drastically. FIG5 show different space of variations where no label information is provided. The concepts captured in these cases are then related to thinness/roundness. Our intuition is that the free attributes capture the most general attributes of the dataset. For some applications, variation spaces such as the one displayed in FIG5, 6b or 6d are not desirable because they may tend to move too "far away" from the original input. As discussed in Sect. 3.1, it is possible to reduce how "spread" the spaces of variation are by modifying the β parameter multiplying the KL term in the objective Eq.. An example of such a variation space is displayed in FIG5.From all examples above, we see that our architecture is indeed capable of decoupling templates from learned attributes and that we have two ways of controlling the free attributes that are learned: by modifying the KL term in the objective Eq. and by carefully devising the attribute function. Indeed, the learned free attributes can capture different high-level features depending on the other fixed attributes they are coupled with. FIG5 and 6c display the space of variations using the 2Free attribute function for two different input. FIG5 display the space of variations using the 1Free attribute function. FIG5 was generated using a model trained with a low KL penalty (β = 0.1) VarNet proposes a novel solution to explore the latent spaces. Usual techniques to navigate in the space of VAEs such as interpolations or the use of attribute vectors (distinct from what we called attribute vectors in this work) are mostly intrinsically-based on moving using straight lines. This assumes that the underlying geometry is euclidean, which is not the case, and forgets about the probabilistic framework. Also, computing attribute vectors requires data with binary labels which are not always available. On the contrary, our approach grants a sound probabilistic interpretation of the attributes and the variations they generate. Indeed, when the discriminator is fooled by the encoder-decoder architecture, the attributes are distributed according to ν which has a simple interpretation (it is the push-forward of the ν α distribution which is considered to be a uniform distribution in all these examples). Also, thinking about variations as a subspace of smaller dimension than the whole latent space makes much sense for us. Figure 7 shows a visualization in the latent space Z of the variation spaces spanned by moving with constant steps in the attribute space Ψ. 
Two key elements appear: constant steps in the attribute space do not induce constant steps in the Z space and variation spaces are extremely diverse (they are not translated versions of a unique variation space). For us, this advocates for the fact that displacements in the latent spaces using straight lines have a priori no meaningful interpretation: the same change of attributes for two different inputs can lead to radically different displacements in the latent space. More generally, our proposition of parametrizing attribute-related displacements in a latent space using flows conditioned on a simpler space is appealing from a conceptual point of view since we do not mix, in the same latent space, its probabilistic interpretation given by the prior and its ability to grant meaningful ways to vary attributes.
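A sketch of this navigation scheme is given below, with `encode_template`, `flow`, `decode` and `attribute_vectors` standing in for the trained q*(z*|x), the attribute-conditioned NAF f_ψ, the decoder p(x|z) and the learned basis v_i; these are placeholder names, not identifiers from the released code.

```python
import torch

@torch.no_grad()
def variation_grid(x, encode_template, flow, decode, attribute_vectors, steps=8):
    z_star = encode_template(x)                  # a single template code for x
    d_psi = attribute_vectors.shape[0]
    outputs = []
    # Constant steps in the attribute hypercube [0, 1]^{d_psi}; for simplicity
    # all coordinates are moved together here.
    for a in torch.linspace(0.0, 1.0, steps):
        alpha = torch.full((1, d_psi), a.item())
        psi = alpha @ attribute_vectors          # point in the attribute space Psi
        z = flow(z_star, psi)                    # f_psi(z*): displacement in Z
        outputs.append(decode(z))                # variation of x with attributes psi
    return outputs
```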
The Variation Network is a generative model able to learn high-level attributes without supervision that can then be used for controlled input manipulation.
1,796
scitldr
Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are those due to a common cause (confounding) vs direct or indirect effects. In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns. Given documents and their initial labels, we task humans with revise each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes. Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa. Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain. While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are insensitive to this signal. We will publicly release both datasets. What makes a document's sentiment positive? What makes a loan applicant creditworthy? What makes a job candidate qualified? What about a photograph truly makes it depict a dolphin? Moreover, what does it mean for a feature to be relevant to such a determination? Statistical learning offers one framework for approaching these questions. First, we swap out the semantic question for a more readily answerable associative question. For example, instead of asking what comprises a document's sentiment, we recast the question as which documents are likely to be labeled as positive (or negative)? Then, in this associative framing, we interpret as relevant, those features that are most predictive of the label. However, despite the rapid adoption and undeniable commercial success of associative learning, this framing seems unsatisfying. Alongside deep learning's predictive wins, critical questions have piled up concerning spuriousness, artifacts, reliability, and discrimination, that the purely associative perspective appears ill-equipped to answer. For example, in computer vision, researchers have found that deep neural networks rely on surface-level texture or clues in the image's to recognize foreground objects even when that seems both unnecessary and somehow wrong: the beach is not what makes a seagull a seagull. And yet researchers struggle to articulate precisely why models should not rely on such patterns. In NLP, these issues have emerged as central concerns in the literature on annotation artifacts and bias (in the societal sense). Across myriad tasks, researchers have demonstrated that models tend to rely on spurious associations (; ; ;). Notably, some models for question-answering tasks may not actually be sensitive to the choice of the question , while in Natural Language Inference (NLI), classifiers trained on hypotheses only (vs hypotheses and premises) perform surprisingly well . However, papers seldom make clear what, if anything, spuriousness means within the standard supervised learning framework. ML systems are trained to exploit the mutual information between features and a label to make accurate predictions. Statistical learning does not offer a conceptual distinction between between spurious and non-spurious associations. Causality, however, offers a coherent notion of spuriousness. 
Spurious associations owe to common cause rather than to a (direct or indirect) causal path. We might consider a factor of variation to be spuriously correlated with a label of interest if intervening upon it (counterfactually) would not impact the applicability of the label or vice versa. While our paper does not rely on the mathematical machinery of causality, we draw inspiration from the underlying philosophy to design a new dataset creation procedure in which humans counterfactually augment datasets. Returning to NLP, even though the raw data does not come neatly disentangled into manipulable factors, people nevertheless speak colloquially of editing documents to manipulate specific aspects . For example, the following interventions seem natural: (i) Revise the letter to make it more positive; (ii) Edit the second sentence so that it appears to contradict the first. The very notion of targeted revisions like (i) suggests a generative process in which the sentiment is but one (manipulable) cause of the final document. These edits might be thought of as intervening on sentiment while holding all upstream features constant. However even if some other factor has no influence on sentiment, if they share some underlying common cause (confounding), then we might expect aspects of the final document to be predictive of sentiment owing to spurious association. In this exploratory paper, we design a human-in-the-loop system for counterfactually manipulating documents. Our hope is that by intervening only upon the factor of interest, we might disentangle the spurious and non-spurious associations, yielding classifiers that hold up better when spurious associations do not transport out of sample. We employ crowd workers not to label documents, but rather to edit them, manipulating the text to make a targeted (counterfactual) class apply. For sentiment analysis, we direct the worker: revise this negative movie review to make it positive, without making any gratuitous changes. We might regard the second part of this directive as a sort of least action principle, ensuring that we perturb only those spans necessary to alter the applicability of the label. For NLI, a 3-class classification task (entailment, contradiction, neutral), we ask the workers to modify the premise while keeping the hypothesis intact, and vice versa, seeking two sets of edits corresponding to each of the (two) counterfactual classes. Using this platform, we collect thousands of counterfactually-manipulated examples for both sentiment analysis and NLI, extending the IMDb and SNLI datasets, respectively. The is two new datasets (each an extension of a standard resource) that enable us to both probe fundamental properties of language and train classifiers less reliant on spurious signal. We show that classifiers trained on original IMDb reviews fail on counterfactually-revised data and vice versa. We further show that spurious correlations in these datasets are picked up by even linear models, however, augmenting the revised examples breaks up these correlations (e.g., genre ceases to be predictive of sentiment). For a Bidirectional LSTM trained on IMDb reviews, classification accuracy goes down from 79.3% to 55.7% when evaluated on original vs revised reviews. The same classifier trained on revised reviews achieves an accuracy of 62.5% on original reviews compared to 89.1% on their revised counterparts. These numbers go to 81.7% and 92.0% respectively when the classifier is retrained on the combined dataset. 
Similar behavior is observed for linear classifiers. We discovered that BERT is more resilient to such drops in performance on sentiment analysis. Despite that, it appears to rely on spurious associations in SNLI hypotheses identified by. We show that if fine-tuned on SNLI sentence pairs, BERT fails on pairs with revised premise and vice versa, experiencing more than a 30 point drop in accuracy. However, fine-tuned on the combined set, it performs much better across all datasets. Similarly, a Bi-LSTM trained on hypotheses alone can accurately classify 69% of the SNLI dataset but performs worse than the majority class baseline when evaluated on the revised dataset. When trained on hypotheses only from the combined dataset, its performance is expectedly worse than simply selecting the majority class on both SNLI as well as the revised dataset. Several papers demonstrate cases where NLP systems appear not to learn what humans consider to be the difference that makes the difference. For example, otherwise state-of-the-art models have been shown to be vulnerable to synthetic transformations such as distractor phrases , to misclassify paraphrased task ) and to fail on template-based modifications . demonstrate that simply replacing words by synonyms or hypernyms, which should not alter the applicable label, nevertheless breaks ML-based NLI systems. and show that classifiers correctly classified the hypotheses alone in about 69% of SNLI corpus. They further discover that crowd workers adopted specific annotation strategies and heuristics for data generation. identify similar issues exist with automatically-constructed benchmarks for question-answering . discover that reported numbers in question-answering benchmarks could often be achieved by the same models when restricted to be blind either to the question or to the passages.; and showed how imbalances in training data lead to unintended bias in the ing models, and, consequently, potentially unfair applications. substitute words to test the behavior of sentiment analysis algorithms in the presence of stylistic variation, finding that similar word pairs produce significant differences in sentiment score. Several papers explore richer feedback mechanisms for classification. Some ask annotators to highlight rationales, spans of text indicative of the label (; ;). For each document, remove the rationales to generate contrast documents, learning classifiers to distinguish original documents from their contrasting counterparts. While this feedback is easier to collect than ours, how to leverage it for training deep NLP models, where features are not neatly separated, remains less clear. programmatically alter text to invert gender bias and combined the original and manipulated data yielding gender-balanced dataset for learning word embeddings. In the simplest experiments, they swap each gendered word for its other-gendered counterpart. For example, the doctor ran because he is late becomes the doctor ran because she is late. However, they do not substitute names even if they co-refer to a gendered pronoun. Building on their work, describe a data augmentation approach for mitigating gender stereotypes associated with animate nouns for morphologically-rich languages like Spanish and Hebrew. They use a Markov random field to infer how the sentence must be modified while altering the grammatical gender of particular nouns to preserve morpho-syntactic agreement. 
In contrast, describe a method for probabilistic automatic in-place substitution of gendered words in a corpus. Unlike Lu et al., they propose an explicit treatment of first names by pre-defining name-pairs for swapping, thus expanding Lu et al.'s list of gendered word pairs significantly. We use Amazon's Mechanical Turk crowdsourcing platform to recruit editors to counterfactually revise each dataset. To ensure high quality of the collected data, we restricted the pool to U.S. residents that had already completed at least 500 HITs and had an over 97% HIT approval rate. For each HIT, we conducted pilot tests to identify appropriate compensation per assignment, receive feedback from workers and revise our instructions accordingly. A total of 713 workers contributed throughout the whole process, of which 518 contributed edits reflected in the final datasets. The original IMDb dataset consists of 50000 reviews divided equally across train and test splits. To keep the task of editing from growing unwieldy, we filter out the longest 20% of reviews, leaving 20000 reviews in the train split from which we randomly sample 2500 reviews, enforcing a 50:50 class balance. Following revision by the crowd workers, we partition this dataset into train/validation/test splits containing 1707, 245 and 488 examples, respectively. We present each review to two workers, instructing to revise the review such that (a) the document remains coherent and (b) the new label (given) accurately describes the revised document. Moreover, we instruct the workers not to make gratuitous modifications. Over a four week period, we manually inspected each generated review and rejected the ones that were outright wrong (sentiment was still the same or the review was a spam). After review, we rejected roughly 2% of revised reviews. For 60 original reviews, we did not approve any among the counterfactually-revised counterparts supplied by the workers. To construct the new dataset, we chose one revised review (at random) corresponding to each original review. In qualitative analysis, we identified eight common patterns among the edits (Table 2). For each review, having access to its counterfactually-revised counterpart enables us to isolate which parts the review humans believe are truly indicative of sentiment. These are the parts that were removed, replaced, or inserted into the original review to generate a new review that has the opposite sentiment. We identify the position indices where such replacements or insertions were made and create a binary vector representing the edits in each original review. To analyze inter-editor agreement, we compute the Jaccard similarity between the vectors corresponding to each revised review (Table 1). We observe that there is a higher agreement between two workers on smaller reviews and it decreases with the length of the review. Natural Language Inference Unlike sentiment analysis, SNLI is 3-way classification task, with inputs consisting of two sentences, a premise and a hypothesis and the three possible labels being entailment, contradiction, and neutral. The label is meant to describe the relationship between the facts stated in each sentence. We randomly sampled 1750, 250, and 500 pairs from the train, validation, and test sets of SNLI respectively, constraining the new data to have balanced classes. In Table 2: Most prominent categories of edits performed by humans for sentiment analysis (Original/Revised, in order). Red spans were replaced by Blue spans. 
Recasting fact as hoped for The world of Atlantis, hidden beneath the earth's core, is fantastic The world of Atlantis, hidden beneath the earth's core is supposed to be fantastic Suggesting sarcasm thoroughly captivating thriller-drama, taking a deep and realistic view thoroughly mind numbing "thriller-drama", taking a "deep" and "realistic" (who are they kidding?) view Inserting modifiers The presentation of simply Atlantis' landscape and setting The presentation of Atlantis' predictable landscape and setting Replacing modifiers "Election" is a highly fascinating and thoroughly captivating thriller-drama "Election" is a highly expected and thoroughly mind numbing "thriller-drama" Inserting phrases Although there's hardly any action, the ending is still shocking. Although there's hardly any action (or reason to continue watching past 10 minutes), the ending is still shocking. Diminishing via qualifiers which, while usually containing some reminder of harshness, become more and more intriguing. which, usually containing some reminder of harshness, became only slightly more intriguing. Granted, not all of the story makes full sense, but the film doesn't feature any amazing new computer-generated visual effects. Granted, some of the story makes sense, but the film doesn't feature any amazing new computer-generated visual effects. Changing ratings one of the worst ever scenes in a sports movie. 3 stars out of 10. one of the wildest ever scenes in a sports movie. 8 stars out of 10. one HIT, we asked workers to revise the hypothesis while keeping the premise intact, seeking edits corresponding to each of the two counterfactual classes. We refer to this data as Revised Hypothesis (RH). In another HIT, we asked workers to revise the original premise, while leaving the original hypothesis intact, seeking similar edits, calling it Revised Premise (RP). Following data collection, we employed a different set of workers to verify whether the given label accurately described the relationship between each premise-hypothesis pair. We presented each pair to three workers and performed a majority vote. When all three reviewers were in agreement, we approved or rejected the pair based on their decision, else, we verified the data ourselves. Finally, we only kept premise-hypothesis pairs for which we had valid revised data in both RP and RH, corresponding to both counterfactual labels. As a , we discarded ≈ 9% data. RP and RH, each comprised of 3332 pairs in train, 400 in validation, and 800 in test, leading to a total of 6664 pairs in train, 800 in validation, and 1600 in test in the revised dataset. In qualitative analysis, we identified some common patterns among hypothesis and premise edits (Table 3, 4). We collected all data after IRB approval and measured the time taken to complete each HIT to ensure that all workers were paid more than the federal minimum wage. During our pilot studies, workers spent roughly 5 minutes per revised review, and 4 minutes per revised sentence (for NLI). We paid workers $0.65 per revision, and $0.15 per verification, totalling $10778.14 for the study. Our experiments rely on the following five models: Support Vector Machines (SVMs), Naïve Bayes (NB) classifiers, Bidirectional Long Short-Term Memory Networks (Bi-LSTMs; Graves & Schmid- huber, 2005), ELMo models with LSTM, and fine-tuned BERT models . For brevity, we discuss only implementation details necessary for reproducibility. We use scikit-learn implementations of SVMs and Naïve Bayes for sentiment analysis. 
We train these models on TF-IDF bag of words feature representations of the reviews. We identify parameters for both classifiers using grid search conducted over the validation set. Bi-LSTM When training Bi-LSTMs for sentiment analysis, we restrict the vocabulary to the most frequent 20000 tokens, replacing out of vocabulary tokens by UNK. We fix the maximum input length at 300 tokens and pad smaller reviews. Each token is represented by a randomly-initialized 50-dimensional embedding. Our model consists of a bidirectional LSTM (hidden size 50) with recurrent dropout (probability 0.5) and global max-pooling following the embedding layer. To generate output, we feed this (fixed-length) representation through a fully-connected hidden layer with ReLU activation (hidden size 50), and then a fully-connected output layer with softmax activation. We train all models for a maximum of 20 epochs using Adam , with a learning rate of 1e-3 and a batch size of 32. We apply early stopping when validation loss does not decrease for 5 epochs. We also experimented with a larger Bi-LSTM which led to overfitting. We use the architecture described in to evaluate hypothesis-only baselines. 1 ELMo-LSTM We compute contextualized word representations (ELMo) using character-based word representations and bidirectional LSTMs . The module outputs a 1024- dimensional weighted sum of representations from the 3 Bi-LSTM layers used in ELMo. We represent each word by a 128-dimensional embedding concatenated to the ing 1024-dimensional ELMo representation, leading to a 1152-dimensional hidden representation. Following Batch Normalization, this is passed through an LSTM (hidden size 128) with recurrent dropout (probability 0.2). The output from this LSTM is then passed to a fully-connected output layer with softmax activation. We train this model for up to 20 epochs with same early stopping criteria as for Bi-LSTM, using the Adam optimizer with a learning rate of 1e-3 and a batch size of 32. BERT We use an off-the-shelf uncased BERT Base model, fine-tuning for each task. 2 To account for BERT's sub-word tokenization, we set the maximum token length is set at 350 for sentiment analysis and 50 for NLI. We fine-tune BERT up to 20 epochs with same early stopping criteria as for Bi-LSTM, using the BERT Adam optimizer with a batch size of 16 (to fit on a Tesla V-100 GPU). We found learning rates of 5e-5 and 1e-5 to work best for sentiment analysis and NLI respectively. We find that for sentiment analysis, linear models trained on the original 1.7k reviews achieve 80% accuracy when evaluated on original reviews but only 51% (level of random guessing) on revised reviews (Table 5). Linear models trained on revised reviews achieve 91% accuracy on revised reviews but only 58.3% on the original test set. We see similar pattern for Bi-LSTMs where accuracy drops substantially in both directions. Interestingly, while BERT models suffer drops too, they are less pronounced, perhaps a benefit of the exposure to a larger dataset where the spurious patterns may not have held. Classifiers trained on combined datasets perform well on both, often within ≈ 3 pts of models trained on the same amount of data taken only from the original distribution. Thus, there may be a price to pay for breaking the reliance on spurious associations, but it may not be substantial. We also conduct experiments to evaluate our sentiment models vis-a-vis their generalization out-ofsample to new domains. 
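Before turning to the out-of-domain evaluation, the Bi-LSTM sentiment baseline described above can be sketched as follows (tf.keras, using the stated vocabulary size, sequence length, layer sizes, dropout, optimizer, and early stopping; the binary softmax output layer and other unstated details are assumptions, not the authors' released code).

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_VOCAB, MAX_LEN, EMB_DIM, HIDDEN = 20000, 300, 50, 50

model = tf.keras.Sequential([
    layers.Embedding(MAX_VOCAB, EMB_DIM),                 # randomly initialized embeddings
    layers.Bidirectional(
        layers.LSTM(HIDDEN, recurrent_dropout=0.5, return_sequences=True)),
    layers.GlobalMaxPooling1D(),                          # global max-pooling over time steps
    layers.Dense(HIDDEN, activation="relu"),
    layers.Dense(2, activation="softmax"),                # positive vs. negative sentiment
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
# Reviews are truncated/padded to MAX_LEN token ids before training, e.g.:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=20, batch_size=32, callbacks=[early_stop])
```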
We evaluate models on Amazon reviews on data aggregated over six genres: beauty, fashion, appliances, giftcards, magazines, and software, the Twitter sentiment dataset , 3 and Yelp reviews released as part of the Yelp dataset challenge. We show that in almost all cases, models trained on the counterfactually-augmented IMDb dataset perform better than models trained on comparable quantities of original data. To gain intuition about what is learnable absent the edited spans, we tried training several models on passages where the edited spans have been removed from training set sentences (but not test set). SVM, Naïve Bayes, and Bi-LSTM achieve 57.8%, 59.1%, 60.2% accuracy, respectively, on this task, suggesting that there is substantial signal in these potentially immaterial sections. However, BERT performs worse than random guessing. In one simple demonstration of the benefits of our approach, we note that seemingly irrelevant words such as: romantic, will, my, has, especially, life, works, both, it, its, lives and gives (correlated with positive sentiment), and horror, own, jesus, cannot, even, instead, minutes, your, effort, script, seems and something (correlated with negative sentiment) are picked up as high-weight features by linear models trained on either original or revised reviews as top predictors. However, because humans never edit these during revision owing to their lack of semantic relevance, combining the original and revised datasets breaks these associations and these terms cease to be predictive of sentiment (Fig 4). Models trained on original data but at the same scale as combined data are able to perform slightly better on the original test set but still fail on the revised reviews. All models trained on 19k original reviews receive a slight boost in accuracy on revised data (except Naïve Bayes), yet their performance significantly worse compared to specialized models. Retraining models on a combination of the original 19k reviews with revised 1.7k reviews leads to significant increases in accuracy for all models on classifying revised reviews, while slightly improving the accuracy on classifying the original reviews. This underscores the importance of including counterfactuallyrevised examples in training data. Natural Language Inference Fine-tuned on 1.67k original sentence pairs, BERT achieves 72.2% accuracy on SNLI dataset but it is only able to accurately classify 39.7% sentence pairs from the RP set (Table 7). Fine-tuning BERT on the full SNLI training set (500k sentence pairs) in similar behavior. Fine-tuning it on RP sentence pairs improves its accuracy to 66.3% on RP but causes a drop of roughly 20 pts on SNLI. On RH sentence pairs, this in an accuracy of 67% on RH and 71.9% on SNLI test set but 47.4% on the RP set. To put these numbers in context, each individual hypothesis sentence in RP is associated with two labels, each in the presence of a different premise. A model that relies on hypotheses only would at best perform slightly better than choosing the majority class when evaluated on this dataset. However, fine-tuning BERT on a combination of RP and RH leads to consistent performance on all datasets as the dataset design forces models to look at both premise and hypothesis. Combining original sentences with RP and RH improves these numbers even further. 
We compare this with the performance obtained by fine-tuning it on 8.3k sentence pairs sampled from SNLI training set, and show that while the two perform roughly within 4 pts of each other when evaluated on SNLI, the former outperforms latter on both RP and RH. To further isolate this effect, Bi-LSTM trained on SNLI hypotheses only achieves 69% accuracy on SNLI test set, which drops to 44% if it is retrained on combination of original, RP and RH data (Table 8). Note that this combined dataset consists of five variants of each original premisehypothesis pair. Of these five pairs, three consist of the same hypothesis sentence, each associated with different truth value given the respective premise. Using these hypotheses only would provide conflicting feedback to a classifier during training, thus causing the drop in performance. Further, we notice that the gain of the latter over majority class baseline comes primarily from the original data, as the same model retrained only on RP and RH data experiences a further drop of 11.6% in accuracy, performing worse than just choosing the majority class at all times. One reasonable concern might be that our models would simply distinguish whether an example were from the original or revised dataset and thereafter treat them differently. The fear might be that our models would exhibit a hypersensitivity (rather than insensitivity) to domain. To test the potential for this behavior, we train several models to distinguish between original and revised data (Table 9). BERT identifies original reviews from revised reviews with 77.3% accuracy. In case of NLI, BERT and Naïve Bayes perform roughly within 3 pts of the majority class baseline (66.7%) whereas SVM performs substantially worse. By leveraging humans not only to provide labels but also to intervene upon the data, revising documents to alter the applicability of various labels, we are able to derive insights about the underlying semantic concepts. Moreover we can leverage the augmented data to train classifiers less dependent on spurious associations. Our study demonstrates the promise of leveraging human-in-the-loop feedback to disentangle the spurious and non-spurious associations, yielding classifiers that hold up better when spurious associations do not transport out of sample. Our methods appear useful on both sentiment analysis and NLI, two contrasting tasks. In sentiment analysis, expressions of opinion matter more than stated facts, while in NLI this is reversed. SNLI poses another challenge in that it is a 3-class classification task using two input sentences. In future work, we plan to extend these techniques, finding ways to leverage humans in the loop to build more robust systems for question answering and summarization (among others).
Humans in the loop revise documents to accord with counterfactual labels; the resulting resource helps to reduce reliance on spurious associations.
1,797
scitldr
Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive way to explain a model. In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack. By applying this methodology to various prediction tasks across multiple domains, we observed the derived explanations are indeed capturing the significant feature set qualitatively and quantitatively. With the significant progress of recent machine learning research, various machine learning models have been being rapidly adopted to countless real-world applications. This rapid adaptation increasingly questions the machine learning model's credibility, fairness, and more generally interpretability. In the line of this research, researchers have explored various notions of model interpretability. Some researchers directly answer the trustability or the fairness of a model , while some other researchers seek to actually improve the model's performance by understanding the model's weak points . Even though the goal of such various model interpretability tasks varies, vast majority of them are built upon extracting relevant features for a prediction, so called feature-based explanation. Feature-based explanation is commonly based on measuring the fidelity of the explanation to the model, which is essentially how close the sum of attribution scores for a set of features approximates the function value difference before and after removing the set of features. Depending on their design, the fidelity-based attribution evaluation varies: completeness , sensitivity-n , infidelity , and causal local explanation metric . The idea of smallest sufficient region (SSR) and smallest destroying region (SDR) is worth noting because it considers the ranking of the feature attribution scores, not the actual score itself. Intuitively, for a faithful attribution score, removing the most salient features would naturally lead to a large difference in prediction score. Therefore, SDR-based evaluations measure how much the function value changes when the most high-valued salient features are removed. Although the aforementioned attribution evaluations made success in many cases, setting features with an arbitrary reference values to zero-out the input is limited, in the sense that it only considers the prediction at the reference value while ignoring the rest of the input space. Furthermore, the choice of reference value inherently introduces bias. For example, if we set the feature value to 0 in rgb images, this introduces a bias in the attribution map that favors the bright pixels. As a , explanations that optimize upon such evaluations often omit important dark objects and the pertinent negative features in the image, which is the part of the image that does not contain object but is crucial to the prediction . An alternative way to remove pixels is to use sampling from some predefined distribution or a generative model , which nevertheless could still introduce some bias with respect to the defined distribution. 
Moreover, they require a generative model that approximates the data distribution, which may not be available in certain domains. In this paper, we remove such inherit bias by taking a different perspective on the input perturbation. We start from an intuition that if a set of features are important to make a specific prediction, keeping them in the same values would preserve the prediction even though other irrelevant features are modified. In other words, the model would be more sensitive on the changes of those important or relevant features than the ones that are not. Unlike the foremost approaches including SDR and SSR that perturbs features to a specific reference point, we consider the minimum norm of perturbation to arbitrary directions, not just to a reference point, that can change model's prediction, also known as "minimum adversarial perturbation" in the literature (; b). Based on this idea, we define new evaluation criteria to test the importance of a set of features. By computing the minimum adversarial perturbation on the complementary set of features that can alter the model's decision, we could test the degree of importance of the set. Although explicitly computing the importance value is NP-hard , and showed that the perturbations computed by adversarial attacks can serve as reasonably tight upper bounds, which lead to an efficient approximation for the proposed evaluation. Furthermore, we can derive a new explanation framework by formulating the model explanation to a two-player min-max game between explanator and adversarial attacker. The explanator aims to find a set of important features to maximize the minimum perturbation computed by the attacker. This framework empirically performs much better than previous approaches quantitatively, with very inspiring examples. To summarize our contributions: • We define new evaluation criteria for feature-based explanations based on robustness analysis. The evaluation criteria consider the worst case perturbations when a set of features are anchored, which does not introduce bias into the evaluation. • We design efficient algorithms to generate explanations that maximize the proposed criteria, which perform favorably against baseline methods on the proposed evaluation criteria. • Experiments in computer vision and NLP models demonstrate that the proposed explanation can indeed identify some important features that are not captured by previous methods. Furthermore, our method is able to extract a set of features that contrasts the current prediction to a target class. 2.1 PROBLEM NOTATION Let us consider the following setting: a general K-way classification problem with input space X ⊆ R d, output space Y = {1, . . ., K}, and a predictor function f: X → Y where f (x) denotes the output class for some input example x = [x 1, . . ., x d] ∈ X. Then, for a particular prediction f (x) = y, despite the different forms of existing feature-based explanations ranging from attributing an importance value to each feature, ranking the features by their importance, to simply identify a set of important features, a common goal of them is to extract a compact set of relevant features with respect to the prediction. We note that however, given an explanation that identifies a set of said to be relevant features, how can we evaluate the quality of such explanation, or in other words, justify whether the distinguished features are truly relevant to the prediction? 
While one generally has no ground truth about the underlying true relevance of the features, recent studies take an axiomatic approach to define what properties the relevant features should hold and evaluate the explanations through verifying if the identified relevant features satisfy the properties. One such properties that is widely adopted in the literature is to assume that the importance of a set of features corresponds to the degree of change in prediction when the features are removed from the original input. Nevertheless, as we discussed in the previous section, the practice of approximating removal of features by setting their value to some reference point poses the risk of introducing bias in the evaluation. As a , to escape from the caveat, we follow a similar concept but propose two new criteria to evaluate the importance of features based on the following assumptions. Assumption 1 When the values of the most salient features are anchored (fixed), perturbation on the complementary set of features has weaker influence on the model's prediction. In other words, the model could tolerate a larger degree of perturbation on the less important and non-anchored features. Assumption 2 If perturbation is allowed on a set of important features, a small perturbation could easily change the model prediction even when we fix the values for the rest of the features. Based on these two assumptions, we propose a new framework for evaluating explanations. The evaluation is based on the adversarial robustness when a set of features are fixed, which is formally defined below. Definition 2.1 The minimum adversarial perturbation norm on a set of features S, which we will also name as Robustness-S, can be defined as: where S = U \ S is the complementary set of features, and δ S = 0 means that the perturbation value on features in S is constraint to be 0. Assume that we are given an explanation that partitions the input features into a relevant set S r and an irrelevant set S r. Assumption 1 implies that the quality of the relevant set can be measured by * Sr -the robustness of irrelevant set when the relevant set is anchored. Specifically, Assumption 1 infers that a higher robustness on S r follows from a larger coverage of pertinent features in set S r; and thus an explanation is considered better if it leads to a higher robustness against perturbation in S r. On the other hand, based on Assumption 2, an explanation that has included important salient features in S r should lead to a smaller robustness level on * Sr. Therefore, Assumption 1 and 2 build up our proposed evaluation criteria Robustness-S r and Robustness-S r respectively, as listed below. Robustness-S r measures the minimum adversarial distortion Sr when the set of important features S r, typically represented by the high-weight features in an attribution map, are anchored and perturbation is only allowed in low-weight regions. The higher the score the better the explanation. To measure Robustness-S r, we would need to first determine the size of |S r |. We can set |S r | to the amount of anchors that an user is interested in or we may vary the size of |S r | and evaluate the corresponding Robustness-S r at different points. By varying the size of |S r |, we could plot an evaluation curve for each explanation and in turn measure the area under curve (AUC), which corresponds to the average Robustness-S r at different sizes of relevant set. 
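In code, this evaluation procedure amounts to sweeping the size of the anchored set over a feature ranking and averaging the estimated robustness; the sketch below is our paraphrase of the protocol (the robustness estimator `estimate_eps` is discussed next), not released evaluation code.

```python
import numpy as np

def robustness_curve(x, ranking, estimate_eps, sizes):
    """Robustness-S_r evaluated at several relevant-set sizes |S_r|.

    ranking      : feature indices sorted from most to least important
    estimate_eps : callable (x, anchored_indices) -> estimated minimum
                   perturbation norm over the non-anchored features
    """
    return np.array([estimate_eps(x, ranking[:k]) for k in sizes])

def curve_auc(curve, sizes):
    """Area under the evaluation curve, i.e. average robustness across set sizes."""
    sizes = np.asarray(sizes, dtype=float)
    return np.trapz(curve, x=sizes) / (sizes[-1] - sizes[0])
```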
Robustness-S r measures the minimum distortion distance Sr when the set of important features S r are the only region that is perturbable, and the rest of feature values are anchored. Contrary to Robustness-S r, lower scores on this metric indicate better explanation. We similarly define AUC of Robustness-S r as the average of Robustness-S r when we vary the size of |S r |. Evaluation Procedure Note that both Robustness-S r and Robustness-S r are sensitive to the cardinality of S r. For instance, including all features in S r will make = 0. Therefore, we will only use these criteria to evaluate relevant sets with the same cardinality. For example, to evaluate a feature attribution method that assigns a weight with each feature, we can sort the features by the decending order of weights and then for each set of top-K features with K = 1, 2,..., d, we evaluate Robustness-S r (S r) and plot a curve. A larger (smaller) area under curve indicates a better feature attribution ranking. (See examples in Figure 1). Untargeted v.s. Targeted Explanation Definition 2.1 corresponds to the untargeted adversarial robustness -a perturbation that changes the predicted class to any label except y is considered as a successful attack. Instead of doing this, our formulation can also extend to targeted adversarial robustness, where we replace by * where t is the targeted class. Using this definition, our approach will try to address the question " Why is this example classified as y instead of t", and the important features that optimize this criterion will highlight the contrast between class y and t. We will give several interesting in the experiment section. Comparing to existing measurement The proposed criteria at the first glance look similar to SSR-and SDR-based measurements. We note that, however, the key differences between our proposed criteria and SSR-(SDR-) based criteria are in two-folds: 1) Conceptually, to measure whether a set of features is important, instead of concerning the prediction change before and after removing the features, we consider whether perturbation on the feature values would significantly alter the prediction. 2) Practically, our proposed criteria allow us to eschew the difficulty of modeling feature removal as discussed in section 1. In fact, as most implementations of removal-based criteria set the values of the features of interest to some fixed reference point, our criteria could be viewed as generalized versions where we consider all possible reference points by allowing perturbations in any directions. As a , the proposed criteria enjoys a broader view of prediction behavior around the input, and in turn could capture a broader range of important features like the pertinent negative features in , as we shall show in the experiment section. Robustness Evaluation under Fixed Anchor Set It is known that computing the exact minimum distortion distance in modern neural networks is intractable , so many different methods have been developed to estimate the value. Adversarial attacks, such as C&W and PGD attack , aim to find a feasible solution of, which leads to an upper bound of * S. They are based on gradient based optimizers which are usually efficient. On the other hand, neural network verification methods aim to provide a lower bound of * S to ensure that the model prediction will not change within certain perturbation range; a;. However, these methods are usually time consuming (often > 50 times slower than a backpropagation). 
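Concretely, an upper-bound estimate of the minimum anchored perturbation can be obtained with a PGD-style attack in which the perturbation is zeroed out on the anchored coordinates and the attack radius is gradually increased until the prediction flips. The PyTorch sketch below illustrates this idea; the step size, number of steps, and radius schedule are illustrative choices, not the exact configuration used in the experiments.

```python
import torch
import torch.nn.functional as F

def pgd_flips(model, x, y, anchored, eps, steps=50, lr=0.1):
    """Search for an L2 perturbation of norm <= eps, zero on the anchored set S,
    that changes the model's prediction (Definition 2.1)."""
    mask = torch.ones_like(x)
    mask.view(-1)[list(anchored)] = 0.0                   # no perturbation on S
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model((x + delta * mask).unsqueeze(0))
        loss = F.cross_entropy(logits, y.view(1))
        loss.backward()
        with torch.no_grad():
            delta += lr * eps * delta.grad / (delta.grad.norm() + 1e-12)
            delta *= mask
            if delta.norm() > eps:                        # project back onto the L2 ball
                delta *= eps / delta.norm()
        delta.grad.zero_()
    adv = model((x + delta.detach() * mask).unsqueeze(0))
    return adv.argmax(1).item() != y.item()

def robustness_S(model, x, y, anchored, radii):
    """Smallest radius in `radii` at which the masked PGD attack succeeds;
    this upper-bounds the true minimum distortion over the non-anchored features."""
    for eps in sorted(radii):
        if pgd_flips(model, x, y, anchored, eps):
            return eps
    return float("inf")
```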
The proposed framework can be combined with any method that aims to approximately compute, including attack, verification, and some other statistical estimations. However, for simplicity we only choose to evaluate by the state-of-the-art projected gradient descent (PGD) attack , since the verification methods are too slow and often lead to much looser estimation as reported in some recent studies . Given the new evaluation criteria, a natural follow-up question is how to design explanations that optimize the measurements. Recall that under the proposed criteria the goal of an optimal explanation is to maximize (minimize) robustness-S r (robustness-S r) under the cardinality constraint on S r. Searching for such explanations thus leads to the following optimization problems, for Robustness-S r and for Robustness-S r: where g(x, S) computes the value in Eq., the minimum distortion distance when the features in setS r is not allowed to be perturbed, and K is a pre-defined size constraint on the set S r. Directly solving and is challenging since g is an implicit function computed by solving approximately, and furthermore, the discrete input constraint makes it intractible to find the optimal solution. As a , we propose a greedy-styled algorithm, where we iteratively add the most promising feature into S r that optimizes the objective at each local step until S r reaches the size constraint. In other words, we initialize the set S r as empty, and sequentially solve the following subproblem at every step t: where S t r is the anchor set at step t, and S 0 r = ∅. We repeat this subprocedure until the size of set S t r reaches K. We name this method as Greedy. A straightforward way for solving is to exhaustively search over every single feature. However, considering a single feature at a time ignores the correlation between features, which tends to introduce noise (see our experimental Table 1 : Area under curve of the proposed criteria for various explanations on MNIST. The higher the better for Robustness-S r ; the lower the better for Robustness-S r . Robustness measured with. we consider multiple features at a single step, searching over all possible combinations will become intractable. For example, considering all possible combinations of two features requires O(d 2) evaluations of function g at every step. To consider the joint influence between features efficiently, we propose a smoothed regression version of solving in the following subsection. As considering the correlation between features by searching over all possible subsets of S t r at every step t is computationally infeasible, we instead propose to approximate the function g by learning a mapping from the binary space of {0, 1} d, where ones indicate the inclusion of corresponding feature indices and zeros otherwise, to their ing function value g(x, {0, 1} d ). Specifically, we can sample a subset Q ⊆ {0, 1} d and then consider the following linear regression: After the regression is learned, we can treat the coefficients w that correspond to each feature as their approximated effect on the function value of g when they are included into the set S r. By learning such regression where we sample from the possible subsets, we are able to capture the joint relationships between features, and as well smooth out possible noises. 
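For reference, the single-feature (exhaustive) variant of this greedy step can be written as below, with g the anchored-robustness estimate described above; the regression-based alternative introduced next replaces the per-feature search so that feature interactions are taken into account. (This is an illustrative sketch, not the released implementation.)

```python
def greedy_relevant_set(g, x, d, K, maximize=True):
    """Greedily grow the relevant set S_r, one feature per step.

    g : callable (x, S) -> estimated anchored robustness with the set S fixed
    For the criterion that anchors S_r we maximize g; for the complementary
    criterion (perturbation allowed only on S_r), run the loop with maximize=False.
    """
    S, remaining = [], set(range(d))
    for _ in range(K):
        score = lambda j: g(x, S + [j])
        best = max(remaining, key=score) if maximize else min(remaining, key=score)
        S.append(best)
        remaining.remove(best)
    return S
```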
In fact, the greedy approach can be viewed as a special case of Reg-Greedy where the sampled subset Q, in, in each iterative step contains exactly the one-hot encoded vectors with the "on" indices correspond to the remaining feature indices. That is, each one-hot vector indicates the inclusion of a corresponding single feature into the relevant set. In this case, the coefficients of the learned linear regression would be equivalent to the difference in objective value before and after the corresponding feature is included into the relevant set. To take into account feature interactions, Reg-Greedy samples from the whole distribution of {0, 1} d where most of the sampled vectors in Q contains multiple "on" indices. In this way, the learned regression captures feature correlations on the objective value and could smooth out possible noises encountered by greedy. There has been a great line of research on studying the interaction between features including the well-known Shapley value which tackles the problem through cooperative game theory perspective. proposed a way to use regression with a special kernel to approximate the Shapley value. However, sampling from the whole distribution of {0, 1} d could still incur exponential complexity, and using only a reasonable amount of samples might not be able to precisely capture the behavior of the highly nonlinear objective function g. Therefore, we propose the Regression Greedy (Reg-Greedy) approach, where we still run greedy steps to incrementally add indices to S t r, but at each iteration we run this regression and use the weights to decide which index to be added to S t r. Note that at each step the samples Q must be in a restricted domain, where indices that are already chosen in S t r should be 1 and we sample 0/1 only for the rest of the indices. We distinguish Reg-Greedy from onestep regression (One-Step Reg) which directly determines the importance of each feature by merely solving once. By combining regression in a greedy procedure, we are able to gradually narrow down our sampling space (by sampling only from a restricted domain), focusing on the feature interactions between remaining features and the ones that are already added into the relevant set. This enables us to find from the remaining features that have the greatest interaction with the current relevant set, and could in turn maximally optimize the objective value when added into the relevant set. In practice, a sample complexity of O(d) for learning the regression could generally work well. We present both qualitative and quantitative comparisons in the experiments. For · p in and, we consider p = 2, i.e., the ℓ 2 norm for all experiments. In quantitative , including evaluation curves and the corresponding AUC, we report the average over 50 random examples. For Figure 1: Different Robustness-S r (left) with varying |S r | and Robustness-S r (right) with varying |S r |. For Robustness-S r (left), the higher the better; for Robustness-S r (right), the lower the better. We omit points in the plot with value too high to fit in the scale of y-axis. Table 4: Rank correlation between explanations with respect to original and randomized model. the proposed algorithms, we consider Reg-Greedy (Sec 3.2), One-Step Reg (Sec 3.2) and Greedy (Sec 3.1). 
For other baselines we include vanilla gradient (Grad) and integrated gradient (IG) from gradient-based approaches; leave-one-out (LOO), or occlusion-1, and SHAP from perturbation-based approaches , and black-box meaningful perturbation (BBMP) from SSR/SDR-based approaches. We perform our experiments on two image datasets, MNIST and ImageNet, as well as a text dataset YahooAnswers. The proposed measurements: Robustness-S r and Robustness-S r. We compare different explanations under the two proposed criteria, robustness-S r and robustness-S r, and plot their evaluation curves respectively. For ease of comparison, we calculate the area under curve (AUC) for each corresponding evaluation. We list the in Table 3, and leave the plots in appendix A. As shown in Table 3, under both criteria, comparing to regression-based methods, the pure greedy method usually suffers degraded performances that could be due to the ignorance of feature correlations, which ultimately in the introduction of noise as shown in Figure 2. Furthermore from the table, we observe that the proposed regression-greedy method consistently outperforms others on both criteria. On one hand, this suggests that the proposed algorithm indeed successfully optimizes towards the criteria; on the other hand, this might indicate the proposed criteria do capture different characteristics of explanations which most of the current explanations do not possess. Another somewhat interesting finding from the table is that while vanilla gradient has generally been viewed as a baseline method, it nonetheless performs competitively on the proposed criteria. To investigate deeper into such observation, we shall visualize the explanations in the following subsection. For simplicity we will just apply Reg-Greedy with Robustness-S r criterion in the qualitative comparisons with previous methods. Existing commonly adopted measurements: Insertion and Deletion. Indeed, it might not be surprising that Reg-Greedy achieves the best performances on the proposed criteria it is explicitly designed to optimize. To more objectively showcase the usefulness of the proposed explanation, we compare Reg-Greedy with other explanations on existing commonly used quantitative measurements. Particularly, we adopt the Deletion and Insertion criteria proposed by , which are generalized variants of the region perturbation criterion presented in. The Deletion criterion measures the probability drop in the predicted class as top-relevant features, indicated by the given explanation, are progressively removed from the input. On the other hand, the Insertion criterion measures the increase in probability of the predicted class as top-relevant features are gradually revealed from the input whose features are originally all masked. Similar to our proposed criteria, a quick drop (and thus a small area under curve) or a sharp increase (that leads to a large area under curve) in Deletion and Insertion respectively suggest a good explanation as the selected top-important features could indeed greatly influence the prediction. In the experiments, we follow to remove features by setting their values to randomly sampled values. We plot the evaluation curves and report corresponding AUCs in Figure 12 and Table 3. On these additional two criteria, we observe that our proposed method consistently performs favorably against other explanations. Sanity Check. 
As pointed out in recent literature that an appropriate explanation should at least be loosely related the model being explained , to ensure that our proposed explanation does indeed reflect the model behavior, we conduct the sanity check proposed by (Adebayo et al., NeurIPS'18) to check if our explanations are adequately different when the model parameters are randomly re-initialized. In the experiment, we randomly re-initialize the last fully-connected layer of the neural network model. We then compute the rank correlation between explanation computed w.r.t. the original model and that w.r.t. the randomized model. From Table 4, we observe that Reg-Greedy has a much lower rank correlation comparing to Grad, IG, and LOO, suggesting that Reg-Greedy is indeed sensitive to model parameter change and is able to pass the sanity check. Visualized Explanations on MNIST. Figure 3 illustrates the top features identified by various explanation methods. From this figure, we observe that Gradient, IG, SHAP mainly highlights the white pixels in the digit, while gradient and IG are more noisy compared to SHAP. In the contrary, Reg-Greedy focuses on both the "crucial positive" of the digits "pertinent negative" of regions around the digit. For example, in the first row, a 7 might have been predicted as a 4 or 0 if the pixels highlighted by Reg-Greedy are set to 1. Similarly, a 1 may be turned to a 4 or a 7 given additional white pixels to its left, and a 9 may become a 7 if deleted the lower circular part of its head. As a , Reg-Greedy focuses on "the region in which perturbing will lead to easier prediction change", which includes both the crucial positive pixels and pertinent negative pixels, and provides additional insights that are not captured by the baseline explanations. The superiority of Reg-Greedy is also validated by the better performance on the Robustness-S r score. Targeted Explanation. Recall that in section 2.2, we discussed about the possibility of defining the robustness measurement by considering a targeted distortion distance as formulated in. Here, we provide examples, as shown in Figure 4, where we answer the question of "why the input digit is an A but not a B" by defining a targeted perturbation distance towards class B as our robustness measurement. In each row of the figure, we provide targeted explanation towards two different target classes for a same input image. Interestingly, as the target classes changes, the generated explanation varies in an interpretatble way. For example, in the first row, we explain why the input digit 7 is not classified as a 9 (middle column) or a 2 (rightmost column). The ing explanation against 9 highlights the upper-left part of the 7. Semantically, this region is indeed pertinent to the classification between 7 and 9, since turning on the highlighted pixel values in the region (currently black in the original image) will then make the 7 resemble a 9. However, the targeted explanation against 2 highlights a very different but also meaningful region, which is the lower-right part of the 7; since adding a horizontal stroke on the area would turn a 7 into a 2. This finding demonstrates a special characteristic of our explanation which cannot be easily found in most of the existing methods. 
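In terms of the attack sketched earlier, the targeted explanation only changes the objective maximized by PGD and the success test: instead of pushing the prediction away from y, the perturbation is pushed toward the chosen target class t. A minimal helper (our assumption, not the authors' code):

```python
import torch.nn.functional as F

def attack_objective(logits, y, target=None):
    """Loss maximized inside the PGD loop: untargeted when target is None,
    otherwise targeted toward class `target`; success is then tested as
    argmax == target instead of argmax != y."""
    if target is None:
        return F.cross_entropy(logits, y)        # move away from the predicted class
    return -F.cross_entropy(logits, target)      # move toward the target class
```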
While the capability of capturing not only the crucial positive but also the pertinent negative features have also been observed in some recently proposed explanations such as Layer-wise Relevance Propagation (LRP) , as reported in , as well as the explanation technique proposed in. Both of the above mentioned methods are not explicitly designed to handle the targeted explanation task which attempt to answer the question "what are the important features that lead to the prediction of class A but not class B", and thus has different limitations. For example, the ability of LRP to capture pertinent negative features in fact heavily depends on the input range. where inputs are normalized to have zero mean and a standard deviation of one, the black will have non-zero value, and LRP would have non-zero attributions on the black pixels which allows the explanation to capture pertinent negative features. However, as later shown in , if the input pixel intensity is normalized into the range between 0 and 1 (where pixels have the values of 0), LRP failed to highlight the pertinent negative pixels, as would always have zero attribution (since LRP is equivalent to multiplication between Grad and input in a Rectified Linear Unit (ReLU) network as shown in ). , unlike our targeted explanation where we know exactly which targeted class the explanation is suggesting against (and by varying the targeted class we observe varying corresponding explanation given), their method by design does not convey such information. The pertinent negative features highlighted by their method by construction is not directly related to a specific target class, and users in fact need to infer what target class the pertinent negative features are preventing against. To further grasp the difference, we compare our explanation with theirs in Figure 5 (we borrow the from for visualization of their method). Qualitatively, we also observe that our method seems to be giving the most natural explanations. For example, in the first row of left image where the highlighted features are against the class 0, in addition to the left vertical gap (which when presence would make 2 looks like a 0) that is roughly highlighted by all three methods, our method is the Figure 6: Visualization of different explanations on ImageNet, where the predicted class for each input is "fish", "bird", "dog", and "sea lion". only one that highlights the right tail part (green circled) of the digit 2 which might also serve as crucial evidence of 2 against 0. Furthermore, as we change the targeted class to 7 (the second row), while LRP seems to be providing similar explanations, we observe that our explanation has a drastic change and highlights the green circled part which when turned off will make 2 becomes a 7. These might suggest our method is more capable of handling such targeted explanation task. Visualized Explanations on ImageNet. On ImageNet, we as well compare different explanations quantitatively on both of the proposed criteria. We plot the evaluation curves (in appendix A), and compute the corresponding AUC, as listed in Table 2. In general, we observe similar trends as the experiments shown in MNIST. In particular, Reg-Greedy enjoys an overall superior performances than existing explanations on the criteria. In addition, several visualization in Figure 6 also qualitatively demonstrate that our method provides more compact explanations that focuses more on the actual object being classified. Text Classification. 
We demonstrate how our explanation method could be applied to text classification models. Note that a length-n sentence is usually represented by n embedding vectors, and thus when applying our Greedy algorithm, at each iteration we will try to add each embedding vector to the set S r and choose the one with largest reward. Since there are only at most n choices, the Greedy algorithm doesn't suffer much from noise and has similar behavior with Reg-Greedy. We perform experiments on an LSTM network which learns to classify a given sentence into one of the ten classes (Society, Science, Health, . . .). We showcase an example with explanations generated with different methods in Figure 7. We note that although the top-5 relevant keyword sets generated by the three methods do not vary much, the rankings within the highlighted keywords for each explanation are in fact different. We observe that our method Greedy tends to generate explanation that matches human intuition the most. Particularly, to predict the label of "sport", one might consider "cleats", "football", and "cut" as the strongest indications towards the concept "sport". Our work proposes an objective measurement of feature-based explanation by measuring the "minimum adversarial perturbation" in adversarial literature, which is estimated by adversarial attack. We provide a necessarily incomplete review on related works in objective measurement of explanations and adversarial robustness. Objective Measurements for Explanations Evaluation of explanations has been a difficult problem mainly due to the absence of ground truth . Although one could rely on human intuitions to assess the quality of the generated explanations , for example, judging whether the explanation focuses on the object of interest in an image classification task, these evaluations subject to human perceptions are prone to fall into the pitfall of favoring user-friendly explanations, such as attributions that visually aligns better with the input image, which might not reflect the model behavior . As a , in addition to subjective measurements, recent literature has also proposed objective measurements, which is also called functionally-grounded evaluations . We roughly categorize existing objective measurements into two families. This first family of explanation evaluation is called fidelity-based measurement. This includes that Completeness or Sum to Delta which requires the sum of attributions to equal the prediction difference of the original input and baseline ; sensitivity-n which further generalizes completeness to any subset of the feature ; local accuracy;; and infidelity which is a framework that encompasses several . The general philosophy for this line of methods is to require the sum of attribution value faithfully reflect the change in prediction function value given the presence or absence of certain subset of features. The second family of explanation evaluation are removal-based and preservation-based measurements, which focus on identifying the most important set of features with respect to a particular prediction. The underlying assumption made is that by removing the most (least) salient feature, the ing function value should drop (increase) the most. proposed this idea as an evaluation to evaluate the ranking of featureattribution score. Later on, derive explanations by solving an optimization problem to optimize the evaluation. proposed to learn the explanation generating process by training an auxiliary model. 
We note the implicitly in the evaluation process of both fidelity and removal (preservation) based measurement involves computing the change in function value given some set of features being absent. However, it is difficult to carefully model the concept of feature absence in practice, as most models by construction are not able to handle inputs with real missing features. As a , previous work has compromised by using approximation to estimate the effect of removing certain features. This includes setting the values of the features to be removed by zero or the mean value , blurred value , random value , or more advanced generative model that attempts to model the given data distribution . Unfortunately, such approximations that represent feature absence by setting the their values to some predefined distribution would inevitably introduce bias into the evaluation process. With the presence of this inherent caveat, we are thus inspired to adopt another angle to tackle the explanation problem. Adversarial Robustness Adversarial robustness has been extensively studied in the past few years. The adversarial robustness of a machine learning model on a given sample can be defined as the shortest distance from the sample to the decision boundary, which corresponds to our definition in. Algorithms have been proposed for finding adversarial examples (feasible solutions of), including (; ;). However, those algorithms only work for neural networks, while for other models such as tree based models or nearest neighbor classifiers, adversarial examples can be found by decision based attacks (; ;). Therefore the proposed framework can also be used in other decision based classifiers. On the other hand, several works aim to solve the neural network verification problem, which is equivalent to finding a lower bound of. Examples include;. In principal, our work can also apply these verification methods for getting an approximate solution of, but in practice they are very slow to run and often gives loose lower bounds on regular trained networks. Our work is also closely related to related works that consider the question "For situation A, why was the outcome B and not C", which we call counterfactual explanations. add group sparsity regularization to adversarial attack to enforce semantic structure for the perturbation, which is more interpretable. find a set of features that once fixed, probability of the prediction is high when perturbing other features. show how one could change the input feature such that the system would output a different class, where the change is limited to replacing a part of input feature by a part of an distractor image. consider the pertinent negative in a binary setting by solving a carefully designed loss function. In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation. Furthermore, we develop a new explanation method to find important set of features to optimize this new criterion. Experimental demonstrate that the proposed new explanations are indeed capturing significant feature sets across multiple domains. Figure 8: Comparisons between our proposed methods under different criteria. From left to right: untargeted Robustness-S r, targeted Robustness-S r, untargeted Robustness-S r, targeted Robustness-S r. We omit points in the plot with value too high to fit in the scale of y-axis.
We propose a new objective measurement for evaluating explanations based on the notion of adversarial robustness. The evaluation criteria further allow us to derive new explanations which capture pertinent features qualitatively and quantitatively.
1,798
scitldr
Generative adversarial networks (GANs) have been shown to provide an effective way to model complex distributions and have obtained impressive on various challenging tasks. However, typical GANs require fully-observed data during training. In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data. The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution. We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer. We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption. Generative adversarial networks (GANs) BID0 provide a powerful modeling framework for learning complex high-dimensional distributions. Unlike likelihood-based methods, GANs are referred to as implicit probabilistic models BID8. They represent a probability distribution through a generator that learns to directly produce samples from the desired distribution. The generator is trained adversarially by optimizing a minimax objective together with a discriminator. In practice, GANs have been shown to be very successful in a range of applications including generating photorealistic images BID3. Other than generating samples, many downstream tasks require a good generative model, such as image inpainting BID9 BID15.Training GANs normally requires access to a large collection of fully-observed data. However, it is not always possible to obtain a large amount of fully-observed data. Missing data is well-known to be prevalent in many real-world application domains where different data cases might have different missing entries. This arbitrary missingness poses a significant challenge to many existing machine learning models. Following BID6, the generative process for incompletely observed data can be described as shown below where x ∈ R n is a complete data vector and m ∈ {0, 1} n is a binary mask 2 that determines which entries in x to reveal: DISPLAYFORM0 Let x obs denote the observed elements of x, and x mis denote the missing elements according to the mask m. In addition, let θ denote the unknown parameters of the data distribution, and φ denote the unknown parameters for the mask distribution, which are usually assumed to be independent of θ. In the standard maximum likelihood setting, the unknown parameters are estimated by maximizing the 1 Our implementation is available at https://github.com/steveli/misgan 2 The complementm is usually referred to as the missing data indicator in the literature.following marginal likelihood, integrating over the unknown missing data values:p(x obs, m) = p θ (x obs, x mis)p φ (m|x obs, x mis)dx mis. characterize the missing data mechanism p φ (m|x obs, x mis) in terms of independence relations between the complete data x = [x obs, x mis] and the masks m:• Missing completely at random (MCAR): p φ (m|x) = p φ (m),• Missing at random (MAR): p φ (m|x) = p φ (m|x obs),• Not missing at random (NMAR): m depends on x mis and possibly also x obs.Most work on incomplete data assumes MCAR or MAR since under these assumptions p(x obs, m) can be factorized into p θ (x obs)p φ (m|x obs). With such decoupling, the missing data mechanism can be ignored when learning the data generating model while yielding correct estimates for θ. 
When p θ (x) does not admit efficient marginalization over x mis, estimation of θ is usually performed by maximizing a variational lower bound, as shown below, using the EM algorithm or a more general approach (BID6): log p θ (x obs) ≥ E q(x mis |x obs) [log p θ (x obs, x mis) − log q(x mis |x obs)]. The primary contribution of this paper is the development of a GAN-based framework for learning high-dimensional data distributions in the presence of incomplete observations. Our framework introduces an auxiliary GAN for learning a mask distribution to model the missingness. The masks are used to "mask" generated complete data by filling the indicated missing entries with a constant value. The complete data generator is trained so that the resulting masked data are indistinguishable from real incomplete data that are masked similarly. Our framework builds on the ideas of AmbientGAN. AmbientGAN modifies the discriminator of a GAN to distinguish corrupted real samples from corrupted generated samples under a range of corruption processes (or measurement processes). For images, examples of the measurement processes include random dropout, blur, block-patch, and so on. Missing data can be seen as a special type of corruption, except that we have access to the missing pattern in addition to the corrupted measurements. Moreover, AmbientGAN assumes the measurement process is known or parameterized only by a few parameters, which is not the case in general missing data problems. We provide empirical evidence that the proposed framework is able to effectively learn complex, high-dimensional data distributions from highly incomplete data when the GAN generator incorporates suitable priors on the data generating process. We further show how the architecture can be used to generate high-quality imputations. In the missing data problem, we know exactly which entries in each data example are missing. Therefore, we can represent an incomplete data case as a pair of a partially-observed data vector x ∈ R n and a corresponding mask m ∈ {0, 1} n that indicates which entries in x are observed: x d is observed if m d = 1; otherwise x d is missing and might contain an arbitrary value that we should ignore. With this representation, an incomplete dataset is denoted D = {(x i, m i)} i=1,...,N (we assume instances are i.i.d. samples). We choose this representation instead of x obs because it leads to a cleaner description of the proposed MisGAN framework. It also suggests how MisGAN can be implemented efficiently in practice as both x and m are fixed-length vectors. We begin by defining a masking operator f τ that fills in missing entries with a constant value τ: f τ (x, m) = x ⊙ m + τ m̄, where m̄ denotes the complement of m and ⊙ denotes element-wise multiplication. Two key ideas underlie the MisGAN framework. First, in addition to the complete data generator, we explicitly model the missing data process using a mask generator. Since the masks in the incomplete dataset are fully observed, we can estimate their distribution using a standard GAN. Second, we train the complete data generator adversarially by masking its outputs using generated masks and f τ, and comparing to real incomplete data that are similarly masked by f τ. Specifically, we use two generator-discriminator pairs (G m, D m) and (G x, D x) for the masks and data respectively. In this paper, we focus on the missing completely at random (MCAR) case, where the two generators are independent of each other and have their own noise distributions p z and p ε.
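A literal implementation of the masking operator f τ is a one-liner; the sketch below (our illustration, in array-style Python) also indicates how real incomplete batches and generated batches are masked identically before reaching the data discriminator.

```python
def mask_data(x, m, tau=0.0):
    """f_tau(x, m): keep observed entries (m = 1) and fill missing ones with tau."""
    return x * m + tau * (1 - m)

# Both real incomplete data and generated complete data are masked the same way,
# so the data discriminator D_x only ever sees "complete-looking" inputs:
#   real_input = mask_data(x_real, m_real, tau)
#   fake_input = mask_data(G_x(eps), G_m(z), tau)
```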
We define the following two loss functions, one for the masks and the other for the data:

L_m(D_m, G_m) = E_{(x,m)∼D}[D_m(m)] − E_{ε∼p_ε}[D_m(G_m(ε))],

L_x(D_x, G_x, G_m) = E_{(x,m)∼D}[D_x(f_τ(x, m))] − E_{z∼p_z, ε∼p_ε}[D_x(f_τ(G_x(z), G_m(ε)))].

The losses above follow the Wasserstein GAN formulation BID1, although the proposed framework is compatible with many GAN variations BID0 BID1. We optimize the generators and the discriminators according to the following objectives:

min_{G_x} max_{D_x ∈ F_x} L_x(D_x, G_x, G_m),

min_{G_m} max_{D_m ∈ F_m} L_m(D_m, G_m) + α L_x(D_x, G_x, G_m),

where F_x, F_m are defined such that D_x, D_m are both 1-Lipschitz for Wasserstein GANs BID1. Practically, we follow the common practice of alternating between a few steps of optimizing the discriminators and one step of optimizing the generators BID0 BID1. The coefficient α is introduced when optimizing the mask generator G_m with the aim of minimizing a combination of L_m and L_x. Although in theory we could choose α = 0 to train G_m and D_m without using the data, we find that choosing a small value such as α = 0.2 improves performance. This encourages the generated masks to match the distribution of the real masks and the masked generated complete samples to match the masked real data. The overall structure of MisGAN is illustrated in FIG0.

Note that the data discriminator D_x takes as input the masked samples as if the data are fully-observed. This allows us to use any existing architecture designed for complete data to construct the data discriminator. There is no need to develop customized neural network modules for dealing with missing data. For example, D_x can be a standard convolutional network for image applications.

Note that the masks are binary-valued. Since discrete data generating processes have zero gradient almost everywhere, to carry out gradient-based training for GANs, we relax the output of the mask generator G_m from {0,1}^n to [0,1]^n. We employ a sigmoid activation σ_λ(x) = 1/(1 + exp(−x/λ)) with a low temperature 0 < λ < 1 to encourage saturation and make the output closer to zero or one.

Finally, we note that the discriminator D_x in MisGAN is unaware of which entries are missing in the masked input samples, and does not even need to know which value τ is used for masking. In the next section, we present a theoretical analysis providing support for the idea that this type of masking process does not necessarily make it more difficult to recover the complete data distribution. The experiments provide compelling empirical evidence for the effectiveness of the proposed framework.
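Before moving on to the analysis, the alternating optimization described above can be sketched roughly as follows, assuming PyTorch-style modules and reusing the mask_data helper from the previous sketch. The discriminator updates (a few critic steps with a gradient penalty to enforce the Lipschitz constraint) are analogous and omitted, and all names and shapes are illustrative rather than taken from the released code:

```python
import torch

ALPHA, TAU, LAMBDA = 0.2, 0.0, 0.66

def mask_activation(logits, lam=LAMBDA):
    # Low-temperature sigmoid: relaxes binary masks to [0, 1]^n while
    # pushing the outputs toward 0 or 1.
    return torch.sigmoid(logits / lam)

def generator_step(G_x, G_m, D_x, D_m, real_x, real_m, opt_gx, opt_gm, code_dim=128):
    n = real_x.size(0)
    z = torch.randn(n, code_dim)    # noise for the data generator
    eps = torch.randn(n, code_dim)  # noise for the mask generator

    # Update G_x: minimize L_x with the mask generator held fixed.
    fake_x = G_x(z)
    fake_m = mask_activation(G_m(eps)).detach()
    loss_x = D_x(mask_data(real_x, real_m, TAU)).mean() \
             - D_x(mask_data(fake_x, fake_m, TAU)).mean()
    opt_gx.zero_grad()
    loss_x.backward()
    opt_gx.step()

    # Update G_m: minimize L_m + ALPHA * L_x with the data generator held fixed.
    fake_x = G_x(z).detach()
    fake_m = mask_activation(G_m(eps))
    loss_m = D_m(real_m).mean() - D_m(fake_m).mean()
    loss_x2 = D_x(mask_data(real_x, real_m, TAU)).mean() \
              - D_x(mask_data(fake_x, fake_m, TAU)).mean()
    opt_gm.zero_grad()
    (loss_m + ALPHA * loss_x2).backward()
    opt_gm.step()
```

Because the real-data terms are constants with respect to the generators, minimizing these losses is equivalent to maximizing the critic scores of the corresponding masked fake samples; the discriminators themselves are updated in separate critic steps.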
In Section 2 we described how the discriminator D_x in MisGAN takes the masked samples as input without knowing what value τ is used or which entries in the input vector are missing. In this section, we discuss the following two important questions: i) Does the choice of the filled-in value τ affect the ability to recover the data distribution? ii) Does information about the location of missing values affect the ability to recover the data distribution?

We address these questions in a simplified scenario where each dimension of the data vector takes values from a finite set P. For n-dimensional data, let M = {0,1}^n be the set of all possible masks and I = P^n be the set of all possible data vectors. Also let D_M and D_I be the sets of all possible probability distributions on M and I respectively, whose elements are non-negative and sum to one.

We first discuss the case where the filled-in value τ is chosen from P. Given τ ∈ P and q ∈ D_M, we can construct a left transition matrix T_{q,τ} ∈ R^{I×I} defined below, where the (t, s)-th entry specifies the transition probability from a data vector s ∈ I to an outcome t ∈ I masked by f_τ, which involves all possible masks under which s is converted into t by filling in the indicated missing entries with τ:

T_{q,τ}(t, s) = Σ_{m ∈ M} q(m) 1{t = f_τ(s, m)}.

Let p*_x ∈ D_I be the unknown true data distribution we want to estimate. In the presence of missing data specified by q, the masked samples then follow the distribution p_y = T_{q,τ} p*_x. Without imposing extra application-specific constraints, MisGAN with a fixed mask generator can be viewed as solving the linear system p_y = T_{q,τ} p_x, where p_x ∈ D_I is the unknown data distribution to solve for. Here we assume that p_y and T_{q,τ} are given, as those can be estimated separately from a collection of fully-observed masks and masked samples. Note that a transition matrix preserves the sum of the vectors it is applied to, since 1ᵀ T_{q,τ} = 1ᵀ. For p_x to be a valid distribution vector, we only need the non-negativity constraint, because any solution p_x automatically sums to one. That is, estimating the data generating process in the presence of missing data based on the masking scheme used in MisGAN is equivalent to solving the constrained linear system

T_{q,τ} p_x = p_y   subject to   p_x ≥ 0.

In Theorem 1, we state a key property of the transition matrix T_{q,τ} that leads to the answer to our questions. The proof of Theorem 1 is in Appendix A.

Theorem 1. Given q ∈ D_M, all transition matrices T_{q,τ} with τ ∈ P have the same null space.

Theorem 1 implies that if the solution to the constrained linear system is not unique for a given τ_0 ∈ P, that is, if there exists some non-negative p_x ≠ p*_x with T_{q,τ_0} p_x = T_{q,τ_0} p*_x, then the solution is not unique for any other choice of τ ∈ P either. In other words, we have the following corollary:

Corollary 1. Whether the true data distribution is uniquely recoverable is independent of the choice of the filled-in value τ.

Here we only discuss the case when the probability q(1) of observing all features is zero, where q(1) denotes the scalar entry of q indexed by the all-ones mask 1 ∈ M. Otherwise, the linear system is uniquely solvable, as the transition matrix T_{q,τ} has full rank. With the non-negativity constraint, it is possible that the solution of the constrained linear system is unique when the true data distribution p*_x is sparse. Specifically, if there exist two indices s_1, s_2 ∈ I such that p*_x(s_1) = p*_x(s_2) = 0 and also v(s_1) > 0 and v(s_2) < 0 for all v ∈ Null(T_{q,τ}) \ {0}, then the solution is unique. Sparsity of the data distribution is a reasonable assumption in many situations. For example, natural images are typically considered to lie on a low-dimensional manifold, which means most of the instances in I should have almost zero probability. On the other hand, when the missing rate is high, that is, if the masks in M that have many zeros are more probable, the null space of T_{q,τ} will be larger and therefore it is more likely that the non-negative solution is not unique. Prior work has proposed a sufficient condition on the sparsity of the non-negative solutions to a general underdetermined linear system that guarantees unique optimality.
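The behavior described by Theorem 1 is easy to check numerically on a toy problem. The following sketch (NumPy; all names are ours) enumerates M and I for a small binary example, builds T_{q,τ} for each τ ∈ P, and verifies that the column sums and the rank, and hence the null-space dimension, do not depend on τ:

```python
import itertools
import numpy as np

P = [0, 1]   # finite set of feature values (toy example)
n = 3        # data dimensionality
I = list(itertools.product(P, repeat=n))       # all possible data vectors
M = list(itertools.product([0, 1], repeat=n))  # all possible masks

def transition_matrix(q, tau):
    """T[t, s] = sum over m of q(m) * 1{ f_tau(s, m) = t }."""
    idx = {t: i for i, t in enumerate(I)}
    T = np.zeros((len(I), len(I)))
    for j, s in enumerate(I):
        for m, q_m in zip(M, q):
            t = tuple(s_d if m_d == 1 else tau for s_d, m_d in zip(s, m))
            T[idx[t], j] += q_m
    return T

rng = np.random.default_rng(0)
q = rng.dirichlet(np.ones(len(M)))  # a random mask distribution over M

# Columns of T sum to one, and the rank (hence the null-space dimension) is
# the same for every tau in P; Theorem 1 says the null spaces are identical.
for tau in P:
    T = transition_matrix(q, tau)
    print(tau, np.allclose(T.sum(axis=0), 1.0), np.linalg.matrix_rank(T))
```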
Next we note that in the case of τ ∈ P, an entry with value τ in a masked sample t ∈ I may come either from an observed entry with value τ in the unmasked sample or from an unobserved entry through the masking operation f_τ. One might wonder if this prevents an algorithm from recovering the true distribution when it is otherwise possible to do so. In other words, if we take the location of the missing values into account, would that make the missing data problem less ill-posed? However, this is not the case, as we state in Corollary 2. The proof is in Appendix B, where we discuss the case of τ ∉ P.

Corollary 2. If the linear system T_{q,τ} p_x = T_{q,τ} p*_x does not have a unique non-negative solution, then for this missing data problem, we cannot uniquely recover the true data distribution even if we take the location of the missing values into account.

Note that the analysis in this section characterizes how difficult the missing data problem is, which is independent of the choice of the algorithm that solves it. In practice, it is useful to incorporate application-specific prior knowledge into the model to regularize the problem when it is ill-posed. For example, for modeling natural images, convolutional networks are commonly used to exploit the local structure of the data. In addition, decoder-based deep generative models such as GANs implicitly enforce some sparsity constraints due to the use of low-dimensional latent codes in the generator, which also helps to regularize the problem. Finally, the following theorem justifies the training objective of MisGAN for the missing data problem (see Appendix A for details).

Theorem 2. Given a mask distribution p_φ(m), two distributions p_θ(x) and p_θ′(x) induce the same distribution for f_τ(x, m) if and only if they have the same marginals p_θ(x_obs | m) = p_θ′(x_obs | m) for all masks m with p_φ(m) > 0.

Missing data imputation is an important task when dealing with incomplete data. In this section, we show how to impute missing data according to p(x_mis | x_obs) by equipping MisGAN with an imputer G_i accompanied by a corresponding discriminator D_i. The imputer is a function of the incomplete example (x, m) and a random vector ω drawn from a noise distribution p_ω. It outputs the completed sample with the observed part of x kept intact. To train the imputer-equipped MisGAN, we define the following loss for the imputer in addition to L_m and L_x:

L_i(D_i, G_i, G_x) = E_{(x,m)∼D, ω∼p_ω}[D_i(G_i(x, m, ω))] − E_{z∼p_z}[D_i(G_x(z))].

We jointly learn the data generating process and the imputer according to the following objectives:

min_{G_i} max_{D_i ∈ F_i} L_i(D_i, G_i, G_x),

min_{G_x} max_{D_x ∈ F_x} L_x(D_x, G_x, G_m) + β L_i(D_i, G_i, G_x),

min_{G_m} max_{D_m ∈ F_m} L_m(D_m, G_m) + α L_x(D_x, G_x, G_m),

where we use β = 0.1 in the experiments when optimizing G_x. This encourages the generated complete data to match the distribution of the imputed real data in addition to having the masked generated data match the masked real data. The overall structure for MisGAN imputation is illustrated in Figure 2.

Figure 2: Architecture for MisGAN imputation. The complete data generator G_x and the imputer G_i can be trained jointly with all the components. We can also independently train the imputer G_i without the faded parts if the data generator G_x has been pre-trained.

We can also train a stand-alone imputer using only L_i with a pre-trained data generator G_x. The architecture is as shown in Figure 2 with the faded parts removed. Moreover, it is also possible to train the imputer to target a different missing distribution p_m with a pre-trained data generator G_x alone, without access to the original (incomplete) training data, by replacing the real incomplete examples in L_i with samples from G_x masked according to p_m.

We construct the imputer G_i(x, m, ω) as follows:

G_i(x, m, ω) = x ⊙ m + Ĝ_i(x ⊙ m + ω ⊙ m̄) ⊙ m̄,

where the network Ĝ_i generates the imputed result with the same dimensionality as its input, x ⊙ m + ω ⊙ m̄, and could be implemented by a deep neural network. The masking outside of Ĝ_i ensures that the observed part of x stays the same in the output of the imputer G_i. The similar masking on the input of Ĝ_i, x ⊙ m + ω ⊙ m̄, ensures that the amount of noise injected into Ĝ_i scales with the number of missing dimensions. This is intuitive in the sense that when a data case is almost fully-observed, we expect less variety in p(x_mis | x_obs), and vice versa. Note that the noise ω needs to have the same dimensionality as x.
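A minimal sketch of this imputer as a PyTorch module is shown below; the class and argument names are ours, and net stands in for the fully-connected or U-Net body used in the experiments:

```python
import torch
import torch.nn as nn

class Imputer(nn.Module):
    def __init__(self, net: nn.Module):
        super().__init__()
        self.net = net  # the inner network: maps a full-size vector to a full-size completion

    def forward(self, x, m, omega):
        # Inject noise only into the missing entries, so its amount scales
        # with the number of missing dimensions.
        candidate = self.net(x * m + omega * (1 - m))
        # Keep observed entries intact; fill the missing entries from the candidate.
        return x * m + candidate * (1 - m)
```

Training then proceeds by feeding the imputer's outputs to the imputer discriminator D_i and optimizing L_i together with the other objectives as described above.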
In this section, we first assess various properties of MisGAN on the MNIST dataset: we demonstrate qualitatively how MisGAN behaves under different missing patterns and different architectures. We then conduct an ablation study to justify the construction of MisGAN. Finally, we compare MisGAN with various baseline methods on the missing data imputation task over three datasets under a series of missingness settings.

Data: We evaluate MisGAN on three datasets: MNIST, CIFAR-10 and CelebA. MNIST is a dataset of handwritten digit images of size 28×28 BID5. We use the provided 60,000 training examples for the experiments. CIFAR-10 is a dataset of 32×32 color images from 10 classes BID4. Similarly, we use 50,000 training examples for the experiments. CelebA is a large-scale face attributes dataset BID7 that contains 202,599 face images, where we use the provided aligned and cropped images and resize them to 64×64. For all three datasets, the range of pixel values of each image is rescaled to [0, 1].

Missing data distributions: We consider three types of missing data distribution: i) Square observation: all pixels are missing except for a square occurring at a random location on the image. ii) Dropout: each pixel is independently missing according to a Bernoulli distribution. iii) Variable-size rectangular observation: all pixels are missing except for a rectangular observed region. The width and height of the rectangle are independently drawn from 25% to 75% of the image length uniformly at random, which results in a 75% missing rate on average. In this missing data distribution, each example may have a different number of missing pixels. The highest per-example missing data rate under this mechanism is 93.75%.

Evaluation metric: We use the Fréchet Inception Distance (FID) BID2 to evaluate the quality of the learned generative model. For MNIST, instead of the Inception network trained on ImageNet BID13, we use a basic LeNet model trained on the complete MNIST training set, and then take the 50-dimensional output from the second-to-last fully-connected layer as the features to compute the FID. For CIFAR-10 and CelebA, we follow the procedure described in BID2 to compute the FID using the pretrained Inception-v3 model. When evaluating generative models using the FID, we use the same number of generated samples as the size of the training set.
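For reference, the three missingness mechanisms above can be sampled with a few lines of code under the MCAR assumption. This NumPy sketch uses our own function names, and the exact sampling details of the original experiments may differ:

```python
import numpy as np

def square_observation_mask(size, obs_len, rng):
    """All pixels missing except an obs_len x obs_len square at a random location."""
    m = np.zeros((size, size), dtype=np.float32)
    i, j = rng.integers(0, size - obs_len + 1, size=2)
    m[i:i + obs_len, j:j + obs_len] = 1.0
    return m

def dropout_mask(size, obs_prob, rng):
    """Each pixel independently observed with probability obs_prob."""
    return (rng.random((size, size)) < obs_prob).astype(np.float32)

def variable_rect_mask(size, rng, lo=0.25, hi=0.75):
    """Observed region is a rectangle whose width and height are drawn
    uniformly from 25%-75% of the image length (about 75% missing on average)."""
    w, h = rng.integers(int(lo * size), int(hi * size) + 1, size=2)
    i = rng.integers(0, size - h + 1)
    j = rng.integers(0, size - w + 1)
    m = np.zeros((size, size), dtype=np.float32)
    m[i:i + h, j:j + w] = 1.0
    return m

rng = np.random.default_rng(0)
masks = [square_observation_mask(28, 14, rng),
         dropout_mask(28, 0.5, rng),
         variable_rect_mask(28, rng)]
```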
In this section, we study various properties of MisGAN using the MNIST dataset.

Architectures: We consider two kinds of architecture for MisGAN: convolutional networks and fully-connected networks. We follow the DCGAN architecture BID11 for (de)convolutional generators and discriminators to exploit the local structure of images. We call this model Conv-MisGAN. To demonstrate the performance of MisGAN in the absence of the implicit structural regularization provided by the use of a convolutional network, we construct another MisGAN with only fully-connected layers for both the generators and the discriminators, which we call FC-MisGAN. In the experiments, both Conv-MisGAN and FC-MisGAN are trained using the improved procedure for the Wasserstein GAN with gradient penalty BID1. Throughout, we use τ = 0 for the masking operator and the temperature λ = 0.66 for the mask activation σ_λ(x) described in Section 2.

Baseline: We compare MisGAN to a baseline model that is capable of learning from large-scale incomplete data: the generative convolutional arithmetic circuit (ConvAC) BID14. ConvAC is an expressive mixture model similar to sum-product networks BID10, with a compositional structure similar to deep convolutional networks. Most importantly, ConvAC admits tractable marginalization due to the product form of the base distributions for the mixtures, which makes it readily capable of learning with missing data.

Results: Figures 3 and 4 show the generated data samples as well as the learned mask samples produced by Conv-MisGAN and FC-MisGAN under the square observation and independent dropout missing mechanisms. From these results, we can see that Conv-MisGAN produces visually better samples than FC-MisGAN on this problem. On the other hand, under the same missing rate, independent dropout leads to worse samples than square observations. Samples generated by ConvAC are shown in FIG0 in Appendix G.

We quantitatively evaluate Conv-MisGAN, FC-MisGAN and ConvAC under two missing patterns with missing rates from 10% to 90% with a step of 10%. Figure 5 shows that MisGAN in general outperforms ConvAC, as ConvAC tends to generate samples with aliasing artifacts, as shown in FIG0. It also shows that in the square observation case, Conv-MisGAN and FC-MisGAN have similar performance in terms of their FIDs. However, under independent dropout, the performance of FC-MisGAN degrades significantly as the missing rate increases compared to Conv-MisGAN. This is because independent dropout with a high missing rate makes the problem more challenging, as it induces less overlapping co-occurrence among pixels, which degrades the signal for understanding the overall structure. This is illustrated in Figure 6, where the observed pattern comes from one of four equally probable 14×14 square quadrants with no overlap. Clearly this missing data problem is ill-posed, and we could never uniquely determine the correlation between pixels across different quadrants without additional assumptions. The samples generated by FC-MisGAN exhibit obvious discontinuities across the boundaries of the quadrants, as it does not rely on any prior knowledge about how pixels are correlated. The discontinuity artifact is less severe with Conv-MisGAN since the convolutional layers encourage local smoothness. This shows the importance of incorporating prior knowledge into the model when the problem is highly ill-posed.

Ablation study: We point out that the mask discriminator in MisGAN is important for learning the correct distribution robustly. Figure 7 shows two common failure scenarios that frequently happen with AmbientGAN, which is essentially equivalent to a MisGAN without the mask discriminator. Figure 7 (left) shows a case where AmbientGAN produces perfectly consistent masked outputs, but the learned mask distribution is completely wrong.
Since we use f_{τ=0}(x, m) = x ⊙ m, the roles of x and m become interchangeable when considering only the masked outputs. Even if we rescale the range of pixel values from [0, 1] to [−1, 1] to avoid this situation, AmbientGAN still fails often, as shown in Figure 7 (right). In contrast, MisGAN avoids learning such degenerate solutions due to explicitly modeling the mask distribution.

Missing data imputation: We construct the imputer network G_i defined above using a three-layer fully-connected network with 500 hidden units in the middle layers. FIG5 (left) shows imputation results on different examples, applying novel masks randomly drawn according to the same distribution. FIG5 (right) shows imputation results where each row corresponds to the same incomplete input. It demonstrates that the imputer can produce a variety of different imputed results due to the random noise input to the imputer. We also note that if we modify the objectives to train the imputer together with the data generator from scratch, without the mask generator/discriminator, it fails most of the time for a reason similar to why AmbientGAN fails. The learning problem is highly ill-posed without the agreement on the mask distribution.

In this section, we quantitatively evaluate the performance of MisGAN on three datasets: MNIST, CIFAR-10, and CelebA. We focus on evaluating MisGAN on the missing data imputation task, as it is widely studied and many baseline methods are available.

Baselines: We compare the MisGAN imputer to a range of baseline methods including basic zero/mean imputation, matrix factorization, and the recently proposed Generative Adversarial Imputation Network (GAIN) BID16. GAIN is an imputation model that employs an imputer network to complete the missing data. It is trained adversarially with a discriminator that determines which entries in the completed data were actually observed and which were imputed. It has been shown to outperform many state-of-the-art imputation methods. We impute all of the incomplete examples in the training set and use the FID between the imputed data and the original fully-observed data as the evaluation metric.

Architecture: We use convolutional generators and discriminators for MisGAN in all experiments in this section. For MNIST, we use the same fully-connected imputer network as described in the previous section; for CIFAR-10 and CelebA, we use a five-layer U-Net architecture BID12 for the imputer network G_i in MisGAN.

Results: We compare all the methods under two missing patterns, square observation and independent dropout, with missing rates from 10% to 90%. FIG6 shows that MisGAN consistently outperforms the other methods in all cases, especially under high missing rates. In our experiments, we found GAIN training to be quite unstable for the block missingness. We also observed that there is a "sweet spot" for the number of training epochs when training GAIN: if trained longer, the imputation behavior gradually becomes similar to constant imputation (see Appendix H for details). On the other hand, we find that training MisGAN is more stable than training GAIN across all scenarios in the experiments. The imputation results of MisGAN and GAIN are shown in Appendices E, F, and H.

This work presents and evaluates a highly flexible framework for learning standard GAN data generators in the presence of missing data. Although we only focus on the MCAR case in this work, MisGAN can be easily extended to cases where the output of the data generator is provided to the mask generator. These modifications can capture both MAR and NMAR mechanisms.
The question of learnability requires further investigation as the analysis in Section 3 no longer holds due to dependence between the transition matrix and the data distribution under MAR and NMAR. We have tried this modified architecture in our experiments and it showed similar as to the original MisGAN. This suggests that the extra dependencies may not adversely affect learnability. We leave the formal evaluation of this modified framework for future work. A PROOF OF THEOREM 1 AND THEOREM 2Let P be the finite set of feature values. For the n-dimensional case, let M = {0, 1} n be the set of masks and I = P n be the set of all possible feature vectors. Also let D M be the set of probability distributions on M, which implies m 0 and v∈I m(v) = 1 for all m ∈ M, where m(v) denotes the entry of m indexed by v. Given τ ∈ P and q ∈ D M, define the transformation DISPLAYFORM0 where is the entry-wise multiplication and 1{·} is the indicator function. Given m ∈ M, define an equivalent relation ∼ m on I by v ∼ m u iff v m = u m, and denote by [v] m the equivalence class containing v. Given q ∈ D M, let S q ⊂ M be the support of q, that is, DISPLAYFORM1 Given τ ∈ P and v ∈ I, let M τ,v denote the set of masks consistent with v in the sense that q(m) > 0 and v m = τm, that is, DISPLAYFORM2 Proof. This is clear from the following equation DISPLAYFORM3 which can be obtained from as follows, DISPLAYFORM4 Proposition 2. For any τ ∈ P, q ∈ D M and x ∈ R I, the vector T q,τ x determines the collection of marginals {x ([v] DISPLAYFORM5 Proof. Fix τ ∈ P, q ∈ D M and x ∈ R I. Since v m + τm ∈ [v] m, it suffices to show that we can solve for x ([v] m ) in terms of T q,τ x for m ∈ M τ,v = ∅. We use induction on the size of M τ,v.First consider the base case |M τ,v | = 1. Consider v 0 ∈ I with M τ,v0 = {m 0}. By FORMULA0, DISPLAYFORM6, which proves the base case. Now assume we can solve for x ([v] m ) in terms of T q,τ x for m ∈ S q and v ∈ I with |M τ,v | ≤ k. Consider v 0 ∈ I with |M τ,v0 | = k + 1; if no such v 0 exists, the holds trivially. Let M τ,v0 = {m 0, m 1, . . ., m k}. We need to show that T q,τ x determines x([v 0] m ) for = 0, 1,..., k. By again, DISPLAYFORM7 Let m = k =0 m, which may or may not belong to S q. Note that DISPLAYFORM8 and hence DISPLAYFORM9 Plugging FORMULA0 into FORMULA0 yields DISPLAYFORM10 Note that DISPLAYFORM11 It follows from FORMULA0 and FORMULA0 Theorem 1 is a direct consequence of Proposition 1 and Proposition 2 as the collection of marginals {x ([v] m ): v ∈ I, m ∈ S q } is independent of τ. Therefore, if x 1, x 2 ∈ R I satisfy T q,τ0 x 1 = T q,τ0 x 2 for some τ 0 ∈ P, then T q,τ x 1 = T q,τ x 2 for all τ ∈ P. Theorem 1 is a special case when x 1 = 0.Moreover, Proposition 2 also shows that MisGAN overall learns the distribution p(x obs, m), as x([v] m ) is equivalent to p(x obs |m) and T q,τ x is essentially the distribution of f τ (x, m) under the optimally learned missingness q = p(m). Theorem 2 basically restates Proposition 1 and Proposition 2. This is also true when τ / ∈ P according to Appendix B. Corollary 2 can be shown by augmenting the set of feature values by P = P ∪ {ψ} with a novel symbol ψ / ∈ P. If we choose τ = ψ for the masking operator, whenever we spot a ψ in a masked sample, we know that it corresponds to a missing entry. We can also construct the corresponding transition matrix T q,ψ ∈ R I ×I where I = (P) n given the mask distribution q ∈ D M before. 
In this setting, the generative model for missing data is equivalent to solving the linear system T q,ψ p x = T q,ψ p * x so that p x ∈ R I is non-negative and p x (s) = 0 for all s ∈ I \ I, where the true distribution p * x is given by p * x (s) = p * x (s) for all s ∈ I and zeros elsewhere. Theorem 1 implies that if the solution to original problem FORMULA7 is not unique, the non-negative solution to the augmented linear system with the extra constraint on I \ I with τ = ψ is not unique either. Root mean square error (RMSE) is a commonly used metric for evaluating the performance of missing data imputation, which computes the RMSE of the imputed missing values against the ground truth. However, in a complex system, the conditional distribution p(x mis |x obs) is likely to be highly multimodal. It's not guaranteed that the ground truth of the missing values in the incomplete dataset created under the missing completely at random (MCAR) assumption correspond to the global mode of p(x mis |x obs). A good imputation model might produce samples from p(x mis |x obs) associated with a higher density than the ground truth (or from other modes that are similarly probable). In this case, it will lead to a large error in terms of metrics like RMSE as multiple modes might be far away from each other in a complex distribution. Therefore, we instead compute the FID between the distribution of the completed data and the distribution of the originally fully-observed data as our evaluation metric. This provides a practical way to assess how close a model imputes according to p(x mis |x obs) by comparing two groups of samples collectively. As a concrete example, FIG0 compares the two evaluation metrics on MNIST, our distributionbased FID and the ground truth-based RMSE. It shows that the rankings on most of the missing rates are not consistent across the two metrics. In particular, under 90% missing rate, MisGAN is worse than GAIN and matrix factorization in terms of RMSE, but significantly better in terms of FID. FIG0 plots the imputation of the three methods mentioned above. We can clearly see that MisGAN produces the best completion even though its RMSE is much higher than the other two. It's not surprising as the mean of p(x mis |x obs) minimizes the squared error in expectation, even if the mean might have low density. This probably explains why the blurry completion produced by matrix factorization achieve the lowest RMSE. All of the generators and discriminators in Conv-MisGAN follow the architecture used by the DCGAN model BID11 with 128-dimensional latent code. As For the imputer network for MisGAN trained on CIFAR-10 and CelebA, we follow the U-Net implementation of the CycleGAN and pix2pix work 6. In the experiments, we use 5-layer U-Nets for both CIFAR-10 and CelebA.For training Wasserstein GAN with gradient penalty, We use all the default hyperparameters reported in BID1. For all the datasets, MisGAN is trained for 300 epochs. We train MisGAN imputer for 1000 epochs for MNIST and CIFAR-10 as the networks are smaller and 600 epochs for CelebA.For ConvAC, we use the same architecture described in BID14. We train ConvAC for 1000 epochs using Adam optimizer with learning rate 10 G OF CONVAC FIG0 shows the samples generated by ConvAC trained with the square observation missing pattern on MNIST.H MISSING DATA IMPUTATION WITH GAIN FIG0 shows the imputation of GAIN on different epochs during training with the 20×20 square observation missingnss. 
We found that this is a common phenomenon for the square observation missing pattern. To obtain better results for GAIN, we analyze the FIDs during the course of training and use the model that achieves the best FID to compare favorably with MisGAN for the square observation case. For CIFAR-10, we use the results from the 500th epoch; for CelebA, we use the results from the 50th epoch. Otherwise, we train GAIN for 1000 epochs for CIFAR-10 and 300 epochs for CelebA. Our implementation is adapted from the code released by the authors of GAIN.
This paper presents a GAN-based framework for learning complex, high-dimensional data distributions from incomplete data.